How AI is Mishandled to Become a Cybersecurity Risk

The rapid evolution of artificial intelligence algorithms has turned this technology into an element of critical business processes. The caveat is that there is a lack of transparency in the design and practical applications of these algorithms, so the same techniques can be repurposed for malicious ends.


Whereas infosec specialists use AI for benign purposes, threat actors mishandle it to orchestrate real-world attacks. At this point, it is hard to say for sure who is winning. The current state of the balance between offense and defense via machine learning algorithms has yet to be evaluated.

There is also a security principles gap regarding the design, implementation and management of AI solutions. Completely new tools are required to secure AI-based processes and thereby mitigate serious security risks.

Increasingly intelligent autonomous devices

The global race to develop advanced AI algorithms is accelerating non-stop. The goal is to create a system in which AI can solve complex problems (e.g., decision-making, visual recognition and speech recognition) and flexibly adapt to circumstances. These will be self-contained machines that can think without human assistance. This is a somewhat distant future of AI, however.

At this point, AI algorithms cover limited areas yet already demonstrate certain advantages over humans, saving analysis time and generating predictions. The four main vectors of AI development are speech and language processing, computer vision, pattern recognition, and reasoning and optimization.

Huge investments are flowing into AI research and development along with machine learning methods. Global AI spending in 2019 amounted to $37.5 billion, and it is predicted to reach a whopping $97.9 billion by 2023. China and the U.S. dominate the worldwide funding of AI development.

Transportation, manufacturing, finance, commerce, health care, big-data processing, robotics, analytics and many more sectors will be optimized in the next five to 10 years with the ubiquitous adoption of AI technologies and workflows.

Unstable balance: The use of AI in offense and defense

With reinforcement learning in its toolkit, AI can play into attackers’ hands by paving the way for all-new and highly effective attack vectors. For instance, the AlphaGo algorithm has given rise to fundamentally new tactics and strategies in the famous Chinese board game Go. If mishandled, such mechanisms can lead to disruptive consequences.

Let us list the main advantages of the first generation of offensive tools based on AI:

  • Speed and scale: Automation makes incursions faster, expands the attack surface and lowers the bar for less experienced offenders.
  • Accuracy: Deep learning analytics make an attack highly focused by determining how exactly the target system’s defenses are built.
  • Stealth: Some AI algorithms leveraged in the offense can fly under the radar of security controls, allowing perpetrators to orchestrate evasive attacks.

At the same time, AI can help infosec experts to identify and mitigate risks and threats, predict attack vectors and stay one step ahead of criminals. Furthermore, it is worth keeping in mind that a human being is behind any AI algorithm and its practical application vectors.

Attacking vs. defending systems using AI

Let us try to outline the balance between attacking and defending via AI. The main stages of an AI-based attack are as follows:

  • Reconnaissance: Learning from social media profiles, analyzing communication style. By collecting this data, AI creates an alias of a trusted individual.
  • Intrusion: Spear-phishing emails based on previously harvested information, vulnerability detection through autonomous scanning, and perimeter testing (fuzzing). AI quickly pinpoints weak spots in the target’s security posture.
  • Privilege escalation: AI creates a list of keywords based on data from the infected device and generates potential username-password combinations to hack credentials in mere seconds.
  • Lateral movement: Autonomous harvesting of target credentials and records, calculation of the optimal path to the goal, and operation without a Command and Control (C2) communication channel; this dramatically increases the speed at which the malware acts.
  • Completion and result: AI can identify sensitive data based on context and use it against the victim. Nothing but the necessary information is extracted, allowing the attacker to reduce traffic and make the malware harder to detect.
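The privilege-escalation step above can be sketched in a few lines. This is a toy illustration, not a real attack tool: the keyword list and mutation rules are assumptions, and real credential-guessing tooling is far more sophisticated. It simply shows how harvested keywords can be expanded into candidate passwords.

```python
# Illustrative sketch: expand keywords harvested from a compromised device
# into candidate passwords using common mutations (suffixes, capitalization).
from itertools import product

def candidate_passwords(keywords, suffixes=("", "1", "123", "2023", "!")):
    """Combine harvested keywords with a few common mutations."""
    candidates = set()
    for word, suffix in product(keywords, suffixes):
        candidates.add(word + suffix)              # e.g. "acme123"
        candidates.add(word.capitalize() + suffix) # e.g. "Acme123"
    return sorted(candidates)

# Keywords supposedly scraped from documents on the infected host
words = ["acme", "falcon"]
guesses = candidate_passwords(words)
print(len(guesses))  # 20 candidate credentials from just two keywords
```

Even this naive generator turns two keywords into 20 guesses; an ML-driven variant that ranks candidates by likelihood is what makes the "mere seconds" claim plausible.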

Now, let us provide an example of how AI can be leveraged in defense:

  • Security enhancements: Identifying and fixing software and hardware vulnerabilities, code upgrades using AI to protect potential entry points.
  • Dynamic threat detection: Active protection capable of detecting new and potential threats (as opposed to traditional defenses relying on historical patterns and malware signatures); autonomous detection of malware, network anomalies, spam, bot sessions; next-generation antivirus.
  • Proactive protection: Creating “honeypots” and other conditions to make it problematic for intruders to operate.
  • Fast response and recovery: Automatic real-time incident response and threat containment; advanced analytics facilitating human efforts in investigation and response; quick recovery from a virus attack.
  • Competence: The use of pattern recognition and analytical capabilities of AI in forensics.
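The "dynamic threat detection" idea above amounts to flagging activity that deviates from a learned baseline rather than matching known signatures. The following is a minimal sketch under stated assumptions: a z-score detector over request rates, with the metric and threshold chosen purely for illustration (production systems use far richer models).

```python
# Toy anomaly detector: learn a baseline from "normal" traffic counts,
# then flag values that deviate too far from it (no signatures involved).
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a simple baseline (mean, standard deviation) from normal traffic."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Requests per minute observed during normal operation
baseline = fit_baseline([100, 110, 95, 105, 98, 102])
print(is_anomalous(104, baseline))  # typical load: False
print(is_anomalous(900, baseline))  # sudden spike, e.g. a bot session: True
```

The key property, as the article notes, is that the spike to 900 is flagged without any prior signature for that specific attack.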

The expanding range of attack vectors is only one of the current problems related to AI. Attackers can also turn AI algorithms against their owners, tampering with code or training data so that a model misbehaves in ways its operators never intended.

AI also plays a significant role in creating deepfakes. Images, audio and video fraudulently processed with AI algorithms can wreak informational havoc, making it difficult to distinguish truth from lies.

In summary: What security solutions are required?

To summarize, here are the main challenges and systemic risks associated with AI technology, as well as the possible solutions:

The current evolution of security tools: The infosec community needs to focus on AI-based defense tools. We must understand that there will be an incessant battle between the evolution of AI attack models and AI defenses. Enhancing defenses will push attack methods forward, so this cyber-arms race should be kept within reasonable bounds. Coordinated action by all members of the ecosystem will be crucial to eliminating risks.

Operations security (OPSEC): A security breach or AI failure in one part of the ecosystem could potentially affect its other components. Cooperative approaches to operations security will be required to ensure that the ecosystem is resilient to the escalating AI threat. Information sharing among participants will play a crucial role in activities such as detecting threats in AI algorithms.

Building defense capabilities: The evolution of AI can turn some parts of the ecosystem into low-hanging fruit for attackers. Unless cooperative action is taken to build a collective AI defense, the entire system’s stability could be undermined. It is important to encourage the development of defensive technologies at the nation-state level. AI skills, education, and communication will be essential.

Secure algorithms: As industries become increasingly dependent on machine learning technology, it is critical to ensure its integrity and keep AI algorithms unbiased. At this point, approaches to concepts such as ethics, competitiveness, and code-readability of AI algorithms have not yet been fully developed.

Algorithm developers can be held liable for catastrophic errors in decisions made by AI. Consequently, it is necessary to come up with secure AI development principles and standards that are accepted not only in the academic environment and among developers, but also at the highest international level.

These principles should include secure design (tamper-proof and readable code), operational management (traceability and rigid version control) and incident management (developer responsibility for maintaining integrity).

David Balaban is a computer security researcher with over 17 years of experience in malware analysis and antivirus software evaluation. He runs projects that present expert opinions on contemporary information security matters, including social engineering, malware, penetration testing, threat intelligence, online privacy, and white hat hacking. Mr. Balaban has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.

The post How AI is Mishandled to Become a Cybersecurity Risk appeared first on eWEEK.
