Automation and AI as a Force Multiplier for Offensive Operations

Some newer terms are getting a lot of buzz lately, like adversarial AI: the use of AI and machine learning techniques to deceive, evade, or manipulate the AI-based defenses that organizations already have in play. By disrupting those defensive models, attackers can mount campaigns that are faster, stealthier, and more targeted than anything we have seen before, circumventing traditional security mechanisms along the way. To learn more, see the SANS 2024 Top Attacks and Threats Report.
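
To make the idea concrete, here is a minimal sketch of an evasion attack against an ML-based detector. Everything in it is hypothetical and invented for illustration (the linear model, its weights, the feature vector, and the perturbation budget); real adversarial attacks target far more complex models, but the principle is the same: nudge input features in the direction that flips the model's verdict.

```python
# Minimal, hypothetical sketch of an adversarial-evasion (FGSM-style) attack
# against a toy ML-based "malware detector". Model and features are invented.
import numpy as np

# Hypothetical linear detector: score = sigmoid(w . x + b)
w = np.array([2.0, -1.0, 3.0, 0.5])   # assumed learned feature weights
b = -1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detect(x):
    return sigmoid(np.dot(w, x) + b)   # probability the sample is malicious

x = np.array([1.2, 0.3, 0.9, 0.4])     # feature vector of a malicious sample
print(f"original score: {detect(x):.3f}")   # ~0.98, flagged as malicious

# For a linear model the gradient of the score w.r.t. the input is
# proportional to w, so stepping against sign(w) lowers the score --
# the core idea of the fast gradient sign method.
epsilon = 0.8                           # perturbation budget (assumed)
x_adv = x - epsilon * np.sign(w)
print(f"evasive score:  {detect(x_adv):.3f}")   # ~0.23, under a 0.5 threshold
```

In practice the attacker also has to keep the perturbed artifact functional, which is what makes real evasion harder than this toy example suggests.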

Although the terminology may be new, using machines to find vulnerabilities in hardware and software, and then attack other machines based on those findings, is not a new concept. DARPA's 2016 Cyber Grand Challenge asked researchers and universities to do just that: build systems that could identify vulnerabilities and automatically patch or weaponize them. Exciting as the challenge was for its time, it still required skilled humans to vet and understand the nuances of the unique bugs that were uncovered.
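
As a rough illustration of what machine-driven vulnerability discovery looks like at its simplest, the sketch below runs a naive random fuzzer against a deliberately buggy parser. The target function and its bug are invented for this example; Cyber Grand Challenge-class systems layered coverage guidance, crash triage, and automatic patch or exploit generation on top of this basic loop.

```python
# Minimal sketch of automated vulnerability discovery: mutate inputs, feed
# them to a target, record crashes. The buggy parse_record() is hypothetical.
import random

def parse_record(data: bytes):
    """Hypothetical target that reads past the buffer on short inputs."""
    length = data[0]                 # declared payload length
    payload = data[1:1 + length]
    checksum = data[1 + length]      # IndexError when the buffer is too short
    return payload, checksum

def fuzz(target, rounds=10_000):
    crashes = []
    for _ in range(rounds):
        size = random.randint(1, 8)
        data = bytes(random.randint(0, 255) for _ in range(size))
        try:
            target(data)
        except IndexError:
            crashes.append(data)     # candidate vulnerability, queued for triage
    return crashes

found = fuzz(parse_record)
print(f"found {len(found)} crashing inputs")
if found:
    print(f"example: {found[0]!r}")
```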

But the biggest threat in this scenario is how little time it now takes to discover and then weaponize a vulnerability. If enterprising attackers can locate a flaw and weaponize it faster than the patch cycle allows, they have met their objective. For defenders, this is a real threat that will prove difficult to counter, because the efficacy of adversarial AI will only continue to improve.

This raises some real-world scenarios we need to consider, such as the prospect of an AI arms race. Will defensive AI be able to keep pace with the volume of AI-driven attacks now flooding the environment thanks to advances in machine learning? Right now, adversarial AI appears to have the upper hand. What's alarming is that attackers know many organizations cannot defend against an AI-based attack, lacking both the tooling and the skill sets required to do the job, and they will seek to exploit those weaknesses to their benefit.

This advantage has fueled the rise of cybercriminal groups and nation-states that are leveraging AI to poke holes in our current network defenses, most worryingly those around critical infrastructure such as power grids. And the problem won't slow down anytime soon. While US-based companies struggle to find skilled employees with extensive knowledge of AI and ML, nation-states and criminal organizations are actively recruiting people with the same skills to work for the dark side. In its latest year-end Global Threat Intelligence Report, BlackBerry reported a soaring 70% uptick in unique malware files discovered, which equates to a staggering 5.2 novel malware samples per second.

These figures underscore the earlier point: our current defensive controls cannot withstand the onslaught of new AI-assisted attacks being orchestrated. Of the attack samples analyzed, threat actors were observed targeting the finance and healthcare industries most heavily during this reporting period.

Organizational Risks of Generative AI

After so much discussion of how attackers are using AI to infiltrate existing defenses, the logical response is to use AI to make those defenses stronger. Many organizations are already focused on leveraging advances in this technology not only to strengthen security but also to augment and automate business processes, gaining speed, efficiency, cost savings, and other business and customer benefits.

The advantages of AI are numerous: it can improve decision-making, reduce human error, produce higher-quality output, spur innovation, drive service and product improvements, and deliver the kind of speed and efficiency that genuinely boosts productivity and profitability. But innovative technology is never without risk, and where AI is concerned, the risks run the gamut from accuracy and accountability to intellectual property and legal exposure.

Understanding how AI is used in your environment helps you proactively identify some of the potential risks involved. Most employees are well versed in protecting sensitive data, but what they may not realize is that turning to AI tools to help solve problems often exposes them to potential data leaks. Fortunately, some of the same products and services that monitor and detect incoming threats to your network can also detect inappropriate or unauthorized use of AI technologies that could send sensitive data out of your organization.
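
As a simple illustration of that detection idea, the sketch below scans an egress log for requests to known generative-AI endpoints. The log format, the helper function, and the domain watchlist are all assumptions made for this example; in production, this kind of check typically lives in a DLP tool or secure web gateway rather than a standalone script.

```python
# Minimal sketch of spotting unsanctioned generative-AI use in egress logs.
# Log format and watchlist are assumed for illustration only.
from urllib.parse import urlparse

# Hypothetical watchlist of generative-AI service domains (extend per policy).
GENAI_DOMAINS = {"api.openai.com", "chat.openai.com",
                 "api.anthropic.com", "claude.ai", "gemini.google.com"}

def flag_genai_requests(log_lines):
    """Yield (user, domain) for outbound requests to watched AI services.

    Assumes a simple 'user,url' log line, purely for illustration.
    """
    for line in log_lines:
        user, url = line.strip().split(",", 1)
        host = urlparse(url).hostname or ""
        if host in GENAI_DOMAINS:
            yield user, host

sample_log = [
    "alice,https://api.openai.com/v1/chat/completions",
    "bob,https://intranet.example.com/wiki",
]
for user, host in flag_genai_requests(sample_log):
    print(f"review: {user} sent data to {host}")   # candidate for a DLP alert
```

A flagged request is only a starting point for review; whether it represents a policy violation depends on what data was sent and which AI tools the organization has sanctioned.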

Although AI is not new, most organizations are only on the threshold of realizing all the ways they can take advantage of the technology's benefits without introducing unnecessary risk. At the very least, these questions will likely fuel many important, and sometimes polarizing, debates in the years to come, so organizations should begin preparing now if they haven't already.