While artificial intelligence has existed since its first program, Logic Theorist, was developed in 1956, AI applications truly went “mainstream” in 2022 with the release of ChatGPT, OpenAI’s chatbot that introduced everyday users to the ingenuity and benefits of generative AI. With the barriers to entry for technology providers effectively lowered, the impact has been widespread. Today, countless AI-powered solutions claim they can help you streamline and optimize processes within your information security function. However, AI is a double-edged sword: for every potential benefit it can provide to your business, it also brings new risks.
The Benefits and Risks of AI for InfoSec Teams
There are several areas where businesses are successfully leveraging AI and ML in their security and compliance programs. In particular, generative AI solutions and ML models have shown the most promise in driving efficiencies and optimizations for information security teams. We describe these in more detail below:
Generative AI solutions, which suggest content completions as a user types in the interface, are being used to automate writing-heavy activities. The following are examples of areas where GenAI can be helpful in InfoSec programs (a brief sketch follows the list):
- Compliance activities, such as mapping frameworks and writing control requirements.
- Assurance activities, such as writing and mapping questions, controls, and evidence.
- Risk management activities, such as mapping risks, threats, and controls.
- Governance activities, such as narrative and report writing.
- Issue/exception creation activities used across GRC functions.
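To make the mapping use case concrete, here is a minimal sketch of how a team might ask an LLM to suggest a control-to-framework mapping. It assumes the OpenAI Python client and an API key in the environment; the model name, prompt wording, and the suggest_framework_mapping helper are illustrative, not a prescribed approach.

```python
# Illustrative sketch: asking an LLM to suggest a control-to-framework mapping.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name, prompt wording, and helper function
# are hypothetical, not a prescribed approach.
from openai import OpenAI

client = OpenAI()

def suggest_framework_mapping(control_text: str, framework: str) -> str:
    """Ask the model which requirements of `framework` a control may satisfy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {
                "role": "system",
                "content": "You are a compliance analyst. Map internal controls "
                           "to framework requirements and explain your reasoning.",
            },
            {
                "role": "user",
                "content": f"Control: {control_text}\nFramework: {framework}\n"
                           "List the most relevant requirement IDs, each with a "
                           "one-line rationale.",
            },
        ],
    )
    return response.choices[0].message.content

print(suggest_framework_mapping(
    "All production databases are encrypted at rest using AES-256.",
    "ISO/IEC 27001:2022 Annex A",
))
```

Treat output like this as a draft for a human reviewer to confirm, not as the mapping of record.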
Machine Learning models, which use algorithms trained on available data to emulate logical decision-making, are being used for activities such as intrusion detection and flagging malicious behavior. The following are examples of areas where these models can be helpful in InfoSec programs (a brief sketch follows the list):
- Email, spam, and phishing monitoring
- Detection engineering
- Endpoint-based malware detection
- Conditional access policies
- Code analysis/suggestions
- Suggesting mapping between controls and framework requirements
- Uncovering duplicate issues in your GRC environment
- Audit or assessment evidence re-use
- Anomaly detection in controls testing
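As an illustration of the anomaly-detection use case above, here is a minimal sketch using scikit-learn’s IsolationForest to flag an unusual control-test result for human review. The feature set and numbers are invented stand-ins for whatever your GRC tooling actually exports.

```python
# Illustrative sketch: flagging anomalous control-test results with an
# unsupervised model. Assumes scikit-learn; the features and thresholds
# are hypothetical stand-ins for real GRC data.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [tests_run, failures, mean_evidence_age_days] per control, per month.
history = np.array([
    [30, 1, 12],
    [28, 0, 10],
    [31, 2, 15],
    [29, 1, 11],
    [30, 0, 13],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(history)

# A new observation with an unusual failure count and stale evidence.
new_obs = np.array([[30, 9, 90]])
if model.predict(new_obs)[0] == -1:  # -1 means the model flags an outlier
    print("Anomaly: route this control to a human reviewer.")
```

An unsupervised model like this needs no labeled failures, which suits controls data where genuine anomalies are rare.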
There is increasing evidence that leveraging AI and ML solutions in these capacities can yield positive results for InfoSec teams. AI has been widely described as a “force multiplier” and “technology catalyst” with the potential to help information security teams address challenges like rising cyberattacks, new and evolving compliance requirements, persistent skills gaps, and talent retention difficulties. A recent McKinsey & Company report identified 63 generative AI use cases spanning 16 business functions, predicted to generate $2.6 trillion to $4.4 trillion in annual economic value across industries (notably, software engineering for corporate IT was one of four functions where AI was identified as driving the most value).
Conversely, AI’s biggest threats to the business involve compromised intellectual property, as a string of recent high-profile data leaks has demonstrated, and false positives and negatives that can disrupt business operations.
Today’s InfoSec teams must therefore approach AI and ML responsibly, using them to drive efficiencies and optimizations in their programs without exposing the business to additional risk.
Third-Party Risk: Information Leakage
An important caveat is that most companies are not planning to build their own AI models. Businesses are more likely to leverage free or commercial licenses for AI solutions like ChatGPT, or to procure products with AI as an embedded feature. In many cases, businesses may inadvertently provide the AI provider with proprietary and sensitive information, effectively exposing themselves to the risk that the vendor leaks, misuses, or abuses their data.
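One common, if partial, safeguard is to scrub obviously sensitive tokens from prompts before they leave your environment. The sketch below is a naive illustration of that idea; the patterns are hypothetical examples, not a complete data-loss-prevention solution.

```python
# Illustrative sketch: a naive pre-send scrubber that redacts obvious
# sensitive tokens before a prompt is sent to an AI vendor. The patterns
# are hypothetical examples and deliberately incomplete.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),                  # card-like numbers
    (re.compile(r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def scrub(prompt: str) -> str:
    """Apply each redaction pattern before the prompt leaves the environment."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(scrub("Contact jane.doe@example.com, api_key: sk-12345, card 4111 1111 1111 1111"))
```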
Weighing the Benefits Against the Risks
If you are a CISO reading this, you are likely bombarded on a regular basis by vendors trying to sell you AI-powered solutions. This raises the question: how can InfoSec leaders responsibly evaluate these options? On the one hand, CISOs cannot afford to reject AI completely; there are far too many obvious benefits that AI can bring to your team and function.
On the other hand, there are risks that come with adopting any technology solution: loss or disclosure of your business’s intellectual property, failure to meet your ROI within the anticipated time, and the impact of false positives and false negatives. In short, purchasing an AI solution always involves tradeoffs. To decide whether a tradeoff makes sense for your business, you must weigh all the potential benefits, risks, and “what can go wrongs” of integrating an AI solution into your security processes.
Only after performing such a risk assessment can the InfoSec team make an informed decision — to purchase or not to purchase — based on their business’s risk appetite.
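A back-of-the-envelope expected-value comparison can make that tradeoff concrete. In the sketch below, every dollar figure and probability is invented for illustration; substitute your own estimates and your business’s risk appetite.

```python
# Illustrative sketch: a back-of-the-envelope expected-value comparison for
# an AI purchase decision. All figures and probabilities are invented for
# the example; substitute your own estimates.
annual_benefit = 400_000  # estimated analyst hours saved, in dollars
license_cost = 120_000    # annual subscription

risks = {                 # risk -> (annual likelihood, impact in dollars)
    "vendor data leak": (0.02, 2_000_000),
    "false negatives miss real incidents": (0.05, 500_000),
    "ROI shortfall (benefit overestimated)": (0.30, 100_000),
}

expected_loss = sum(p * impact for p, impact in risks.values())
net = annual_benefit - license_cost - expected_loss

print(f"Expected annual loss from risks: ${expected_loss:,.0f}")
print(f"Risk-adjusted net benefit:       ${net:,.0f}")
# Purchase only if the net benefit clears your hurdle rate and each
# individual risk stays within the business's stated risk appetite.
```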
Download your copy of Harnessing AI and ML to Optimize Your Security Compliance Program: Balancing Risks and Benefits to explore how businesses are successfully using generative AI (GenAI) and machine learning (ML) to augment the capabilities of their teams, as well as best practices for evaluating potential vendors to mitigate their risks.