An Enterprise Approach to Implementing GenAI

It’s been almost two years since ChatGPT exploded onto the scene, bringing generative AI into the mainstream. Since then, enterprises across industries have realized tremendous productivity improvements thanks to GenAI’s ability to produce text, images, video, and sound using the large language models (LLMs) on which these tools are trained.

The value proposition of GenAI adoption at scale is undisputed, and it will only grow stronger as LLMs get larger and smarter. However, businesses must balance these benefits against serious potential security, compliance, and privacy risks. As organizations integrate these solutions into their workflows, security teams must develop a strategic approach to mitigating those risks.

Some of the risks inherent in GenAI tools

A standard rule applies to all AI, generative or otherwise: garbage in, garbage out. One of the most significant risks is that GenAI tools are often trained on the fly, on data of widely varying provenance and quality.

Sometimes, threat actors train LLMs on malicious scripts or data that can then be used to generate malware variations or advanced polymorphic and metamorphic malware, including ransomware. New malware variations are hard to detect because no known indicators of compromise (IOCs) or tactics, techniques, and procedures (TTPs) exist for them. This widens the potential attacker pool to script kiddies and hacktivists without technical skills and increases the velocity at which vulnerabilities and zero-days are exploited.

Because GenAI tools are trained on publicly available data, much of which is someone's intellectual property, enterprises that use their output without proper legal review could inadvertently use this data in unauthorized ways. This includes generating text, images, videos, and source code that infringe copyright. These tools have already been the subject of numerous lawsuits, including several brought by celebrities.

Unlike other technologies, GenAI tools are often not subject to baseline internal security controls and, as a result, are rarely penetration tested. This lack of vetting carries risks: the tools can be tricked into disclosing sensitive data such as intellectual property (IP) and personally identifiable information (PII), and they open the door to unauthorized internal over-sharing and external disclosure of sensitive data. These tools are also not trained to comply with local, state, federal, and international privacy regulations, which can lead to reputational damage and fines.

Finally, some GenAI risks come down to human error, especially as threat actors grow more sophisticated in their cyber attacks. AI-generated phishing emails and malicious content can look remarkably believable, leading employees to click dangerous links. There is a reason people have been tricked into wiring money to threat actors on the strength of AI-generated phone calls: sometimes, seeing (or hearing) is believing.

Three areas of GenAI risk mitigation 

GenAI carries numerous complex risks to security, compliance, and privacy. The good news is that by focusing on the three key areas below, most enterprises can set up suitable systems and processes to reduce them:

  1. Create (and integrate) strong security controls: To separate the malicious use of GenAI models from the good they can do, enterprises must establish controls that preserve the confidentiality and integrity of customer and employee data. These controls must pass regulatory muster across the various international and state data privacy laws. Controls such as data loss prevention (DLP) let you scrape, block, or obfuscate data where necessary, at the endpoint, in the network, and in the other domains where you operate (see the first sketch after this list). Businesses must integrate these controls into the design and development of LLMs and into the continuous assessment of systems.
  2. Develop IOCs and TTPs: Enterprises must create IOCs and TTPs to detect and block misuse of GenAI tools for cyber attacks, identity theft, and business fraud. Security controls within the LLMs can prevent or block training data sets that contain malware or malicious scripts. Where signatures don't or can't exist, businesses should deploy unsupervised, machine learning-enabled, behavior-based dynamic malware detection (see the second sketch after this list).
  3. Drive (human) security training and awareness: We can't blame the machine for all of GenAI's woes. That's why regular security awareness training is critical. All employees using GenAI should know how to verify the information coming out of these models, especially if it looks suspicious. For a risk like phishing, for instance, you can rely on training for users to detect, flag, and block these emails. Consider gamification and other modern training techniques that reward employees for spotting and reporting GenAI cyber attacks instead of penalizing those who fall victim to them.
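
To make item 1 concrete, here is a minimal sketch of a DLP-style redaction step placed in front of an outbound LLM call. The regex patterns and the `redact` helper are illustrative assumptions, not a production DLP ruleset; a real deployment would sit at the endpoint or network layer and use far richer detectors.

```python
import re

# Illustrative patterns only (an assumption for this sketch); a real DLP
# engine would use richer detectors such as checksum validation for card
# numbers and named-entity recognition for names.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Obfuscate likely sensitive data before a prompt leaves the enterprise."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@example.com about SSN 123-45-6789."
    print(redact(raw))  # Email [EMAIL REDACTED] about SSN [SSN REDACTED].
```

The same check can run on model output before it reaches the user, covering both the over-sharing and external-disclosure directions described above.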
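
Item 2's fallback, behavior-based detection where no IOCs exist yet, can be sketched with an unsupervised anomaly detector. The feature set and values below are assumptions for illustration; in practice the vectors would come from EDR or network telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-process features (an assumption for this sketch):
# files written per minute, outbound connections per minute,
# CPU percentage, child processes spawned.
baseline = np.array([
    [2, 1, 5, 0],
    [3, 0, 7, 1],
    [1, 2, 4, 0],
    [2, 1, 6, 1],
    [4, 1, 8, 0],
])

# Fit on known-good behavior only; no malware signatures are involved.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# A ransomware-like burst: mass file writes, beaconing, process spawning.
suspect = np.array([[250, 40, 95, 30]])
print(model.predict(suspect))  # [-1] marks the process as anomalous
```

Because the model learns what normal looks like rather than what known malware looks like, it can flag polymorphic variants for which no IOC yet exists.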

GenAI capabilities carry untold benefits for enterprises that use them smartly, safely, and securely. By balancing the risks against the rewards, everyone can get the best value proposition and return on investment from these tools.