Infosec Compliance Now | Virtual | February 25, 2026 | 4 CPE Credits Register Now

Customers
Login
Auditboard's logo

February 12, 2026 8 min read

An executive’s guide to the risks of Large Language Models (LLMs)

Guru Sethupathy

Guru Sethupathy

AI has the potential to deliver substantial value to businesses, driving both innovation and operational efficiency. However, before integrating this technology into your enterprise, organizations must understand the potential risks. Large language models (LLMs) are a recently popularized form of generative AI (GenAI) that generates text in response to user prompts. However, like any advanced technology, LLMs come with inherent risks. The primary risks of using LLMs include hallucinations (generating factually incorrect or fabricated content), bias, data privacy/leakage, toxicity, copyright infringement, and new security vulnerabilities. Let’s explore each risk and how best to mitigate it.

AI hallucinations

LLMs aren’t designed to be fact retrieval engines — they work by predicting the probability of the next word in a sequence. As a result, LLMs may produce outputs that are factually incorrect, nonsensical, or entirely fabricated. Builders and developers can reduce the rate of hallucinations in their LLMs by using specific techniques, but no one has been able to eliminate hallucinations. But organizations can mitigate risk by:

  • Fact-checking outcomes and, wherever possible, asking LLMs for citations.
  • Recognizing that LLMs are more likely to hallucinate in specific contexts, including running numerical calculations, applying logic, and engaging in more complex reasoning. LLMs are also prone to making up information when they lack relevant data.
  • Leveraging techniques such as chain-of-thought prompting to reduce error rates.
  • Read from more targeted datasets and apply additional back-end prompting logic to reduce the likelihood of hallucinations when building LLM products.

Model bias

LLMs are trained on large amounts of text, often scraped from the Internet. This data contains bias that LLMs can learn and propagate. As a result, LLMs can give biased or disparaging responses of lower quality for specific subgroups. You can reduce these risks by:

  • Instructing the LLM, via its system prompt, to be unbiased and not to discriminate.
  • Limiting the scope of your LLM’s outputs to on-topic subjects.
  • Ensuring the data is representative of the target population and adjusting samples as needed when training or providing input to an LLM.
  • Conducting safety checks before deploying any LLMs, and continue to monitor responses to different questions for the LLMs that you choose to adopt.

AI privacy concerns

LLMs can leak or inadvertently disclose personally identifiable information (PII) or other sensitive or confidential details when they are included in an LLM’s original training dataset or entered by the user when they ask a question or prompt the LLM. Avoid these risks by:

  • Proceeding with caution before entering private information into a generative AI tool, and discourage other users from doing the same. When you’re unsure which of your prompt content the LLM will save, request disclosure or privacy context.
  • Curating your datasets, removing any sensitive information you don’t want shared, and conducting safety checks before deployment when building an LLM product.
  • Consider using additional prompting or data-scanning techniques as a second-line check to prevent the inclusion of private data that an LLM might otherwise reveal.

Toxic, harmful, or inappropriate content

LLMs are capable of creating toxic, harmful, violent, obscene, and otherwise inappropriate content because they scrape content from various data sources across the internet. While in a direct search, a user might discount or avoid content from specific sources, LLMs do not necessarily provide their sources upfront. Avoid these concerns by:

  • Taking LLM advice or recommendations with a grain of salt; when you spot unusual responses, request citations and evaluate the quality of the data source.
  • Curating and filtering out toxic or inappropriate content from your training and fine-tuning data, and evaluating the quality of your data sources when building an LLM.
  • Applying output filtering guardrails to prevent them from sharing inappropriate content, and conducting regular safety checks using user logs when deploying LLMs.
  • Continuously monitoring the LLM after deployment to spot and address issues quickly.

LLMs are often trained on copyrighted data and can generate copyrighted material. They can also leverage online materials, such as a person’s tone or voice, to create content that is highly similar to what that person might have generated, in ways that can ultimately be very difficult to differentiate. Reduce these risks by:

  • Checking for watermarks or other indicators that the content was generated by an LLM rather than a human.
  • Curating and filtering toxic or copyrighted content from your training and fine-tuning data, and prompting the LLM to limit its outputs when building or deploying your own LLM.

Security vulnerabilities

LLMs have increased the surface area for security risk, and should be viewed as a core concern for LLM adoption. OWASP (the Open Web Application Security Project) has identified 10 distinct security risks for LLMs that can drive new threats for organizations adopting GenAI. Avoid these threats by:

  • Being judicious about the source of your datasets and whether the author(s) of that data are trustworthy.
  • Curating and filtering suspicious content from your training and fine-tuning data — prompt the LLM to limit its outputs.
  • Using red-teaming strategies and toolkits to probe your model for vulnerabilities; this is similar to how cybersecurity teams scan software for vulnerabilities.
  • Being cautious about LLM agents’ capabilities and applying appropriate safeguards and input sanitization.

Identifying and managing LLM risks for your use cases

The risk factors most relevant for a specific LLM will depend on the data that you use, decisions that you make about your algorithms and processing steps, and your approach to implementation. It is critical — before you build or deploy a model — to identify and mitigate the risks that could harm your organization or your users.

About the authors

Guru Sethupathy

Guru Sethupathy is the VP of AI Governance at AuditBoard. Previously, he was the founder and CEO of FairNow (now part of AuditBoard), a governance platform that simplifies AI governance through automation and intelligent and precise compliance guidance, helping customers manage risks and build trust and adoption in their AI investments. Prior to founding FairNow, Guru served as an SVP at Capital One, where he led teams in building AI technologies and solutions while managing risk and governance.


You may also like to read

featured image
InfoSec

What is a model card report? Your guide to responsible AI

LEARN MORE
featured image
InfoSec

GRC survival guide: Thriving in the era of AI SaaS

LEARN MORE
featured image
InfoSec

AI usage policy: Defining acceptable AI use by employees

LEARN MORE

Discover why industry leaders choose AuditBoard

SCHEDULE A DEMO
upward trending chart
confident business professional