

February 5, 2026 · 8 min read

AI usage policy: Defining acceptable AI use by employees

Guru Sethupathy

The majority of technology workers use AI at work, according to a recent Gallup Workforce survey of more than 22,000 individuals. Workers use AI primarily to save time, increase productivity, and refocus on the most important and enjoyable work tasks.

While employee-originated AI use cases can be high-value, there are real risks to adopting new AI capabilities in the workplace. These risks can be performance-related, reputational, or legal/regulatory. Protecting your business from these risks requires stringent policies and procedures governing employees' use of AI.


Unvetted AI adoption creates risks for organizations

Not all AI is created equal. And AI that isn’t properly vetted or sanctioned can pose serious risks to businesses. Examples of unsafe or risky AI usage include:

  • Data Privacy: An employee enters confidential company information into a GenAI tool, unaware that the foundation model provider now has access to it to train its models. (A minimal guardrail against this risk is sketched after this list.)
  • Poor Performance: A legal associate asks GenAI to cite prior court cases and uses them in a brief without validating their accuracy, not realizing that GenAI had fabricated the citations, a failure mode known as hallucination.
  • Bias Issues: A manager uses AI to draft a performance review for their associate. The AI tool generates results that contain bias, exposing the company to legal risk if an employee sues for discrimination.
  • Regulatory Risk: An employee uses GenAI in a way that is prohibited by key laws like the EU AI Act, opening the company to significant fines (which, for the EU AI Act, can reach up to 7% of a company’s global annual turnover).
  • Security Breach: A tech team adopts an AI agent to automate a series of processes, unaware that the agent has created backdoor access to their internal systems.
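
As a loose illustration of the first risk above, the sketch below shows how a guardrail might screen a prompt for confidential markers before it is sent to an external GenAI tool. The patterns and the `check_prompt` helper are hypothetical assumptions for this example; a real deployment would rely on a vetted data loss prevention (DLP) service rather than a handful of regexes.

```python
import re

# Hypothetical patterns a security team might screen for before a prompt
# leaves the organization. A real deployment would use a vetted DLP service,
# not a handful of regexes.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY|DO NOT DISTRIBUTE)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped numbers
    re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),  # email addresses
]

def check_prompt(prompt: str) -> list[str]:
    """Return the patterns the prompt matched; an empty list means it passes."""
    return [p.pattern for p in CONFIDENTIAL_PATTERNS if p.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this INTERNAL ONLY revenue forecast."
    hits = check_prompt(prompt)
    if hits:
        print(f"Blocked: matched {len(hits)} confidential pattern(s).")
    else:
        print("Prompt cleared for submission.")
```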

What is an acceptable AI use policy?

An AI usage policy is a document that helps employees understand which uses of AI are permitted within the organization. It defines which uses are acceptable and outlines the process for an employee to have a specific use case reviewed for approval. There are multiple benefits to creating an AI usage policy:

  • Clarity: AI use policies ensure employees have a clear understanding of where and how they can adopt GenAI tools in the workplace for business purposes.
  • Reduced risk: When an AI use policy outlines which tools can be adopted (and under what conditions), employees are less likely to violate terms or use AI in unsafe ways. Employees have clear guardrails for acceptable use.
  • Increased AI innovation: Clarity about acceptable and unacceptable uses lets employees experiment safely and lean into adoption they might otherwise have avoided out of concern for corporate policy.

The primary intent of an AI usage policy is to create transparency for the entire organization. When written well, AI usage policies help employees feel comfortable adopting AI safely while reducing uncertainty and risk.

A policy template for acceptable employee AI use

A comprehensive AI usage policy contains the following five sections:

  1. The organization’s risk appetite and AI usage principles: The company should define its risk appetite for AI and articulate usage principles that serve as a north star for employees. Examples might include a focus on data privacy, transparency, fairness, and ethical considerations.
  2. The AI technologies employees can leverage: Organizations should outline which tools employees may use for work-related activities. An organization’s AI or technology group is responsible for reviewing which technologies and capabilities are approved for internal use. In many cases, an organization will sign specific contracts with AI providers to ensure internal data is secure and not used for further model training or sale.
  3. The set of acceptable uses of AI within the organization: This is a core section of any AI usage policy that defines which uses are always allowed, which are always prohibited, and which are permitted with further guidance and review. Organizations should establish guardrails for AI use and provide examples so employees understand what they mean. Companies should align their acceptable AI usage tiers with their risk appetite and core industry and function. (A sketch of these tiers as a machine-readable lookup follows this list.)
  4. The process for requesting review of an AI use case: Many new uses for GenAI will not be clear-cut; they may offer significant benefits to the business but could also pose meaningful risks. Organizations should establish a transparent process for employees to suggest new use cases for further review. These reviews often involve central intake and evaluation by one or more experts on AI risks/benefits.
  5. AI usage monitoring for risk and compliance: Once your organization approves an AI use case, that use case may require ongoing testing and review to ensure it remains appropriate over time. This section of the policy should outline how employees are expected to engage with approved AI, and how the organization will verify safe use on an ongoing basis.
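
To make section 3 concrete, here is a minimal Python sketch of the three usage tiers encoded as a machine-readable lookup. The tier names, the example use cases, and the `classify_use_case` helper are illustrative assumptions, not part of any standard; the useful design point is that unknown use cases default to review rather than silent approval, which mirrors the intake process in section 4.

```python
from enum import Enum

class UsageTier(Enum):
    ALLOWED = "always allowed"
    REVIEW = "permitted with review"
    PROHIBITED = "always prohibited"

# Hypothetical tier assignments; in practice the organization's AI or
# technology group maintains this list rather than hard-coding it.
USE_CASE_TIERS = {
    "summarize public documentation": UsageTier.ALLOWED,
    "draft employee performance reviews": UsageTier.REVIEW,
    "enter customer PII into external tools": UsageTier.PROHIBITED,
}

def classify_use_case(use_case: str) -> UsageTier:
    # Unknown use cases default to the review queue (section 4's intake
    # process), never to silent approval.
    return USE_CASE_TIERS.get(use_case, UsageTier.REVIEW)

print(classify_use_case("summarize public documentation").value)      # always allowed
print(classify_use_case("fine-tune a model on contract data").value)  # permitted with review
```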

Download AuditBoard’s employee AI usage policy template here and learn how to:

  • Establish clear AI usage guidelines for employees
  • Reduce risk from unsanctioned or unsafe AI tools
  • Accelerate the adoption of safe, innovative AI without slowing teams down
  • Easily customize for GDPR, NYC LL144, ISO 42001, and other compliance needs

Monitor and flag AI misuse

Misuse of AI can create significant risks for organizations, employees, and customers. This makes it critical for organizations to track AI adoption and ensure employees engage with the new technology safely. Organizations can monitor for AI misuse in two core ways:

  • Blocking employee access to unsanctioned vendor AI tools
  • Monitoring prompt logs for sanctioned tools to ensure employees aren’t engaging in any explicit misuse (e.g., use of AI in unethical, prohibited, or illegal contexts); a minimal log-screening sketch follows this list
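
As a minimal sketch of the second approach, assuming prompt logs are stored as JSON lines with hypothetical `user` and `prompt` fields, the snippet below screens each entry against a small keyword list. Keyword matching alone is crude and produces false positives, so in practice flagged entries would feed a human review queue rather than trigger automatic action.

```python
import json

# Hypothetical misuse indicators; keyword screens are crude, so flagged
# entries should feed a human review queue, not automatic discipline.
MISUSE_KEYWORDS = {"bypass compliance", "hide from audit", "fake invoice"}

def flag_misuse(log_lines):
    """Yield (user, prompt) pairs whose prompt text contains a misuse keyword."""
    for line in log_lines:
        entry = json.loads(line)
        text = entry.get("prompt", "").lower()
        if any(kw in text for kw in MISUSE_KEYWORDS):
            yield entry.get("user", "unknown"), entry["prompt"]

sample_log = [
    '{"user": "a.chen", "prompt": "Summarize Q3 board minutes"}',
    '{"user": "j.doe", "prompt": "How do I hide from audit this expense?"}',
]
for user, prompt in flag_misuse(sample_log):
    print(f"Flagged {user}: {prompt}")
```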

Typically, AI usage policies link to an organization’s code of conduct, and companies may monitor compliance in the same way they monitor other violations of their internal guidelines.

Drive responsible innovation with AI governance

An AI governance program, supported by the right platform, helps organizations drive responsible innovation by:

  • Tracking and reviewing new AI applications and use cases to build a complete AI model inventory
  • Meeting evolving AI standards while connecting AI models to relevant AI policies, risks, and controls to strengthen your security posture
  • Gaining visibility into evolving AI risk and data integrity by establishing and maintaining adequate AI controls

About the author

Guru Sethupathy

Guru Sethupathy is the VP of AI Governance at AuditBoard. Previously, he was the founder and CEO of FairNow (now part of AuditBoard), an AI governance platform that simplifies compliance through automation and intelligent, precise guidance, helping customers manage risk and build trust and adoption in their AI investments. Prior to founding FairNow, Guru served as an SVP at Capital One, where he led teams building AI technologies and solutions while managing risk and governance.


You may also like to read

  • How Navan built a connected risk strategy with AuditBoard AI
  • What to look for in modern IT risk management software
  • Beyond the compliance checklist: Risk-driven cyber GRC
