
February 5, 2026 16 min read

AI usage policy: Defining acceptable AI use by employees

Guru Sethupathy

The majority of technology workers use AI at work, according to a recent Gallup Workforce survey of more than 22,000 individuals. Workers use AI primarily to save time, increase productivity, and refocus on the most important and enjoyable work tasks.

While employee-originated AI use cases can be high-value, there are real risks to adopting new AI capabilities in the workplace. These risks can be performance-related, reputational, or legal/regulatory. Protecting your business from these risks requires stringent policies and procedures governing employees' use of AI.


Unvetted AI adoption creates risks for organizations

Not all AI is created equal. AI that isn’t properly vetted or sanctioned can pose serious risks to businesses. Examples of unsafe or risky AI usage include:

  • Data Privacy: An employee enters confidential company information into a GenAI tool, unaware that the foundation model provider now has access to it to train its models.
  • Poor Performance: A legal associate asks GenAI to cite prior court cases and uses them in a brief without validating their accuracy. The associate didn’t realize that GenAI had fabricated the citations, a failure mode known as hallucination.
  • Bias Issues: A manager uses AI to draft a performance review for their associate. The AI tool generates results that contain bias, exposing the company to legal risk if an employee sues for discrimination.
  • Regulatory Risk: An employee uses GenAI in a way that is prohibited by key laws like the EU AI Act, and opens the company to significant fines (which, for the EU AI Act, can include up to 7% of a company’s annual revenue).
  • Security Breach: A tech team adopts an AI agent to automate a series of processes, unaware that the agent has created backdoor access to their internal systems.

What is an acceptable AI use policy?

An AI usage policy is a document that helps employees understand AI use cases permitted within the organization. It defines which uses are acceptable and outlines the process for an employee to have their specific use cases reviewed for approval. There are multiple benefits to creating an AI usage policy:

  • Clarity: AI use policies ensure employees have a clear understanding of where and how they can adopt their own GenAI tools in the workplace and for business purposes.
  • Reduced risk: When an AI use policy outlines which tools can be adopted (and under what conditions), employees are less likely to violate terms or use AI in unsafe ways. Employees have clear guardrails for acceptable use.
  • Increased AI innovation: Increased clarity about acceptable and unacceptable uses enables employees to feel confident about safe experimentation and to lean into adoption they might otherwise have avoided out of concern for corporate policy.

The primary intent of an AI usage policy is to create transparency for the entire organization. When written well, AI usage policies help employees feel comfortable adopting AI safely while reducing uncertainty and risk.

A policy template for acceptable employee AI use

A comprehensive AI usage policy contains the following five sections:

  1. The organization’s risk appetite and AI usage principles: The company should articulate its AI usage principles to serve as a north star for employees. Examples might include a focus on data privacy, transparency, fairness, and ethical considerations.
  2. The AI technologies employees can leverage: Organizations should outline which tools employees may use for work-related activities. An organization’s AI or technology group is responsible for reviewing which technology and capabilities get approved for internal use. In many cases, an organization will sign specific contracts with AI providers to ensure internal data is secure and not used for further model training or sale.
  3. The set of acceptable uses of AI within the organization: This is a core section of any AI usage policy that defines which uses are always allowed, which are always prohibited, and which are permitted with further guidance and review. Organizations should establish guardrails for AI use and provide examples to ensure employees understand what they mean. Companies should align their acceptable AI usage principles with their risk appetite and core industry/function.
  4. The process for requesting review of an AI use case: Many new uses for GenAI will not be clear-cut; they may offer significant benefits to the business but could also pose meaningful risks. Organizations should establish a transparent process for employees to suggest new use cases for further review. These reviews often involve central intake and evaluation by one or more experts on AI risks/benefits.
  5. AI usage monitoring for risk and compliance: Once your organization approves AI usage, it may require ongoing testing and review of outcomes to ensure it remains appropriate over time. The AI usage and risk monitoring section of an organization’s AI usage policy should outline how organizations expect employees to engage with and ensure safe use of the AI on an ongoing basis.
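The five sections above can also be captured as a machine-readable skeleton, which some teams find useful for keeping the policy and its tooling in sync. The sketch below is purely illustrative: the class and field names are assumptions, not a standard schema or an AuditBoard API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the five policy sections as a data model.
# All names here are illustrative assumptions, not a standard schema.
@dataclass
class AIUsagePolicy:
    risk_appetite: str                                         # Section 1
    principles: list[str] = field(default_factory=list)        # Section 1
    approved_tools: list[str] = field(default_factory=list)    # Section 2
    acceptable_uses: list[str] = field(default_factory=list)   # Section 3
    prohibited_uses: list[str] = field(default_factory=list)   # Section 3
    review_required_uses: list[str] = field(default_factory=list)
    review_process: str = ""                                   # Section 4
    monitoring: str = ""                                       # Section 5

policy = AIUsagePolicy(
    risk_appetite="innovation-first",
    principles=["data privacy", "transparency", "fairness"],
    approved_tools=["approved internal assistant"],
)
print(policy.risk_appetite)  # innovation-first
```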

Section 1: Risk appetite and AI usage principles

In this section, the organization should clarify its stance on AI in the workplace.

Example text for two different companies with different risk tolerances might include:

  • “As an innovation-first organization, we thrive on the adoption of new technologies that benefit our customers, our employees, and the business as a whole. We strongly encourage our employees to test new AI technologies and surface novel use cases for the benefit of the organization.”
  • “As a company, we believe in the power of technology to drive positive change. We also put customer safety at the forefront and are committed to avoiding unnecessary risks with unproven technology. This means that we will thoroughly review all potential AI use cases and vet them before adoption.”

The company should also articulate its AI usage principles, which will serve as a north star for employees. Examples might include a focus on data privacy, transparency, fairness, and ethical considerations.

Section 2: AI technologies employees can use

In this section of the AI usage policy, the organization should outline which tools employees may use for work-related activities.

An organization’s AI or technology group is responsible for reviewing which technology and capabilities employees can use. Their review should include an assessment of:

  • Foundation models like ChatGPT, Gemini, and Claude
  • Internal instances of AI created by fine-tuning open source models
  • Specific vendor AI embedded in an organization’s technology stack that can perform tasks like summarizing meeting notes, reviewing emails, and synthesizing messages on internal platforms (e.g., Zoom, Slack, Gmail, etc.)

In many cases, an organization will sign specific contracts with AI providers to ensure internal data is secure and not used for further model training or sale. Organizations are likely to have preferred providers with contracts in place.

Prohibiting AI use for business on personal devices: Organizations typically prohibit employees from using AI for business purposes on personal devices such as laptops and cell phones. If your organization has this restriction, the policy should state it explicitly.

Section 3: Acceptable AI uses

This is a core section of any AI usage policy. Section 3 defines which uses are always allowed, which are always prohibited, and which may be allowed with further guidance and review.

In this section, organizations should establish guardrails for AI use and provide examples to ensure employees understand what they entail. Below are example guardrails to consider:

Acceptable AI uses:

  • Doesn’t involve the use of sensitive company or personal data
  • Has a limited impact (e.g., may synthesize meeting notes rather than making key business decisions)
  • Doesn’t incorporate demographic data or proxies for users, which can create a risk of bias

Prohibited AI uses:

  • The model is trained on or uses protected data without consent
  • Usage is likely to elevate the risk of data leakage (e.g., entering sensitive data into a public interface)
  • Usage tied to an unacceptable risk category that is explicitly prohibited by key regulations like the EU AI Act, or that is considered inappropriate under internal ethical guidelines

AI uses that may be acceptable but require further review:

  • AI that could influence material decisions for employees or customers (e.g., hiring)
  • AI uses that could be externally facing to customers
  • AI that includes any use of the company's confidential or personal data

Organizations should align their acceptable AI usage principles with their risk appetite and core industry/function. Common use cases for an organization — e.g., using AI to summarize meeting notes — should be called out directly.
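As a purely illustrative exercise, the three buckets above can be expressed as a simple triage rule. The flag names below are hypothetical assumptions about how a use case might be described, not part of any real policy tool:

```python
# Hypothetical sketch: triage a proposed AI use case into the three buckets
# described above (acceptable / prohibited / needs further review).
# The boolean flags are illustrative assumptions.
def triage_use_case(uses_sensitive_data: bool,
                    influences_material_decisions: bool,
                    customer_facing: bool,
                    prohibited_by_regulation: bool) -> str:
    if prohibited_by_regulation:
        return "prohibited"            # always disallowed, e.g., EU AI Act
    if uses_sensitive_data or influences_material_decisions or customer_facing:
        return "needs_review"          # route through the Section 4 process
    return "acceptable"                # low-impact, no sensitive data

# e.g., summarizing internal meeting notes with no sensitive data:
print(triage_use_case(False, False, False, False))  # acceptable
```

The ordering matters: a regulatory prohibition overrides everything else, and any single risk flag is enough to send a use case to review rather than auto-approve it.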

Section 4: Process for requesting use case review

While there will be some clear approved or prohibited use cases that employees will encounter, many new uses for GenAI will not be clear-cut: they may offer significant benefit to the business, but could also pose meaningful risks.

Organizations should establish a clear process for employees to suggest new use cases for further review. These reviews often involve central intake and evaluation by one or more experts on AI risks and benefits, typically through a committee that includes:

  • Business leader(s): provide business context
  • Data privacy officers or specialists: evaluate the risk of data leakage
  • Legal, regulatory, or compliance leaders: ensure scope and compliance with key regulations
  • Technology or cyber experts: evaluate cybersecurity risks and vulnerabilities

An organization’s AI usage policy will include details about how an employee can submit their use case, an initial intake form with relevant details, and a view of next steps in the review process and expectations.
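An intake form like the one described above could be represented with a handful of fields. This is a sketch only; the field names are hypothetical and should be tailored to your own review process:

```python
from dataclasses import dataclass

# Hypothetical intake record for a proposed AI use case; the field names
# are illustrative assumptions, not taken from any specific platform.
@dataclass
class UseCaseIntake:
    submitter: str
    business_purpose: str
    tool_name: str
    data_involved: str            # e.g., "none", "internal", "customer PII"
    customer_facing: bool
    status: str = "pending review"

request = UseCaseIntake(
    submitter="jdoe",
    business_purpose="Draft first-pass responses to support tickets",
    tool_name="approved internal assistant",
    data_involved="customer PII",
    customer_facing=True,
)
print(request.status)  # pending review
```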

In our experience, once an organization sets up this review process, it can receive tens to hundreds of potential AI use cases from employees over the following weeks and months.

The process of collecting relevant information, triaging risk and compliance views, and tracking approved and prohibited AI use can become overwhelming if it isn’t streamlined. To help keep track of AI use, many organizations turn to AI governance platforms to stay on top of varied use cases and risks.

Section 5: AI usage and risk monitoring

Reviewing and approving an AI use case is usually not the last component of safe AI adoption — especially for AI that has high value but higher risk. Often, AI use comes with conditions that may involve ongoing monitoring or review of outcomes.

For example, a high-risk AI use case in HR might involve an AI tool that helps recruiters rank-order potential candidates for a role. This can save recruiters and hiring managers significant time and help them achieve better outcomes — but it is critical to review a technology like this for performance and bias on an ongoing basis. Once AI usage has been approved, the organization may require ongoing testing and review of outcomes to ensure it remains appropriate over time.

The AI usage and risk monitoring section of an organization’s AI usage policy should outline how employees should engage with and ensure safe use of the AI on an ongoing basis.


Download AuditBoard’s employee AI usage policy template here and learn how to:

  • Establish clear AI usage guidelines for employees
  • Reduce risk from unsanctioned or unsafe AI tools
  • Accelerate the adoption of safe, innovative AI without slowing teams down
  • Easily customize for GDPR, NYC LL144, ISO 42001, and other compliance needs

Monitor and flag AI misuse

Misuse of AI can create significant risks for organizations, employees, and customers. This makes it critical for organizations to track AI adoption and ensure employees engage with the new technology safely. Organizations can monitor for AI misuse in two core ways:

  • Blocking employee access to unsanctioned vendor AI tools
  • Monitoring prompt logs for sanctioned tools to ensure employees aren’t engaging in any explicit misuse (e.g., use of AI in unethical, prohibited, or illegal contexts)
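A minimal sketch of the second approach, scanning sanctioned-tool prompt logs for likely sensitive data, might look like the following. The regex patterns are illustrative assumptions, not a complete detection scheme; production deployments typically rely on dedicated DLP tooling:

```python
import re

# Illustrative patterns only; real deployments use dedicated DLP tooling.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\bconfidential\b"),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt log entry."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

print(flag_prompt("Summarize this CONFIDENTIAL roadmap"))  # ['internal_marker']
```

Flagged entries would then be routed to the same review committee described in Section 4 rather than triggering automatic action, since pattern matches are noisy.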

Typically, AI usage policies link to an organization’s code of conduct, and companies may monitor compliance in the same way they monitor other violations of their internal guidelines.

How an AI governance platform can help

Reviewing employee adoption of AI across an organization can become a substantial effort.

Employees — focused on innovation, productivity, and elevating business outcomes — can generate varied and creative uses for AI. But evaluation and monitoring of high-risk, high-value use cases can quickly become taxing.

AI governance platforms like AuditBoard can reduce the burden of managing AI risk and staying compliant by:

  • Tracking AI adoption across the organization
  • Centralizing AI risk assessments and the review process
  • Flagging risks and compliance concerns for proposed use cases
  • Operationalizing AI governance approval workflows
  • Automating and streamlining the monitoring of existing AI tools

About the authors

Guru Sethupathy

Guru Sethupathy is the VP of AI Governance at AuditBoard. Previously, he was the founder and CEO of FairNow (now part of AuditBoard), a governance platform that simplifies AI governance through automation and intelligent and precise compliance guidance, helping customers manage risks and build trust and adoption in their AI investments. Prior to founding FairNow, Guru served as an SVP at Capital One, where he led teams in building AI technologies and solutions while managing risk and governance.

