AI GOVERNANCE

How to Write an AI Acceptable Use Policy When You Don't Have a Legal Team

Every guide to writing an AI acceptable use policy tells you to “involve your stakeholders: IT, legal, HR, and business units.” If you're a small business owner, you are all of those people. This guide is for you.

Published March 22, 2026 · 9 min read

You know you need an AI policy. Your team is using ChatGPT, Copilot, and a half-dozen other AI tools daily. Maybe a client asked about your AI practices. Maybe your insurance broker mentioned something about new AI exclusions. Maybe you just have a nagging feeling that someone is going to paste client data into ChatGPT and you'll have a problem.

So you Googled “how to write an AI acceptable use policy.” And every result assumes you have departments. IT will handle tool approval. Legal will review the language. HR will distribute and track acknowledgments. Compliance will monitor adherence.

You don't have departments. You have a team of 5–50 people and you need something that works by Friday.

Here's how to write one.

Before You Write: The 30-Minute Prep

Don't start writing the policy yet. Spend 30 minutes on these three things first:

Prep 1: List every AI tool your team uses (10 min)

Walk around (or Slack your team) and ask: “What AI tools do you use for work?” You'll be surprised. Common ones people forget:

  • Grammarly (yes, it's AI)
  • Otter.ai or Fireflies (meeting transcription)
  • Canva's AI features
  • Google/Microsoft AI features built into email and docs
  • AI-powered CRM features (HubSpot, Salesforce Einstein)
  • Personal ChatGPT accounts used for work

Prep 2: Identify your sensitive data categories (10 min)

What data does your business handle that should never go into an AI tool? Think: client names and contact info, financial data, health records, legal documents, trade secrets, employee personal information, passwords/credentials. Write these down — they become your “never input” list.

Prep 3: Check your insurance policy (10 min)

Call your broker or check your policy documents. Ask: “Does my policy include any AI-related endorsements — specifically CG 40 47, CG 40 48, or CG 35 08?” The answer determines how urgent this policy is: if yes, your coverage already has AI carve-outs, and a documented policy becomes essential at renewal. Learn about AI insurance exclusions →

The 7 Sections Your AI Policy Needs

Here's what to write, section by section. For each one, I'll show you the enterprise version (what other guides tell you) vs. the small business version (what actually works for you).

Section 1: Purpose & Scope

ENTERPRISE VERSION

“This policy governs the acceptable use of artificial intelligence technologies across all business units, subsidiaries, and third-party contractors...”

YOUR VERSION

“This policy covers how everyone at [Company] uses AI tools for work. It applies to all employees, contractors, and freelancers. If you use AI for anything work-related, this policy applies to you.”

Keep it short. Name your company. Make it clear who it applies to.

Section 2: Approved & Prohibited Tools

ENTERPRISE VERSION

“All AI tools must be vetted by the IT Security team and approved by the Chief Information Security Officer before deployment...”

YOUR VERSION

“Approved tools: [list them]. Any new AI tool must be approved by [your name/role] before use. Do not sign up for or use any AI tool for work without approval.”

List your approved tools by name. Be explicit about the approval process (even if it's “ask me in Slack”).

Section 3: Data Input Rules

This is the most important section. It prevents the “someone pasted client data into ChatGPT” scenario.

YOUR VERSION

Never input into any AI tool:
• Client/customer names, emails, or contact information
• Financial data (account numbers, revenue figures, SSNs)
• Health information (any patient/medical data)
• Legal documents or privileged communications
• Passwords, API keys, or access credentials
• Internal financial projections or strategic plans

OK to input:
• Generic writing prompts without client details
• Public information and general research queries
• Code that doesn't contain credentials or proprietary logic

Customize this list for your business. A healthcare practice adds PHI. A law firm adds case details. A marketing agency adds client brand assets.

Section 4: Human Review Requirements

YOUR VERSION

“All AI-generated content must be reviewed by a human before it is sent to a client, published, or used in any decision that affects a customer. AI output is a draft, never a final product. The employee who submits or publishes the output is responsible for its accuracy.”

This is your liability firewall. It establishes that AI is a tool, not a decision-maker. Underwriters love this language.

Section 5: Disclosure Rules

YOUR VERSION

“When a client or customer asks whether AI was used in creating their deliverable, answer honestly. If AI substantially contributed to a work product, disclose this to the client before delivery. When in doubt, disclose.”

Adjust based on your industry. Law firms may need stronger disclosure. Marketing agencies may need client-by-client policies.

Section 6: Consequences

YOUR VERSION

“Violation of this policy may result in disciplinary action, up to and including termination. If a violation results in a data breach or client harm, the incident will be documented and reported per our incident response procedures.”

You need teeth. A policy without consequences is a suggestion. Underwriters know the difference.

Section 7: Review Schedule

YOUR VERSION

“This policy will be reviewed and updated every 6 months, or whenever a new AI tool is adopted, a significant AI incident occurs, or relevant regulations change. Next review date: [date].”

AI is evolving fast. A policy written today may be outdated in 6 months. Building in a review schedule shows underwriters you treat this as a living document.

Skip the Writing. Generate It in 15 Minutes.

CoverMyAI generates a complete AI acceptable use policy plus four supporting documents — customized to your industry, tools, and team. $29 one-time. No legal fees.

Includes: AI acceptable use policy + tool registry + employee acknowledgments + incident response plan + insurance renewal summary.

The Policy Is Not Enough

Here's what most “how to write an AI policy” guides don't tell you: the policy alone doesn't protect you.

An acceptable use policy is one of five documents that insurers and regulators expect:

1. AI Acceptable Use Policy — what you're writing now
2. AI Tool Registry — inventory of every AI tool and its risk level
3. Employee Acknowledgment Forms — proof your team read the policy
4. Incident Response Plan — what to do when AI causes a problem
5. Insurance Renewal Summary — what to hand your broker at renewal

Writing an AI acceptable use policy is a great first step. But without the other four supporting documents, your insurer may still add AI exclusion endorsements to your policy at renewal.

Common Mistakes to Avoid

Mistake 1: Making it too vague

“Use AI responsibly” is not a policy. Name specific tools, specific data categories, specific consequences. Vague policies are unenforceable — and insurers know it.

Mistake 2: Banning AI entirely

If you ban AI, employees will use it anyway — they'll just hide it. A ban creates more risk than a well-governed policy. Acknowledge reality and set guardrails.

Mistake 3: Writing it and forgetting it

AI tools change monthly. New capabilities, new risks. If your policy doesn't have a review schedule, it's stale before the ink dries. Set a 6-month review cycle.

Mistake 4: Not getting acknowledgments

A policy that employees haven't signed is legally weaker than one they have. Get every employee to sign an acknowledgment form. Digital signatures count.

Mistake 5: Ignoring the insurance connection

Most AI policy guides treat governance and insurance as separate topics. They're not. Your AI policy directly affects whether your insurance covers AI claims. Write your policy with your insurer in mind.

The DIY Timeline

  • Prep work (tool inventory, data audit, insurance check): 30 min
  • Writing the policy (7 sections): 2–4 hours
  • Creating supporting documents (registry, forms, etc.): 4–6 hours
  • Legal review (optional but recommended): $300–$600
  • Total DIY: 8–12 hours + $0–$600
  • CoverMyAI (all 5 documents): 15 min + $29

Your Policy + 4 More Documents. 15 Minutes. $29.

Answer a guided questionnaire about your business. CoverMyAI generates a complete AI governance kit customized to your industry, tools, and team — including the acceptable use policy you came here to write.

About CoverMyAI: We help small businesses protect their insurance coverage in the age of AI. Our tools map your AI usage to real underwriting criteria so you can govern AI with confidence — not guesswork. More articles →