AI Vendor Risk Assessment for Small Business: A Simple Framework That Doesn't Require a Legal Team

Enterprise vendor assessments run 200+ questions and take weeks. Here's a practical 10-question framework any small business owner can complete in an afternoon — and it doubles as the documentation your insurer is starting to ask for.

Published March 23, 2026 · 11 min read

Your business uses ChatGPT, Copilot, Midjourney, maybe a handful of industry-specific AI tools. Each one touches your data, your clients' data, or both. Each one has different terms of service, different data retention policies, and different security postures.

Do you know which ones train on your inputs? Which ones store your data in the US versus overseas? Which ones would notify you of a breach?

If the answer is “not exactly,” you're in good company. Most small businesses adopt AI tools the same way they adopt any software — someone on the team starts using it, it works, and it becomes part of the workflow. Nobody evaluates the vendor.

That was fine when these tools were just productivity helpers. But now that insurers are adding AI exclusions to your policies, the lack of vendor due diligence creates real financial exposure.

Why this matters for your insurance

Insurance underwriters are beginning to ask: “What AI tools does your business use, and what due diligence have you performed?” Businesses that can show a documented vendor assessment are in a stronger position at renewal. Those that can't may face Verisk CG 40 47 exclusion endorsements that remove AI-related claims from coverage entirely.

Why Enterprise Frameworks Don't Work for You

If you search for “AI vendor risk assessment,” you'll find frameworks from FS-ISAC, NIST, and Big Four firms. They're thorough. They're also designed for companies with:

  • Dedicated vendor management teams
  • Legal departments that review every SaaS contract
  • GRC (Governance, Risk, and Compliance) platforms
  • Budgets for third-party security audits

You have none of those things. What you need is a framework that covers the risks that actually matter for a small business, takes less than a day to complete, and produces documentation your insurer will accept.

Step 1: Build Your AI Tool Inventory

Before you can assess vendors, you need to know what tools your team actually uses. This is harder than it sounds — shadow AI adoption is the norm, not the exception.

Send a simple survey to your team: “List every AI tool you use for work, including free ones, browser extensions, and tools you use occasionally.” You'll be surprised what comes back.

Common AI tools small businesses miss

• ChatGPT (including free accounts)
• Microsoft Copilot (bundled in Office)
• Google Gemini (built into Workspace)
• Grammarly (AI writing assistant)
• Notion AI (document generation)
• Canva Magic Studio (image generation)
• GitHub Copilot (code generation)
• Otter.ai (meeting transcription)
• Jasper, Copy.ai (marketing content)
• Midjourney, DALL-E (image generation)
• Zoom AI Companion (meeting summaries)
• HubSpot AI, Salesforce Einstein

For each tool, record: the vendor name, what your team uses it for, what type of data goes into it, who on your team uses it, and whether it's a paid or free account. This inventory alone puts you ahead of most small businesses and is the first document in a proper AI risk management framework.
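If you'd rather keep the inventory in a file than in a spreadsheet, a few lines of Python do the job. This is a minimal sketch; the field names and example entries are illustrative, not prescriptive:

```python
# Minimal AI tool inventory as structured records.
# The two example entries below are illustrative -- substitute the
# results of your own team survey.
import csv

FIELDS = ["vendor", "tool", "used_for", "data_types", "users", "plan"]

inventory = [
    {"vendor": "OpenAI", "tool": "ChatGPT", "used_for": "drafting, research",
     "data_types": "client documents", "users": "whole team", "plan": "free"},
    {"vendor": "Grammarly", "tool": "Grammarly", "used_for": "proofreading",
     "data_types": "marketing copy", "users": "marketing", "plan": "paid"},
]

# Write the inventory to a CSV you can keep alongside your other
# governance documents and refresh after each team survey.
with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)
```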

Step 2: The 10-Question Vendor Assessment

For each AI tool in your inventory, answer these 10 questions. Skip the 200-question enterprise questionnaires — these 10 cover the risks that actually matter for small businesses and map directly to what insurers evaluate.

1. Does the vendor use your inputs to train their AI models?

Why it matters: If yes, your client data, proprietary processes, and business secrets could end up in the model and surface in other users' outputs. ChatGPT's free tier trains on inputs by default. Enterprise plans typically don't.

Green flag: “We do not use customer data for model training” with contractual commitment.

Red flag: Vague language like “we may use data to improve our services.”

2. Where is your data stored and processed?

Why it matters: Data residency affects regulatory compliance (HIPAA, state privacy laws) and can create issues with cyber insurance policies that require US-based data processing.

Green flag: Clear documentation of data center locations with SOC 2 certification.

Red flag: No information available, or data processed in jurisdictions with weak privacy laws.

3. How long does the vendor retain your data?

Why it matters: Long retention periods increase your exposure window. If the vendor gets breached two years from now and still has your data, you're affected. Shorter retention = smaller blast radius.

Green flag: Retention of 30 days or less, with the option to delete on demand.

Red flag: “We retain data as needed for service delivery” with no defined period.

4. What happens to your data if you cancel?

Why it matters: Can you export your data? Is it deleted automatically? Some vendors retain data indefinitely after account closure. This matters for client confidentiality and regulatory compliance.

Green flag: Documented data export and deletion process within 30 days of cancellation.

Red flag: No cancellation data policy or “data may be retained for legal purposes.”

5. Does the vendor have a breach notification policy?

Why it matters: If the vendor is breached and your data is exposed, you need to know immediately to trigger your own incident response and notify affected clients. Most state breach notification laws have tight timelines (often 30-60 days).

Green flag: Committed to notifying within 72 hours with details of affected data.

Red flag: No breach notification clause in terms, or “reasonable timeframe” language.

6. What security certifications does the vendor hold?

Why it matters: SOC 2 reports, ISO 27001 certification, and HIPAA compliance signal that the vendor takes security seriously. More importantly, your insurer may ask about the security posture of your AI vendors during underwriting.

Green flag: Current SOC 2 Type II report available upon request.

Red flag: No security certifications, or certifications that expired and weren't renewed.

7. Who is liable if the AI output causes harm?

Why it matters: Almost every AI vendor disclaims liability for output accuracy. That means if ChatGPT gives your employee bad legal advice that you pass to a client, you bear the liability — not OpenAI. Know this going in.

Green flag: Clear terms about output limitations with indemnification for vendor-side failures.

Red flag: Blanket “use at your own risk” with zero vendor liability.

8. Does the vendor offer an enterprise/business plan with better data protections?

Why it matters: Many AI tools have dramatically different data practices between free and paid tiers. ChatGPT Team ($25/user/mo) doesn't train on your data. The free tier does. The upgrade cost may be trivial compared to the risk reduction.

Green flag: Business tier with contractual data protection and BAA (for healthcare).

Red flag: No business tier — every customer gets the same (consumer-grade) data handling.

9. Can the vendor's AI output be explained or audited?

Why it matters: If an AI-generated recommendation leads to a dispute, can you trace how the output was produced? This matters for professional liability, E&O claims, and regulatory inquiries. “The AI told us to” is not a defense.

Green flag: Audit logs, conversation history export, and output provenance tracking.

Red flag: No logging, no export, outputs disappear after the session.

10. Does the vendor's AI supply chain introduce additional risk?

Why it matters: Many AI tools don't build their own models — they call OpenAI, Anthropic, or Google under the hood. Your data may pass through multiple vendors. A “secure” app built on an insecure foundation is still insecure.

Green flag: Transparent documentation of sub-processors and AI model providers.

Red flag: “Proprietary AI” with no disclosure of underlying models or providers.
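Before moving on, it helps to record the 10 answers for each vendor in one consistent shape. Here is one minimal sketch in Python; the question summaries and the example vendor's answers are illustrative, not real assessments:

```python
# Map each question number to "green", "red", or "unknown" based on
# the flags above. (Summaries and example answers are illustrative.)
QUESTIONS = {
    1: "Trains on your inputs?",
    2: "Data residency documented?",
    3: "Defined retention period?",
    4: "Deletion on cancellation?",
    5: "Breach notification commitment?",
    6: "Security certifications?",
    7: "Vendor accepts output liability?",
    8: "Business tier with data protections?",
    9: "Outputs auditable?",
    10: "Sub-processors disclosed?",
}

example_answers = {1: "red", 2: "green", 3: "unknown", 4: "green",
                   5: "green", 6: "green", 7: "red", 8: "green",
                   9: "green", 10: "unknown"}

# Count red flags and unanswered questions for the scoring step.
red_flags = sum(1 for v in example_answers.values() if v == "red")
unknowns = sum(1 for v in example_answers.values() if v == "unknown")
print(f"{red_flags} red flag(s), {unknowns} unanswered of {len(QUESTIONS)}")
```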

Step 3: Score and Tier Your Vendors

Once you've answered the 10 questions for each tool, assign a risk tier. Don't overthink this — you're not building a risk matrix. You're creating a simple classification:

| Risk Tier | Criteria | Action Required | Review Frequency |
|-----------|----------|-----------------|------------------|
| LOW | No client data, no sensitive info, good vendor policies | Document in inventory, basic usage guidelines | Annually |
| MEDIUM | Some business data, adequate vendor policies, paid tier | Full 10-question assessment, usage policy, data input limits | Every 6 months |
| HIGH | Client PII, financial data, health info, or decision-making outputs | Full assessment + upgrade to business tier + employee training + incident response plan | Quarterly |

Example: Grammarly used for proofreading marketing emails? Low risk. ChatGPT used to analyze client contracts? High risk. The tier determines how much governance you wrap around the tool.
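If you want tiering to be mechanical rather than case-by-case judgment, the logic in the table above can be codified. This sketch is one reasonable interpretation; the data-type labels and the red-flag threshold are assumptions to tune to your own risk appetite:

```python
# One possible codification of the tiering table above.
# Data-type labels and thresholds are assumptions, not a standard.
SENSITIVE = {"client_pii", "financial", "health", "decision_outputs"}

def risk_tier(data_types: set[str], red_flags: int) -> str:
    """Assign LOW / MEDIUM / HIGH from data sensitivity and red-flag count."""
    if data_types & SENSITIVE:
        return "HIGH"      # client PII, financial, health, or decision outputs
    if red_flags >= 2 or "business_data" in data_types:
        return "MEDIUM"    # some business data or weak vendor policies
    return "LOW"

print(risk_tier({"marketing_copy"}, red_flags=0))           # LOW
print(risk_tier({"business_data"}, red_flags=1))            # MEDIUM
print(risk_tier({"client_pii", "financial"}, red_flags=2))  # HIGH
```

The point is not the code itself; it's that the same inputs always produce the same tier, which is exactly the consistency an underwriter wants to see in your documentation.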

Step 4: Create Your Vendor Assessment Record

For each medium- and high-risk vendor, create a simple one-page record. This is the document your insurer wants to see. It doesn't need to be fancy — it needs to exist.

Vendor Assessment Record Template

Vendor: [Name]

Tool/Service: [Specific product used]

Plan tier: [Free / Business / Enterprise]

Assessment date: [Date]

Assessed by: [Name]

Data types processed: [List]

Risk tier: [Low / Medium / High]

Key findings: [2-3 bullet points from 10 questions]

Approved for use: [Yes / Yes with restrictions / No]

Restrictions: [Any data input limits or usage rules]

Next review date: [Based on tier]
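If you end up maintaining more than a handful of these records, a small script keeps the format identical across vendors. A sketch, with illustrative values filled in:

```python
# Render the one-page record from a dict so every assessment comes out
# in the same format. Field names follow the template above; the
# example values are illustrative.
RECORD_TEMPLATE = """\
Vendor: {vendor}
Tool/Service: {tool}
Plan tier: {plan}
Assessment date: {date}
Assessed by: {assessor}
Data types processed: {data_types}
Risk tier: {tier}
Key findings: {findings}
Approved for use: {approved}
Restrictions: {restrictions}
Next review date: {next_review}
"""

record = {
    "vendor": "OpenAI", "tool": "ChatGPT Team", "plan": "Business",
    "date": "2026-03-23", "assessor": "J. Smith",
    "data_types": "anonymized client summaries",
    "tier": "HIGH", "findings": "No training on inputs on Team tier",
    "approved": "Yes with restrictions",
    "restrictions": "No client names, SSNs, or raw financials",
    "next_review": "2026-06-23",
}

print(RECORD_TEMPLATE.format(**record))
```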

Keep these records in the same place as your AI acceptable use policy and employee policy template. Together, they form your AI governance documentation.

Step 5: Set Rules Based on Risk Tier

The assessment is only useful if it changes behavior. For each risk tier, establish clear rules your team can follow:

Low-Risk Tool Rules

  • OK to use for general productivity
  • No client names, financial data, or PII in inputs
  • Team member self-manages within your acceptable use policy (AUP) guidelines

Medium-Risk Tool Rules

  • Must be on a paid/business tier
  • Specific data input restrictions (e.g., “no client names, use Project Alpha instead”)
  • Outputs must be reviewed by a human before use
  • Usage logged or documented

High-Risk Tool Rules

  • Enterprise tier required
  • Must have signed DPA (Data Processing Agreement) or BAA
  • Restricted to trained employees with documented acknowledgment
  • All outputs reviewed and approved before external use
  • Included in incident response plan
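These rules can also live as data next to your inventory, which makes the review cadence mechanical. A sketch assuming the review intervals from the Step 3 table; the abbreviated rule text is illustrative:

```python
# Encode the tier rules as data so the next-review date is computed,
# not remembered. Review intervals follow the Step 3 table; rule text
# is abbreviated from the lists above.
from datetime import date, timedelta

TIER_RULES = {
    "LOW": {"review_months": 12,
            "rules": ["General productivity only", "No client data or PII"]},
    "MEDIUM": {"review_months": 6,
               "rules": ["Paid tier required", "Human review of outputs",
                         "Data input restrictions"]},
    "HIGH": {"review_months": 3,
             "rules": ["Enterprise tier + signed DPA/BAA",
                       "Trained employees only",
                       "All outputs approved before external use"]},
}

def next_review(assessed: date, tier: str) -> date:
    """Approximate the next review date from the tier's cadence."""
    return assessed + timedelta(days=30 * TIER_RULES[tier]["review_months"])

print(next_review(date(2026, 3, 23), "HIGH"))  # roughly three months out
```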

Real Example: Assessing ChatGPT for a 20-Person Accounting Firm

Vendor: OpenAI — ChatGPT

Current plan: Free (employees using personal accounts)

Data going in: Client tax summaries, financial statements, engagement letters

Risk tier: HIGH

Assessment findings:

  • Free tier trains on inputs — client financial data at risk of leaking into model
  • No BAA available (even on Team tier), problematic if any clients are healthcare-adjacent
  • Employees using personal accounts — no centralized control or audit trail
  • OpenAI disclaims all liability for output accuracy

Decision: Upgrade to ChatGPT Team ($25/user/mo). Implement data input policy: no client names, no SSNs, no raw financial data. Use anonymized summaries only. All outputs reviewed before client delivery.

Annual cost: $6,000 (20 users × $25 × 12 months)

Cost of one data breach: $120,000–$500,000+ (legal, notification, regulatory fines, lost clients)
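Part of that data input policy can be automated. The sketch below flags text that looks like an SSN or a dollar figure before anyone pastes it into an AI tool; the patterns are assumptions and a starting point, not a complete data-loss-prevention solution:

```python
# Pre-submission check for the input policy above: flag likely SSNs
# and dollar figures. The patterns are assumptions -- extend them for
# client names, account numbers, and other firm-specific data.
import re

BLOCKED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "dollar amount": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any blocked patterns found in the text."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]

hits = check_prompt("Client owes $12,400; SSN 123-45-6789.")
print(hits)  # ['SSN', 'dollar amount']
```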

How This Connects to Insurance

Here's the part that makes vendor assessment worth the effort beyond the obvious risk reduction:

1. Underwriting questions are coming

Carriers are adding AI-specific questions to renewal applications. Having completed vendor assessments gives you ready answers instead of scrambling at renewal time.

2. Exclusion endorsements target undocumented AI usage

The Verisk CG 40 47 endorsement can exclude all AI-related claims. Demonstrating vendor due diligence strengthens your case for keeping full coverage.

3. Broker conversations become easier

Instead of your broker guessing your AI exposure, you hand them a vendor inventory with risk tiers. They can advocate for better terms because they have documentation to show the carrier.

Get your full AI governance documentation in 15 minutes

Don't want to build these documents from scratch? Our AI Governance Kit generates a complete vendor registry, acceptable use policy, employee acknowledgment, incident response plan, and insurance renewal summary — all customized to your business.

Get the AI Governance Kit — $29

Common Mistakes to Avoid

Only assessing the tools you know about

Shadow AI is the biggest risk. Survey your team. Check browser extensions. Look at expense reports for AI subscriptions.

Skipping free tools because “they're just free”

Free AI tools often have the weakest data protections. Your insurer doesn't care if the tool was free — they care about the risk it creates.

Assessing once and filing it away

AI vendors update their terms, models, and data practices frequently. Set calendar reminders based on your risk tiers (annual, semi-annual, quarterly).

Using an enterprise framework and giving up halfway through

A completed simple assessment beats an abandoned comprehensive one. The 10 questions above cover what matters. Do those well.

Frequently Asked Questions

What is an AI vendor risk assessment?

An AI vendor risk assessment evaluates the AI tools your business uses against key criteria: data handling, security, output liability, and compliance. For small businesses, a simplified 10-question framework replaces the 200+ question enterprise versions while producing the documentation insurers look for.

Why do small businesses need to assess AI vendors?

Insurers are adding AI exclusion endorsements to commercial policies. Businesses that demonstrate vendor due diligence retain full coverage. Beyond insurance, vendor assessment prevents data breaches, output liability, and regulatory penalties.

How often should I reassess my AI vendors?

Annually for low-risk tools, every six months for medium-risk, and quarterly for high-risk vendors handling sensitive data. Always reassess before your insurance renewal.

Do I need a vendor risk assessment for free AI tools?

Yes. Free tools often carry more risk because they have weaker data protections and may use your inputs for training. Your insurer evaluates all AI tools equally, regardless of cost.

The Bottom Line

AI vendor risk assessment sounds like something only enterprises need to worry about. It's not. If your business uses AI tools — and it almost certainly does — you need to know what those tools do with your data, who's liable when something goes wrong, and whether your insurance still covers you.

The 10-question framework above takes an afternoon to complete for your entire tool stack. The vendor assessment records it produces become part of your AI compliance documentation — which your insurer, your broker, and your clients will increasingly expect you to have.

Start with the tools your team uses most. Complete the assessment. Document the results. You'll reduce your actual risk, improve your insurance position, and — most importantly — actually know what your AI vendors are doing with your data.

About CoverMyAI: We help small businesses protect their insurance coverage in the age of AI. Our tools map your AI usage to real underwriting criteria so you can govern AI with confidence — not guesswork. More articles →