How to Build an AI Policy for Your Team (Before Someone Pastes Client Data into ChatGPT)
Your team is already using AI. The question is whether they're using it safely. Here's how to write a one-page AI policy that protects your business today.
One of our clients found out the hard way. An employee — well-intentioned, working late, trying to get through a backlog — pasted an entire client financial statement into the free version of ChatGPT. Not to steal anything. Not to be careless. Just to get a summary faster.
That data went to OpenAI’s servers on a consumer account with no enterprise data agreement. No audit trail. No way to delete it.
Nobody was fired. But cleaning it up took three uncomfortable phone calls and a revised client agreement.
This is what happens when you don’t have a policy. Not because people are reckless — because they’re resourceful.
Your Team Is Already Using AI
Let’s skip the debate about whether your team should use AI. They already are.
According to IBM’s 2025 research, 79% of Canadian office workers use AI tools at work, but only one in four uses an enterprise solution provided by their employer. The rest are on personal ChatGPT accounts, free Claude tiers, browser extensions, and whatever showed up in their LinkedIn feed last Tuesday.
This is shadow AI. And the problem isn’t the AI itself — it’s that nobody told your team what’s safe and what isn’t. So they guessed.
We’ve seen teams where four different people use four different AI tools. One person drafts client emails in ChatGPT. Another uses Claude. A third uses Gemini. The fourth uses some AI Chrome extension nobody’s heard of. The result? Inconsistent tone, inconsistent quality, and zero visibility into what data went where.
A policy fixes this. And it doesn’t have to be painful.
A Good AI Policy Fits on One Page
Here’s the thing most companies get wrong: they try to write an AI policy that covers every possible scenario. It ends up being 12 pages of legal language that nobody reads and everyone ignores.
Don’t do that.
A good AI policy fits on one page. It covers four things. If your team can read it in five minutes and know exactly what to do, you’ve nailed it.
The Four Things Every AI Policy Should Cover
1. Approved Tools
Be specific about which AI tools are OK for work use.
Yes: Company ChatGPT Team or Enterprise account, Microsoft 365 Copilot, company-provisioned Claude Team workspace.
No: Personal free ChatGPT accounts, personal Claude accounts, random AI browser extensions, any tool where the company doesn’t have a business agreement.
The distinction matters. Business and enterprise tiers from OpenAI, Anthropic, and Microsoft include data processing agreements, admin controls, and contractual commitments that your data won’t be used for training. Free personal accounts? None of that.
Name the tools. “Use approved AI tools” is vague. “Use our company ChatGPT Team workspace at [link]” is actionable.
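One optional extra for teams with a developer on hand: keep the approved list somewhere machine-readable, so the policy, the onboarding doc, and any browser allowlist all pull from the same source. Here is a minimal sketch in Python; the tool IDs and structure are hypothetical, and the links stay as [link] placeholders until you add your own workspace URLs.

```python
# Hypothetical machine-readable version of the approved-tools list.
# Tool IDs are illustrative; replace "[link]" with your real workspace URLs.
APPROVED_TOOLS = {
    "chatgpt-team": {"name": "Company ChatGPT Team workspace", "link": "[link]"},
    "m365-copilot": {"name": "Microsoft 365 Copilot", "link": "[link]"},
    "claude-team": {"name": "Company-provisioned Claude Team workspace", "link": "[link]"},
}

def is_approved(tool_id: str) -> bool:
    """Default deny: a tool is approved only if it's on the list."""
    return tool_id in APPROVED_TOOLS

# A personal free account isn't on the list, so it fails the check.
assert is_approved("claude-team")
assert not is_approved("personal-chatgpt-free")
```

The code is trivial on purpose. The design choice that matters is default deny: if a tool isn't on the list, it isn't approved until someone adds it.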
2. Data Rules
This is the most important section. What information can your team actually put into AI tools?
Use a simple green/yellow/red system:
Green — go ahead. Internal brainstorming, marketing draft ideas, writing outlines, general research questions, publicly available information. “Help me outline a blog post about supply chain trends” is fine.
Yellow — proceed with caution. Anonymized client scenarios (no names, no identifying details), industry research summaries, internal process documentation. Strip out anything that identifies a specific person or company before you paste it in.
Red — never. Client names, financial data, contracts, employee records, health information, passwords, proprietary code, anything covered by an NDA. If you wouldn’t post it on LinkedIn, don’t paste it into an AI tool.
Print this on a card. Stick it next to people’s monitors. Make it impossible to forget.
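The card covers the human side. If someone on your team is technical, a lightweight pre-paste check can catch the most obvious red-category patterns before they leave the building. Below is a minimal sketch in Python; the regexes are illustrative (emails, dollar figures, SIN-like numbers), not a complete or reliable filter, and no pattern list replaces the LinkedIn test.

```python
import re

# Illustrative red-category patterns only; tune and extend for your own data.
# A regex screen catches obvious slips, not everything on the red list.
RED_PATTERNS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "dollar amount": r"\$\s?\d[\d,]*(?:\.\d{2})?",
    "SIN-like number": r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b",
    "card-like number": r"\b(?:\d[ -]?){13,16}\b",
}

def screen_for_red_data(text: str) -> list[str]:
    """Return the names of any red-category patterns found in text."""
    return [name for name, pattern in RED_PATTERNS.items()
            if re.search(pattern, text)]

# Example: this draft would be flagged before it reaches an AI tool.
draft = "Summarize: Acme Corp owes $1,250,000 as of Q3. Contact bob@acme.com."
hits = screen_for_red_data(draft)
if hits:
    print("Stop: possible red-category data ->", ", ".join(hits))
else:
    print("No obvious red flags. The LinkedIn test still applies.")
```

Notice what the script misses: "Acme Corp" sails right through. Names, contract language, and NDA-covered material need human judgment, which is why the card next to the monitor matters more than the code.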
3. Quality Review
This one’s simple: all AI output must be reviewed by a human before it goes to a client, into a contract, or onto a public channel. No exceptions.
AI is fast, but it’s not always right. It hallucinates. It makes confident-sounding claims that are completely wrong. It can produce content that’s subtly off-brand or factually outdated.
Your team should treat AI output like a first draft from an intern who’s very enthusiastic but occasionally makes things up. Review it. Edit it. Own it. If your name goes on it, you’re responsible for it — not the AI.
This isn’t about slowing people down. It’s about making sure the speed boost from AI doesn’t come at the cost of your reputation. For more on this, see our post on AI slop vs. thoughtful AI.
4. What to Do When You’re Not Sure
Every policy needs an escape valve. Name a specific person — not a department, not a committee, a person — who your team can ask when they hit a grey area.
“Hey, can I use AI to summarize these vendor proposals?” That’s a reasonable question, and someone should be able to answer it in five minutes, not five days.
Make it a Slack message, not a formal request. The easier it is to ask, the more likely people will ask instead of guessing. And guessing is how data ends up in places it shouldn’t be.
Roll It Out Like You’re Enabling, Not Policing
The biggest mistake companies make with AI policies is framing them as restrictions. “Here’s what you can’t do” puts people on the defensive and drives AI use underground — which is the opposite of what you want.
Instead, frame it as: “Here’s how to use AI safely so you can keep using it.”
Run a 30-minute session. Walk through the one-pager. Show examples. Answer questions. Make it clear that the goal is to protect everyone — the company, the clients, and the employees themselves — so that AI stays a tool they can rely on.
People don’t resist reasonable guardrails. They resist being treated like they can’t be trusted. Lead with trust, back it up with clarity.
If you want more on this approach, see our full breakdown in AI Governance 101 and our deeper look at the data leakage problem specifically.
Review It Quarterly
AI changes fast. Six months ago, most people hadn’t heard of half the tools your team is using today. New tools launch constantly. Pricing tiers change. Features that were enterprise-only become free (and vice versa).
Set a calendar reminder to review your AI policy every quarter. Fifteen minutes. Update the approved tools list. Check if your data classification still makes sense. Ask your team what questions have come up.
A policy that’s six months out of date is almost as bad as no policy at all.
For a finance-focused lens on keeping AI governance current, check out AI Governance for CFOs.
You Can Write This Policy Today
That’s not an exaggeration. Open a document. Write down your approved tools. Add the green/yellow/red data classification. Add the quality review rule. Name the person to ask. You’re done.
One page. Five minutes to read. A massive reduction in risk.
The companies that get AI governance right aren’t the ones with the longest policy documents. They’re the ones that made it simple enough for everyone to actually follow.
Need help setting up AI governance for your team? We help businesses implement AI with proper guardrails built in — so your team gets the productivity boost without the risk. Let’s talk.