I'm a PM. My CEO kept asking: "Are these AI tools actually safe?"
I didn't have a good answer. So I went and built one.
Here's what I found — and what most startups get wrong.
The core problem
Most teams adopt AI coding tools like this:
- Dev asks "can I use Cursor?"
- CEO Googles it briefly
- CEO says yes
No one checks training defaults. No one verifies whether source code leaves the environment. No one sets up audit logs.
Then something goes wrong and there's no paper trail.
10 questions to run before you approve any AI tool
1. Is this tool training on our code?
| Tool | Default | How to opt out |
|---|---|---|
| Cursor Personal | ON | Upgrade to Business |
| ChatGPT Free/Plus | ON | Settings → Data Controls → toggle off |
| Claude API / Team | OFF | Already off |
| Notion AI | OFF | N/A (off by default per their policy) |
Action: Verify each tool. Screenshot the setting. Save it in a doc.
2. Does source code leave our environment?
Cursor and Copilot send code context to servers on every completion. That's how they work — there's no offline mode.
ChatGPT: only if your dev manually pastes code into the chat.
This matters even if training is OFF. If the vendor gets breached, your code is exposed.
3. How long is data retained?
- Cursor: ~30 days, deletion available on request
- ChatGPT: until you delete the conversation
- Claude API: not retained after the request completes
If you can't find "Data Retention" in their Privacy Policy in 5 minutes — that's a red flag.
4. If there's a breach, how fast do they notify you?
90% of CEOs never ask this. GDPR requires 72 hours — but only if you're in scope.
Search "incident notification" in their Terms of Service. No clause = no contractual obligation to tell you anything.
5. Are your devs using personal accounts for work?
Personal ChatGPT free = training ON, no audit logs, no way to revoke access when they leave.
This is the most common gap. Most teams have it and don't know it.
Fix: Mandate company plans. No personal accounts for work AI tools.
6. Do you have audit logs?
- Personal/Free plans: No logs
- Team/Business plans: Basic logs
- Enterprise plans: Full logs
No logs = no way to reconstruct what happened during an incident.
7. Can you delete your data if you leave?
Test this before committing. Create an account → use it → request deletion → see if they confirm clearly.
Some vendors confirm in days. Others are vague. If you can't get written confirmation — assume the data stays forever.
8. Do they have SOC 2?
- Cursor: Yes (SOC 2 Type II)
- ChatGPT / OpenAI: Yes
- Claude / Anthropic: Yes
- Notion: Yes
No certification = you're trusting their self-assessment, not an external audit. Not automatically a dealbreaker — but you should know.
9. Who owns the AI-generated output?
Cursor, ChatGPT, Claude: you own the output per current Terms of Service.
Unresolved edge case: if two companies generate nearly identical code from the same prompt — who owns it? No clear case law yet.
10. Does your team know any of this?
The best policy is useless if only the CEO has read it.
Fix: 30-minute team briefing. One doc listing approved tools, prohibited tools, and required account type. Add it to your onboarding checklist.
What to document after the checklist
For each approved tool, record:
- Tool name
- Plan/tier (Personal, Team, Business, Enterprise)
- Whether training is OFF
- Whether audit logs are available
- Date last reviewed
Keep this in a shared doc and review every 6 months or when a vendor changes their policy.
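If you prefer something machine-checkable over a shared doc, the same fields can live in a small registry. Here's a minimal Python sketch, assuming a hypothetical `APPROVED_TOOLS` list (the entries and dates below are illustrative placeholders, not verified vendor facts) and a 6-month review window:

```python
from datetime import date, timedelta

# Hypothetical registry mirroring the fields listed above.
# Entries and review dates are illustrative, not verified vendor facts.
APPROVED_TOOLS = [
    {"name": "Cursor", "plan": "Business", "training_off": True,
     "audit_logs": True, "last_reviewed": date(2024, 1, 15)},
    {"name": "ChatGPT", "plan": "Team", "training_off": True,
     "audit_logs": True, "last_reviewed": date(2023, 6, 1)},
]

REVIEW_INTERVAL = timedelta(days=180)  # roughly the 6-month cadence

def stale_tools(tools, today=None):
    """Names of tools whose last review is older than the interval."""
    today = today or date.today()
    return [t["name"] for t in tools
            if today - t["last_reviewed"] > REVIEW_INTERVAL]

def policy_violations(tools):
    """Names of tools failing the non-negotiable checks
    (training must be off, audit logs must exist)."""
    return [t["name"] for t in tools
            if not (t["training_off"] and t["audit_logs"])]
```

Run `stale_tools()` from a scheduled job (or a CI cron) and you get a nagging reminder instead of a forgotten doc.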
Full version with FAQ and more detail: aipolicydesk.com/blog/ceo-ai-tool-approval-checklist
How does your team handle AI tool approvals? Is there a process, or is it mostly "dev asks, CEO says yes"?