Why AI Projects Fail (And How a Checklist Prevents It)
In three years of AI consulting, I've watched 40+ projects fail. Not because the AI didn't work — but because of the 20 things nobody checked before launch.
Client data wasn't clean. The model couldn't explain its decisions to regulators. Nobody trained the end users. The integration with Salesforce broke in week two.
This is the pre-launch checklist I now require for every project. 120 points. Non-negotiable.
The 5 Categories
Category 1: Data Readiness (30 checks)
The AI is only as good as your training data. Most clients overestimate their data quality by 50-80%.
Key checks:
- Data completeness: Is there enough data for the use case? (min 1,000 examples for classification)
- Data freshness: When was it last updated? Stale data = stale model
- Data format consistency: All dates in same format? Units standardized?
- PII audit: Is personally identifiable information properly handled? (GDPR/CCPA)
- Labeling quality: If labeled data — who labeled it, and what was inter-rater reliability?
- Bias audit: Does the data represent all relevant demographics/cases?
Category 2: Technical Infrastructure (25 checks)
Where will this run? Can it handle load? Will it survive a server restart?
Key checks:
- Hosting environment: Cloud vs on-prem decision documented
- Latency requirements: What's the acceptable response time? (real-time vs batch)
- API rate limits: Have you accounted for third-party API limits?
- Fallback behavior: What happens when the AI fails or is unavailable?
- Monitoring: Is there alerting for accuracy degradation?
- Rollback plan: Can you revert to the pre-AI workflow within 30 minutes?
Category 3: Integration & Testing (20 checks)
How does this connect to existing systems? Did you test the edges?
Key checks:
- Integration mapping: Every system that touches this AI documented
- End-to-end test: Full production workflow tested with real data
- Edge cases: What happens with null inputs? Malformed data? Adversarial inputs?
- Load test: Can it handle 3x peak traffic?
- User acceptance testing: Have actual end users tested it?
Category 4: Compliance & Legal (20 checks)
This is where projects die after launch.
Key checks:
- Explainability requirement: Does your industry require the AI to explain its decisions? (Healthcare, finance, hiring)
- Audit trail: Is there a log of every AI decision for compliance review?
- Terms of service: Does your AI use third-party models? Have you reviewed their enterprise ToS?
- Liability documentation: Who is responsible when the AI is wrong?
- Insurance: Does your E&O coverage include AI-caused errors?
Category 5: Change Management (25 checks)
The tech works. Nobody uses it. This kills more projects than technical failure.
Key checks:
- Stakeholder alignment: Has executive sponsor signed off on scope and success metrics?
- End-user training: Completed (not scheduled — completed)
- Workflow documentation: Updated SOPs for the new AI-assisted process
- Resistance mapping: Who will resist this and why? Mitigation plan?
- Success metrics: Specific, measurable KPIs with a 90-day review date
The Full Checklist
The complete 120-point checklist with subcategories, explanations, and a scoring system is available as a download at wedgemethod.gumroad.com.
It includes a project health score calculator, a stakeholder sign-off template, and a go/no-go decision matrix.
The Rule
I don't let clients skip this. A project scoring under 80% on the pre-launch checklist gets delayed until it passes. Every time I've bent this rule, I've regretted it.
The checklist is the project. Everything else is execution.
Top comments (0)