After watching AI coding agents fail repeatedly on the same classes of problems, we identified the root causes. Here's what kills most agent runs before they start:
C1 — Incomplete enum handling. Agent references status values that don't exist in the codebase.
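A validation pass can catch this class mechanically by diffing the plan's referenced status strings against the enum actually defined in the codebase. A minimal sketch (the `TaskStatus` values here are hypothetical, not from any real codebase):

```python
from enum import Enum

class TaskStatus(Enum):
    # Hypothetical statuses standing in for whatever the codebase defines.
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"

def unknown_statuses(referenced: set) -> set:
    """Return every status string the plan references that the enum doesn't define."""
    known = {s.value for s in TaskStatus}
    return referenced - known
```

Any non-empty result fails the plan before a line of code runs.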
C2 — Silent null paths. Optional parameters get skipped silently with no documentation.
C3 — SSE auth pattern mismatch. Browser EventSource can't send custom headers — agent uses wrong auth.
C4 — Unbounded text fields. No truncation on columns that receive full task descriptions or diffs.
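The fix is a truncation guard applied before any write to a bounded column. A minimal sketch, assuming a hypothetical column limit:

```python
MAX_DESC_LEN = 10_000  # hypothetical column limit; use the real DDL value

def truncate_for_column(text: str, limit: int = MAX_DESC_LEN) -> str:
    """Clamp text to the column limit, leaving a visible truncation marker."""
    if len(text) <= limit:
        return text
    marker = "…[truncated]"
    return text[: limit - len(marker)] + marker
```

Marking the cut explicitly also makes truncated diffs debuggable later, instead of silently losing the tail.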
C5 — Event/DB race condition. SSE event fires before the DB write completes. Frontend queries empty row.
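The fix is an ordering invariant: commit the write, then emit the event. A minimal in-memory sketch of that invariant (the dict and callback stand in for the real DB commit and SSE broadcast):

```python
def save_and_notify(db: dict, row_id: str, payload: dict, emit) -> None:
    # 1. Write and commit first (stand-in for INSERT + COMMIT)...
    db[row_id] = payload
    # 2. ...only then emit the SSE event, so a subscriber that
    #    queries on receipt always finds the row.
    emit({"id": row_id})
```

Reversing those two lines reproduces the bug: the frontend's query lands before the row exists.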
C6 — Schema/ORM mismatch. SQL type says nullable, ORM field says required.
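This one is also mechanically checkable: extract the nullable flag per column from the SQL DDL and from the ORM model, then diff them. A minimal sketch, assuming both sides have been parsed into `{column: nullable}` maps:

```python
def nullability_mismatches(sql_cols: dict, orm_cols: dict) -> list:
    """Return columns whose nullable flag differs between the
    SQL schema and the ORM model definition."""
    return [col for col in sql_cols
            if col in orm_cols and sql_cols[col] != orm_cols[col]]
```
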
C7 — Untestable expectations. Test requirements with no implementation path in the spec.
C8 — Non-idempotent inserts. Retry logic creates duplicate rows.
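The standard fix is a uniqueness key so that retries become no-ops, e.g. `INSERT ... ON CONFLICT DO NOTHING` in Postgres. A minimal in-memory sketch of the same semantics:

```python
def idempotent_insert(table: dict, key: str, row: dict) -> bool:
    """Insert only if the key is absent (stand-in for
    INSERT ... ON CONFLICT DO NOTHING). Returns True if a row was written."""
    if key in table:
        return False
    table[key] = row
    return True
```

Retry logic can then call this freely: the second attempt reports `False` instead of creating a duplicate.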
C9 — Hallucinated imports. Module doesn't exist in the codebase.
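Import hallucinations are the cheapest class to catch: resolve every module the plan references before executing anything. A minimal sketch using the standard library:

```python
import importlib.util

def module_exists(name: str) -> bool:
    """True if the module can actually be resolved in this environment."""
    return importlib.util.find_spec(name) is not None
```
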
We now run this as a validation pass after planning and before execution. Catches ~70% of failures before any code runs.
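The pass itself can be as simple as a list of check functions run over the plan, each returning `None` on pass or a failure message. A minimal sketch of that shape (the plan structure and `check_imports` helper are hypothetical illustrations, not our actual pipeline):

```python
import importlib.util

def run_validation(plan: dict, checks) -> list:
    """Run every pre-execution check against the plan; collect failure messages."""
    failures = []
    for check in checks:
        msg = check(plan)  # each check returns None on pass, a message on fail
        if msg:
            failures.append(msg)
    return failures

def check_imports(plan: dict):
    # Hypothetical check mirroring C9: every referenced module must resolve.
    missing = [m for m in plan.get("imports", [])
               if importlib.util.find_spec(m) is None]
    return f"hallucinated imports: {missing}" if missing else None
```

An empty failure list means the plan proceeds to execution; anything else goes back to the planner.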
Anyone else building pre-execution validation into their agent pipelines?