AI checks the code. Checks the tests. Checks the code again. Everything looks fine, but the bug persists.
Sound familiar?
The problem isn't that AI is bad at analysis. It's that the cause lies entirely outside AI's context.
When "Check Harder" Doesn't Work
Without additional context, AI can enter a loop:
- Check the code → looks fine
- Check the tests → looks fine
- Check the code again → still fine
- Check the tests again → still fine
- Stuck
The problem exists. AI can't find it. Not because AI is bad at analysis, but because the cause is outside its context.
## What AI Can't See
AI has a field of vision. It sees what's in context: code, requirements, conversation history.
What it doesn't see: everything outside that context.
AI's visible context:

```
┌───────────────┐
│ AI's Context  │ ← AI searches here
│               │
│    (code)     │
│    (tests)    │
│    (logs)     │
└───────────────┘
```
The blind spot remains dark.
With human guidance:

```
┌───────────────┐
│ AI's Context  │
│               │
│    (code)     │
│    (tests)    │
│    (logs)     │
└───────┬───────┘
        │
        ▼ "Also consider X"
┌───────────────┐
│ Illuminated   │ ← Now visible
│  blind spot   │
└───────────────┘
```
You're not telling AI how to analyze. You're showing it where to look.
## Case Study: The OHLC Bar Test Mystery
A real example from financial data processing.
**Situation:**
- Building OHLC (Open-High-Low-Close) bar aggregation
- 1-minute bars: tests pass ✓
- 5-minute bars: tests fail intermittently ✗
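For readers unfamiliar with the domain, N-minute OHLC bucketing boils down to flooring each tick's timestamp to a window boundary and folding prices into that window. Here is a minimal sketch of that shape; the `Tick`, `Bar`, and `OhlcAggregator` names are hypothetical, not the project's actual code, and it assumes ticks arrive in chronological order:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Tick(DateTime Timestamp, decimal Price);

public record Bar(DateTime WindowStart, decimal Open, decimal High, decimal Low, decimal Close);

public static class OhlcAggregator
{
    public static IEnumerable<Bar> Aggregate(IEnumerable<Tick> ticks, int windowMinutes) =>
        ticks
            // Floor each timestamp to the start of its N-minute window.
            .GroupBy(t => new DateTime(
                t.Timestamp.Year, t.Timestamp.Month, t.Timestamp.Day,
                t.Timestamp.Hour, t.Timestamp.Minute / windowMinutes * windowMinutes, 0))
            .Select(g => new Bar(
                g.Key,
                g.First().Price,      // Open: first tick in the window
                g.Max(t => t.Price),  // High
                g.Min(t => t.Price),  // Low
                g.Last().Price));     // Close: last tick in the window
}
```

Nothing in this logic depends on when it runs, which is exactly why reviewing it turns up nothing.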
**AI's Response:**
The AI checked:
- Aggregation logic → correct
- Time window calculations → correct
- Data structures → correct
- Edge cases → handled
Every review found nothing wrong. The code was logically sound.
But tests kept failing. Sometimes. Not always.
AI was stuck. It had examined everything in its context multiple times. No issues found.
**The Human Intervention:**
"Could the execution time affect the results?"
This single question injected new context.
**The Discovery:**
Test data was generated from the system clock: the code used `DateTime.Now` to create test fixtures.
- Run at 10:01 → 5-minute window aligns one way
- Run at 10:03 → 5-minute window aligns differently
The test wasn't flaky. It was time-dependent. Same logic, different execution moments, different boundary conditions.
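The original fixture isn't shown in this story, so here is a hypothetical reconstruction of the failure mode; the five-tick fixture and the 10:01/10:03 start times are illustrative:

```csharp
using System;
using System.Linq;

var start = DateTime.Now; // e.g. 10:01 on one run, 10:03 on another

// The fixture generates five 1-minute ticks starting "now".
var ticks = Enumerable.Range(0, 5)
    .Select(i => start.AddMinutes(i))
    .ToList();

// Floor each tick to the start of its 5-minute window and count
// how many ticks land in each window.
var windowCounts = ticks
    .GroupBy(t => new DateTime(t.Year, t.Month, t.Day, t.Hour, t.Minute / 5 * 5, 0))
    .Select(g => g.Count())
    .ToList();

// Started at 10:01 → minutes 01-05 split 4/1 across the 10:00 and
// 10:05 windows. Started at 10:03 → minutes 03-07 split 2/3.
// Same fixture logic, different bars, intermittently failing assertions.
Console.WriteLine(string.Join("/", windowCounts));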
**Why AI Missed It:**
The system clock wasn't in the conversation. It wasn't in the code review scope. It wasn't mentioned in the requirements.
It was outside AI's context entirely.
No amount of "check harder" would have found it. The AI needed someone to illuminate the blind spot.
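One common remedy, sketched below under the assumption that the fixture can take a clock as a dependency (this is a standard pattern, not necessarily what this team did): make time explicit instead of reading the wall clock.

```csharp
using System;

public interface IClock
{
    DateTime Now { get; }
}

// Production: read the real system clock.
public sealed class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

// Tests: pin the clock so window alignment is identical on every run.
public sealed class FixedClock : IClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) => _now = now;
    public DateTime Now => _now;
}

// Usage in a test fixture (GenerateTicks is a hypothetical helper):
// var clock = new FixedClock(new DateTime(2024, 1, 15, 10, 0, 0));
// var ticks = GenerateTicks(clock.Now);
```

On .NET 8+, the built-in `TimeProvider` abstraction serves the same purpose without a hand-rolled interface.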
## Context-Outside Events
This pattern has a name: *context-outside events*.
| In Context | Outside Context |
|---|---|
| Source code | System environment |
| Test code | Execution timing |
| Error messages | Infrastructure state |
| Documentation | Runtime dependencies |
When AI spins on a problem without progress, ask: What isn't AI seeing?
The answer is usually something environmental, temporal, or infrastructural—things that don't appear in code.
## Your Job: Expand the Frame
This clarifies what humans uniquely contribute:
| AI Strength | Human Strength |
|---|---|
| Deep analysis within context | Awareness beyond context |
| Pattern matching in visible data | Intuition about invisible factors |
| Exhaustive checking | "What if it's not in the code?" |
You don't need to out-analyze AI. You need to expand the frame.
## In Practice: Good vs. Bad Guidance
### Good: Expanding Context
"Consider that this runs in a containerized environment
with shared network resources."
"The database connection pool is limited to 10 connections."
"This service restarts nightly at 3 AM."
These add context. They illuminate factors AI wouldn't know to consider.
### Bad: Micromanaging Implementation
"Use a for loop, not a foreach."
"Put the null check on line 47."
"Name the variable 'tempCounter'."
These control implementation. They remove AI judgment without adding visibility.
### The Difference
| Question | Micromanagement | Scope Expansion |
|---|---|---|
| What are you specifying? | Implementation details | Environmental context |
| What's the effect on AI? | Constrains choices | Expands awareness |
| When is it useful? | Rarely | When AI is stuck |
| What does it add? | Your preferences | Your visibility |
## Signs AI Needs Context, Not More Analysis
Watch for these patterns:
- Same checks repeated with same results
- "I don't see any issues in the code"
- Intermittent failures with no pattern
- Works locally, fails in CI
- Passes alone, fails in suite
These all suggest: the cause is outside AI's current context.
Your job: figure out what's outside, and bring it in.