DEV Community

Nova Elvaris
The Blast Radius Check: Estimate AI Code Impact Before You Merge

Every AI-generated code change has a blast radius — the set of things that could break if the change is wrong. Most developers check if the change works. Fewer check how far the damage would spread if it doesn't.

What's a Blast Radius?

Borrowed from incident response: blast radius is how much of your system is affected if this change fails.

  • A CSS tweak? Blast radius = one page.
  • A database migration? Blast radius = the entire app.
  • An AI-generated utility function? Depends on how many places call it.

The 2-Minute Check

Before merging any AI-generated change, I run through these questions:

1. What calls this code?

```bash
# Find all imports/references to the changed file
grep -r "import.*from.*'./changed-file'" src/
grep -r "require.*changed-file" src/
```

If the answer is "3 test files," the blast radius is low. If it's "the auth middleware that every route uses," you need more scrutiny.
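That count is easy to turn into a script. A minimal sketch, where `changed-file` is a placeholder for whatever module the AI touched, and the thresholds mirror the low/medium/high bands in the prompt later in this post:

```bash
# Classify a caller count into low/medium/high bands.
rate_blast_radius() {
  if [ "$1" -lt 3 ]; then echo "low"
  elif [ "$1" -le 10 ]; then echo "medium"
  else echo "high"
  fi
}

# Count files under src/ that reference the changed module, then classify.
# "changed-file" is a placeholder for the module the AI actually modified.
count=$(grep -rl "changed-file" src/ 2>/dev/null | wc -l | tr -d ' ')
echo "blast radius: $(rate_blast_radius "$count") ($count callers)"
```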

2. What happens if it throws?

AI-generated code often has optimistic error handling — or none at all. Ask yourself:

  • Is there a try/catch around the caller?
  • Does the caller have a fallback?
  • Will an exception bubble up to the user?
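You can't fully automate this answer, but a crude grep heuristic flags the worst cases: callers whose file contains no `try` at all. `check_guard` and `changedFunction` are hypothetical names for illustration:

```bash
# check_guard FILE FUNC — crude heuristic: does FILE call FUNC, and does
# the file contain a try block at all? A file with zero "try" definitely
# lets exceptions bubble up; a file with one may still be unguarded at
# the actual call site, so treat "guarded" as "worth a closer look".
check_guard() {
  if ! grep -q "$2" "$1"; then
    echo "no call"
  elif grep -q "try" "$1"; then
    echo "guarded (verify the call site)"
  else
    echo "unguarded"
  fi
}

# Apply it to every file that references the function.
for f in $(grep -rl "changedFunction(" src/ 2>/dev/null); do
  echo "$f: $(check_guard "$f" "changedFunction(")"
done
```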

3. Is it reversible?

  • Easy to reverse: Feature flag, config change, new endpoint
  • Hard to reverse: Database migration, deleted data, changed API contract
  • Impossible to reverse: Sent emails, published webhooks, charged payments

If the blast radius is high and the change is hard to reverse, that's your signal to add extra validation before merging.

A Prompt That Helps

I add this to my code generation prompts when working on shared code:

```
Before writing the implementation:
1. List every file that imports or calls the function being changed
2. For each caller, describe what would happen if the new code throws
3. Rate the blast radius: low (< 3 callers), medium (3-10), high (> 10)
Then proceed with the implementation.
```

This forces the model to think about impact before writing code. The quality of the implementation noticeably improves because the model has already mapped the dependency graph.

Combining With Your Review Process

I slot the blast radius check between code generation and merge:

  1. Generate — AI writes the code
  2. Blast radius check — 2-minute impact assessment
  3. Test — run existing + new tests
  4. Merge — if low blast radius, merge to main. If high, merge to staging first.
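The decision in step 4 is mechanical enough to script. A minimal sketch, assuming a `src/` layout and that `staging` and `main` are your branch names; `changed-file` is again a placeholder:

```bash
# Pick a merge target from the caller count: a high blast radius goes to
# staging first, everything else can go straight to main.
merge_target() {
  if [ "$1" -gt 10 ]; then echo "staging"; else echo "main"; fi
}

# "changed-file" is a placeholder for the module the AI modified.
count=$(grep -rl "changed-file" src/ 2>/dev/null | wc -l | tr -d ' ')
echo "$count callers -> merge to $(merge_target "$count")"
```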

The check itself is fast. The value is in catching the cases where you'd have said "looks fine" but the change quietly affects 15 callers.

The Rule of Thumb

If you can't list every caller from memory, check the blast radius. It's the difference between "oops, easy fix" and "oops, 3 hours of debugging."

What's your process for assessing impact before merging AI changes? Curious if anyone has automated this step.
