
Phil Rentier Digital

Posted on • Originally published at rentierdigital.xyz

Blind Burn: What Happens When AI Builds Faster Than You Can Track

The other day, I tried to list everything running on my servers. Not because something broke. Just to know.

I couldn't.

There are crontabs on the VPS machines. Scheduled workflows in n8n. Supabase pg_cron jobs firing every six hours. Convex scheduled functions I set up in February and haven't thought about since. GitHub Actions that trigger on events I barely remember defining. API calls going out every hour to services I'm not even sure I still need. All of it built with Claude Code. All of it working fine.

And me, the guy who built the plane, unable to tell you what's in the cargo hold.

Two weeks ago, Addy Osmani (the Google engineer behind Chrome DevTools, Lighthouse, and Core Web Vitals) put a name on something close to this. He called it comprehension debt: the growing gap between what your codebase contains and what any human actually understands. His article went everywhere. It's about code. This one is about what's running. Because code debt sits in your repo and waits. The infrastructure version doesn't wait. It charges you by the hour. I'm calling it blind burn.

TLDR: AI lets you build and automate 100x faster. Your understanding of what's actually running doesn't keep up. This article names the problem (blind burn), shows what it costs, and gives you 4 rules to stop flying blind.

AI Builds Fast. Your Map Doesn't Update.

Claude Code is absurdly good at building things. You describe a workflow, it builds the workflow. You ask for a cron job, you get a cron job. You say "automate the distributor feed sync," and twenty minutes later there's a scheduled pipeline that fetches, transforms, and pushes product data to three distribution channels.

It works. You move on. That's the trap.

Every "build me a workflow" adds a scheduled task you'll forget in three weeks. Every "add a cron for the inventory update" is another line in a crontab you won't read again. I turned Claude Code into my n8n architect a few months ago. Every workflow it built (good ones, solid ones) added scheduled triggers to a maze I was no longer mapping.

And I felt productive the whole time. That's the part that gets you. You're not slacking off. You're shipping. You're building real things that work in production. You just also happen to be creating a graveyard of timers, triggers, and cron entries that nobody will ever review again.

Admit it, you do the same thing.

To be clear: the tool is not the problem. Claude Code does exactly what you ask, and does it well. The problem is me not keeping a map. And I didn't keep one because when you're building at 100x speed, cartography feels like a waste of time.

It's not.

What Blind Burn Actually Costs

Osmani's concept is sharp. Teams ship AI-generated code faster than anyone can read it. Tests pass, PRs look clean, and the gap between "deployed" and "understood" grows in silence.

But code debt is lazy. It sits in your repo being ugly and inert, waiting for someone to touch it. The infrastructure version has a meter running.

Every orphan cron job is a micro-invoice. Every forgotten API timer is bandwidth on your card. Every scheduled function firing into a service you deprecated last month is compute billed while you sleep. That's blind burn. It doesn't announce itself. It accumulates, like a subscription you forgot to cancel, except it's forty of them.

The signals are starting to show up. @cmd_alt_ecs on X, a few weeks ago: 60+ cron jobs, $50 a day going to agents doing nothing useful anymore. @tahseen_rahman: 15 autonomous crons running for two months, the bottleneck became self-healing at 3 AM. @aleks_blanche, maybe the sharpest take: "without a control plane, you don't have agents. You have automation debt."

Osmani talks about enterprise teams with dozens of engineers. My scale is indie: one person, a few VPS boxes. But the mechanism is identical. The speed of production outruns the speed of comprehension. The difference? In an enterprise, someone eventually catches it. When you're solo, nobody audits your mess for you.

Monitoring Answers the Wrong Question

So you set up Cronitor. Or Better Stack. Or Healthchecks.io. Good for you. You now know if a job ran.

You still don't know if it should exist.

It's like having a receipt for every purchase you made last year but no idea if you still use any of it. Monitoring tracks execution. What you need is visibility into purpose. Whether the job still makes sense. Whether it duplicates something else. Whether the API it calls even returns useful data anymore or just 200 OKs to an endpoint nobody maintains.

That question ("should this job exist?") requires context only you have. Or had, six months ago, before you shipped fourteen more automations and stopped thinking about it.

No SaaS will answer that for you.

What I Built to See My Own Ecommerce Stack Again

So I built a control panel. Not a SaaS. Not a side project for ProductHunt. A survival tool, for a WooCommerce pipeline that got out of hand because Claude Code is too good at its job.

First thing it gives me: a weekly calendar view of every scheduled job across all systems. Not a list in a terminal. A visual grid showing what fires when, color-coded by type (product indexing, distributor feed sync, price monitoring, order reconciliation, inventory updates, partner API calls). One glance and I see the problems. The 4 PM block where six syndication jobs all fight for the same API quota. The Thursday morning cluster that makes no sense anymore because I killed the upstream data source two months ago. The Tuesday gap where nothing runs between 08:00 and 14:00, which is either fine or a broken trigger I never noticed.

Job scheduler revealing overlapping AI tasks and hidden scheduling inefficiencies across the week.

Second thing: an API cost tracker. Every external call, every provider, every dollar. Last month: $14.46 total across 598 API calls on three providers. Not a scary number. But the value isn't the total. The value is seeing that one provider eats 68% of the budget for a task I could handle locally. And catching it before it drifts, not after.
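For illustration, that "68% of the budget" flag falls out of a few lines of Python. The per-provider split below is invented (only the monthly total echoes the real $14.46 figure), and the 60% threshold is an arbitrary cutoff I picked for the sketch:

```python
# Sketch: spot the provider that dominates monthly API spend.
# The per-provider dollar amounts are made up for illustration.

spend = {"DataForSEO": 9.83, "OpenRouter": 3.10, "RapidAPI": 1.53}
total = sum(spend.values())

for provider, usd in sorted(spend.items(), key=lambda kv: -kv[1]):
    share = usd / total * 100
    flag = "  <-- candidate to handle locally" if share > 60 else ""
    print(f"{provider:<12} ${usd:>5.2f}  {share:4.1f}%{flag}")
```

The total isn't the interesting number; the skew is. A sorted share column makes the one provider worth replacing jump out before it drifts upward.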

Monthly API spending breakdown across three providers reveals cost distribution and usage patterns.

Now think about the guy at $50 a day. Different scale, same blindness. His jobs didn't start expensive. They accumulated. Without a dashboard, $2 becomes $10 becomes $50 and you never see the curve because there is no curve to look at. Just a credit card statement at the end of the month and a vague feeling that something is off.

I had that feeling for weeks before I built the panel. My wife would have called it intuition. I call it looking at my Hetzner invoice with one eye closed, like checking your weight after Christmas. 😬

Nobody sells comprehension of YOUR system. You have to build it yourself.

The Control Tower Framework: Four Rules to Stop Flying Blind

After cleaning my own mess, I distilled the approach into four principles.

Rule 1: Inventory everything in one place. Crontabs on VPS. n8n scheduled workflows. Supabase pg_cron. Convex scheduled functions. GitHub Actions. App-level timers. All of it, one list. A job that isn't in your inventory doesn't exist for you. It exists for your invoice, though. And your invoice has better memory than you do.
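A minimal sketch of Rule 1 in Python, assuming each source can be dumped to text. The crontab line and the pg_cron entry below are made-up examples; the point is the shape of the inventory, one flat list tagged by source:

```python
# Sketch: merge scheduled jobs from several sources into one inventory.
# In practice you'd feed this from `crontab -l` on each VPS, the n8n API,
# and Supabase's cron.job table; the entries here are invented.

def parse_crontab(text, source):
    """Turn raw crontab lines into inventory records, skipping comments."""
    jobs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split(None, 5)  # 5 schedule fields, then the command
        jobs.append({"source": source,
                     "schedule": " ".join(fields[:5]),
                     "job": fields[5]})
    return jobs

inventory = []
inventory += parse_crontab("0 */6 * * * /opt/sync/feed.sh", "vps-1 crontab")
inventory += [{"source": "supabase pg_cron", "schedule": "0 3 * * *",
               "job": "reindex_products()"}]

for job in inventory:
    print(f"{job['source']:<18} {job['schedule']:<12} {job['job']}")
```

Once everything flows into one list, the other three rules become operations on that list rather than a tour of five dashboards.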

Rule 2: Visualize by time, not by tool. A list of cron jobs per machine is useless for spotting problems. You need a weekly calendar showing WHEN things fire, not a table of WHERE they live. Time is the axis that reveals collisions, gaps, and zombies. Tools don't collide with each other. Schedules do.
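Here's a toy version of that time-axis view, with invented jobs and fire hours, to show why collisions only appear once everything lands on one schedule:

```python
# Sketch: lay jobs out by the hour they fire instead of by machine.
# Jobs, tools, and hours are invented for illustration.

from collections import defaultdict

jobs = [
    ("feed-sync",       "vps-1",   16),  # hour of day the job fires
    ("price-monitor",   "n8n",     16),
    ("partner-push",    "pg_cron", 16),
    ("order-reconcile", "n8n",      3),
]

by_hour = defaultdict(list)
for name, tool, hour in jobs:
    by_hour[hour].append(name)

for hour in sorted(by_hour):
    names = by_hour[hour]
    marker = "  <-- collision" if len(names) > 1 else ""
    print(f"{hour:02d}:00  {', '.join(names)}{marker}")
```

Grouped by tool, these four jobs look unrelated. Grouped by hour, the 16:00 pile-up is visible in one line.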

Rule 3: Track cost per job, even rough. How many API calls does this job trigger? How much compute does it eat? If you can't ballpark what a job costs, you can't decide if it's worth running. The number doesn't need to be precise. It needs to exist. Zero visibility is how $2/day becomes $50.
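The ballpark is a few lines of arithmetic. Call counts and per-call rates below are invented, not real provider pricing:

```python
# Sketch: rough monthly cost per job from daily call counts and an
# approximate per-call price. All figures are illustrative.

jobs = {
    "serp-refresh":  {"calls_per_day": 48,  "usd_per_call": 0.006},
    "llm-summaries": {"calls_per_day": 12,  "usd_per_call": 0.02},
    "stock-check":   {"calls_per_day": 240, "usd_per_call": 0.0002},
}

for name, j in jobs.items():
    j["monthly_usd"] = round(j["calls_per_day"] * j["usd_per_call"] * 30, 2)
    print(f"{name:<14} ~${j['monthly_usd']}/month")
```

None of these numbers is precise, and that's fine. A wrong-by-30% estimate still tells you which job to question first; no estimate tells you nothing.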

Rule 4: Audit quarterly. Three questions per job. Does it still run? Does it still serve a purpose? Can anyone (meaning you) explain why it was created? One "no" and the job is a zombie. Kill it. Dead crons don't send invoices.
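The audit reduces to three booleans per job. A sketch, with made-up jobs and answers:

```python
# Sketch of the quarterly audit: three yes/no answers per job.
# Any single "no" flags a zombie. Jobs and answers are invented.

audit = {
    "inventory-update": {"still_runs": True, "has_purpose": True,  "explainable": True},
    "old-feed-mirror":  {"still_runs": True, "has_purpose": False, "explainable": False},
}

zombies = [name for name, answers in audit.items()
           if not all(answers.values())]

for name in zombies:
    print(f"zombie: {name} -- kill it")
```

Note that `old-feed-mirror` still runs. Execution is exactly the signal monitoring would give you, and it's the one that doesn't matter here.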

The same spec-first discipline I apply to code with prompt contracts (define the contract before you let AI execute) works for infrastructure. Spec what should run. Track what does run. Diff the two.
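That spec-vs-running diff is two set operations. Both sets below are invented; in practice the spec lives in a file you edit deliberately and the running set comes out of your inventory:

```python
# Sketch: "spec what should run, track what does run, diff the two."
# Job names are illustrative.

should_run = {"feed-sync", "inventory-update", "order-reconcile"}
does_run   = {"feed-sync", "inventory-update", "old-feed-mirror"}

unaccounted = does_run - should_run   # running but never specced: audit these
missing     = should_run - does_run   # specced but silent: broken triggers?

print("unaccounted:", sorted(unaccounted))
print("missing:", sorted(missing))
```

Both directions of the diff matter: the unaccounted jobs are your blind burn, and the missing ones are the Tuesday gaps you never noticed.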

Your infra deserves the same rigor as your codebase.

Probably more. It's the part that charges you money.

The Next Bottleneck Isn't Building, It's Understanding What You Already Built

Next year, everyone will have Claude Code. Everyone will know how to automate. And we'll start hearing about the first AI burnouts. Not human ones. Infrastructure ones. The blind burns.

The next competitive edge won't be building faster. It'll be understanding what you already built.

AI gave you the construction superpowers. The comprehension ones are on you.

Sources

Addy Osmani, Comprehension Debt: The Hidden Cost of AI-Generated Code (Medium, March 2026)

(*) The cover is AI-generated. The servers in the picture understand themselves better than I understand mine.
