More automations running. More agents deployed. More pipelines humming in the background.
I run about a dozen automated jobs. Daily briefings, proposal generation, content pipelines, data syncing, monitoring alerts. They handle a lot.
But the biggest improvement to my workflow this year wasn't adding more automation. It was getting honest about where my thinking actually matters.
You Have a Token Budget Too
LLMs have context windows. Feed in too much noise and the signal degrades. The output gets worse even though you gave it more to work with.
Human attention works the same way. I have maybe 4 good hours of focused thinking per day. When I spend those hours reviewing cron output or formatting documents or triaging alerts that resolve themselves, I'm burning tokens on low-value work.
The quality of my actual decisions goes down. Not because the decisions got harder, but because I already used up my thinking budget on stuff that didn't need me.
Where I Stopped Spending
I used to review my morning briefing line by line. Check every data point, verify every summary. Then I realized: if the briefing is wrong, I'll notice when the information doesn't match reality later that day. The cost of a slightly wrong briefing at 6:30 is near zero. The cost of spending 20 minutes checking it every morning is real.
Same with monitoring. I had alerts for everything. Cache refreshes, API response times, sync completions. Most of them were informational, not actionable. I stripped it down to alerts that require a decision: something broke, something is about to expire, something needs my approval before it touches an external system.
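That triage rule reduces to a one-line filter. Here's a minimal sketch — the category names are hypothetical placeholders, not from my actual stack:

```python
# Only alerts that demand a decision get through; informational
# events are simply dropped from the notification path.
ACTIONABLE = {"broke", "expiring", "needs_approval"}

def should_notify(alert: dict) -> bool:
    """Return True only when the alert requires a human decision."""
    return alert.get("category") in ACTIONABLE

alerts = [
    {"category": "sync_completed"},   # informational: dropped
    {"category": "expiring"},         # something about to expire: kept
    {"category": "needs_approval"},   # touches an external system: kept
]
to_send = [a for a in alerts if should_notify(a)]
```

The entire design decision lives in that one set: everything else is noise by definition.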
Data syncing runs on a schedule. If it fails, I get one alert. I don't watch it run. I don't check the logs unless the alert fires.
First drafts of anything. Cover letters, content outlines, research summaries. The AI produces a version. Sometimes it's good enough. Sometimes I rewrite half of it. But I never start from a blank page anymore, and that alone spares me the hardest kind of thinking: getting started.
Where I Still Spend Every Token
Scoping client work. An AI can research a company, summarize a job posting, draft a proposal. But deciding whether the project is actually worth pursuing? Whether the client's problem is what they say it is? That's pattern recognition built from years of seeing projects go sideways. No automation for that.
Choosing what to build next. I have a backlog of 50 things I could automate, improve, or ship. The AI can't tell me which one moves the needle this week. That decision depends on context it doesn't have: what conversations I had yesterday, what I'm optimizing for this month, what feels right.
Anything with my name on it that reaches another person. Proposals get edited. Posts get rewritten. Client messages get reviewed word by word. The AI drafts. I decide what actually represents me.
System design decisions. Where to draw the boundary between automatic and manual. What gets a human checkpoint and what runs unsupervised. These are the highest-leverage decisions in any AI system, and they're entirely human.
The Honest Ratio
Maybe 20% of my working hours involve focused, high-stakes thinking. The rest is execution, coordination, and maintenance.
Before I built these systems, that ratio was reversed: 80% thinking, 20% execution, and half of that thinking went to tasks that didn't deserve it.
The goal was never "automate everything." It was "protect the 20% that matters and make sure I'm not exhausted when I get there."
The Shift
This isn't about working less. I work the same hours. But the distribution changed.
I spend less time on decisions that don't compound. I spend more time on the ones that do. Client relationships, system architecture, strategic bets. The stuff where being sharp at 10 in the morning instead of burned out from triaging alerts actually changes the outcome.
The question isn't how much your AI can do. It's whether you're spending your own thinking tokens on the right things.
Where are you still spending attention that you probably shouldn't?
I help teams figure out where AI should run unsupervised and where humans still need to be in the loop. If that's a question your team is working through, let's talk: cal.eu/reneza