We fear AI. We fear what it will take from us: our jobs, our knowledge, our problem-solving abilities. We work tirelessly to understand what AI will give us, and what it will have taken from us in return.
Agentic AI, objectively, is a multiplier in both the positive and negative sense.
A multiplier cuts both ways. Poor practices implemented 10x faster mean rapid degradation. Good practices implemented 10x faster mean we have room to refine them, measure them, and grow into better ones with increasing accuracy. Same tool, opposite outcomes. The difference is what we choose to multiply.
The Mechanic
If you give a mechanic a job with rudimentary tools, they can get it done as needed. Give that same mechanic better tools, someone to bounce ideas off of, and a chance to plan the work a little better, and output speeds up while quality edges upward. But if that same mechanic gets lazy and starts handing off every task to that helper, stops taking care of the new tools, and decides planning takes too long, quality drops sharply while output speed stays the same.
The Formula
I'd like to think this can be reduced to a simple, explainable formula. The phenomenon is known in our industry as "Automation Bias" or "Agentic Decay."
The Variables
To model this, we need to define the core variables that govern an agent's behavior:
- C = Core Capability. The base intelligence of the underlying model or person.
- T = Tool Integrity. The quality, precision, and maintenance of the tools available.
- P = Planning Depth. The effort spent on reasoning before acting.
- D = Delegation Rate. How often the agent hands off tasks to external tools or sub-agents instead of doing the work natively.
- V = Verification. The oversight or evaluation applied to a tool's output before the response is finalized: self-correction loops, human-in-the-loop review.
The formula that determines the quality of output then becomes:
Q = C + (P * T) - D(1 - V)
The Bonus: Good tools multiplied by good planning (P * T) elevate the output far beyond the base capability.
The Trap: If verification drops to zero (V = 0), the delegation penalty subtracts in full from the agent's baseline. The agent leans on tools it no longer checks, and the result is fast but severely degraded output.
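To see the two regimes side by side, here is a minimal sketch in Python. Treating each variable as a 0-to-1 score (except C, the baseline) is my own assumption, as are the sample values; the formula defines the terms but not their scales.

```python
# A minimal sketch of Q = C + (P * T) - D(1 - V).
# The 0-to-1 scales and the sample values below are illustrative
# assumptions, not measurements.

def output_quality(C: float, P: float, T: float, D: float, V: float) -> float:
    """Quality of output under the formula above.

    C: core capability (the baseline)
    P: planning depth, 0-1
    T: tool integrity, 0-1
    D: delegation rate, 0-1
    V: verification, 0-1 (1 = every handoff gets checked)
    """
    bonus = P * T          # good planning with good tools lifts the baseline
    penalty = D * (1 - V)  # unchecked delegation eats into it
    return C + bonus - penalty

# The Bonus: careful planning, solid tools, verified handoffs.
print(output_quality(C=1.0, P=0.9, T=0.8, D=0.5, V=0.9))  # ~1.67

# The Trap: the lazy mechanic. No planning, neglected tools,
# heavy delegation, zero verification.
print(output_quality(C=1.0, P=0.2, T=0.3, D=0.9, V=0.0))  # ~0.16
```

Same formula, same machinery. The only difference is what got multiplied.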
The Human Condition
Lived experience is the one thing AI lacks. Our accumulated experience doesn't come baked into a model; an LLM hasn't lived through the problems it is asked to solve. This is why AI companies spend so much time developing "tools," and why they don't share them. It's why developers tell each other that Claude Code has better "tool calling" than Codex does. And it's why we Windows users keep getting complaints from our AI agents that we don't have rg installed.
What we benefit from most in an AI agent is its ability to compute. Data centers are racing to keep up with the demand for compute, while the rest of us keep handing AI more and more of our computing to do. That inhuman ability to recognize patterns and process information quickly is what makes our AI assistants so desirable.
This means the driver needs to be human. If there is no reason for AI to be human, there is absolutely a need for AI to work with, and for, humans.
The Archi-tech-t (get it)
If a human has additional computational power at hand, their capacity for creative solutions expands. The potential for better design increases. Our planning depth grows. Most people don't realize how much the mechanics of writing code pull them away from the creative part of completing a task.
Think about the last time you sat down to solve a problem and immediately started typing. You skipped the part where you sketch it out, argue with yourself about the approach, or wonder if the problem you're solving is even the right one. The typing felt like progress, but most of it was translation work. Syntax. Boilerplate. The stuff a compiler used to complain about.
That's the part the AI is good at. Which means the part that was left — the part that was always the actual job — is the architecture. The design. The why before the how. Hand off the translation and suddenly there's room to think again.
The architect was always the role. We just didn't have time for it.
Asleep at the Wheel
Bring back D(1 - V).
If humans are the drivers, what happens when we suffer from automation bias on a societal scale? Easy. We become the lazy mechanic. Not the agent. Us. We stop checking the tools. We stop planning. We accept the output because it came back fast, and fast feels like progress.
The penalty in the formula stops being about the AI at that point. It's about us. V drops to zero on the human side of the equation and the delegation penalty starts eating our baseline instead of the agent's.
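Plugging that back into the sketch from earlier makes the cliff visible. The numbers are still illustrative; the only thing that changes between the two calls is the human's verification.

```python
# Same agent, same tools, same delegation; the human just
# stops reading what comes back.
print(output_quality(C=1.0, P=0.7, T=0.8, D=0.8, V=0.8))  # ~1.40
print(output_quality(C=1.0, P=0.7, T=0.8, D=0.8, V=0.0))  # ~0.76, below the 1.0 baseline
```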
Here's the thing though: we're never going to win a compute race against a data center. We shouldn't want to. Our job isn't to compete with the machine. It's to stay awake at the wheel, keep the tools sharp, and actually read what comes back.
The Empath
If AI handles the compute, the valuable human skills become the ones AI can't compute.
Empathy. Edge-case intuition. Understanding what the end user is actually trying to do versus what they asked for. Knowing when a technically correct answer is the wrong answer because the person on the other end is tired, frustrated, or working around a constraint they forgot to mention.
That's the shift. The talent in the industry is separating: computation on one side, and being human, creative, and thoughtful on the other. Both still matter, but they're no longer the same job. The next frontier isn't writing better code. It's writing better human-to-human solutions, with AI as the lever.
Point the compute at the right problem, and the rest is a very human job.