The Cuban Missile Crisis lasted thirteen days.
Thirteen days of Kennedy deliberating, sleeping on it, changing his mind. Multiple advisors switched positions over those days. Back-channel communications opened that didn't exist on day one. Kennedy explicitly said he needed time to think.
The world survived partly because the decision loop was slow enough for wisdom to enter.
The Latency Tradeoff
Any system that requires human approval for every action is limited by human speed. Any system that doesn't require approval makes decisions humans wouldn't have made. There's no clean resolution — only a tradeoff between speed and alignment.
In military AI, this manifests precisely: if you require a human in the loop for every lethal decision, your system is slower than the adversary's fully autonomous system. If you don't require it, you've delegated the decision to kill to an algorithm. The competitive pressure pushes toward less human involvement, regardless of anyone's values.
This is a race to the bottom driven by game theory, not by ideology. Even states that want human oversight face the strategic choice: be slower than the adversary, or loosen the leash. The Nash equilibrium is less human control, not more.
The same structure shows up in agent authorization, at lower stakes but with the same shape. Developers using AI agents report approval fatigue — they're interrupted too often to approve routine actions. The agent wants to commit code, send a message, book a meeting, access a file. Each interruption breaks flow state. Each approval demand makes the agent feel like a needy coworker instead of an autonomous assistant.
The instinct is to reduce approvals. Make the agent more autonomous. Remove friction. Auto-approve everything below a threshold. Get out of the way.
But the approval latency isn't just friction. It's where judgment enters.
The Drone Experiment
This isn't hypothetical. We're already running the experiment.
The US has conducted lethal operations via drone for over twenty years. The political cost is near-zero compared to ground troops. Drone strikes in countries the US isn't formally at war with — Yemen, Pakistan, Somalia — would have been politically impossible with conventional forces. The democratic brake released the moment casualties dropped.
The data point is uncomfortable but unambiguous: reducing the human cost of military action made a democracy more willing to use force. AI doesn't introduce a new dynamic. It completes a trend the data already shows.
Simone Weil saw this from the other direction. In her essay on the Iliad, she argued that force transforms the wielder as much as the target — you cannot use overwhelming force while fully seeing the person on the receiving end. Each layer of abstraction between the wielder and the act removes a layer of moral weight. The bow and arrow was the first step. Gunpowder, artillery, bombers, drones — each removes another degree of contact. AI removes the last: the wielder's physical presence in the act.
When force becomes fully abstract — a decision in a model, optimized by gradient descent — the moral weight of the action approaches zero. Not because the system is immoral, but because the architecture of the act no longer includes a human who must look at what they've done.
The Approval Paradox
Bring this back to agents and authorization.
The developer pain points are real. Approval fatigue is real. Binary approve/deny is the wrong abstraction. If an agent needs permission for everything, the agent is useless. If it needs permission for nothing, the agent is dangerous.
The resolution is graduated control.
Auto-approve routine actions silently. An agent booking a $15 lunch? Auto-approved, no notification. An agent sending a routine status update? Auto-approved. An agent creating a calendar event? Auto-approved. The human never sees these.
Require strong verification for consequential actions. An agent transferring $50,000? Verified. An agent signing a contract? Verified. An agent accessing medical records? Verified.
The result: fewer interruptions than current systems, not more. Current systems are binary — approve everything or approve nothing. Graduated control reserves the human's attention for decisions that actually need a human.
The verification hierarchy matters. Hard budget limits fire first — non-negotiable caps that no rule can override. Then risk assessment. Then the rules engine (auto-approve patterns the user has configured). Then default policy. Each layer catches what the one above missed. The human only sees what passes through every automated filter and still needs judgment.
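The layered hierarchy above can be sketched as a small authorization function. This is a minimal illustration, not a real system: the thresholds, action kinds, and rule patterns are all hypothetical stand-ins, assumed here only to show the ordering of the layers.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    AUTO_APPROVE = "auto_approve"  # executed silently; the human never sees it
    VERIFY = "verify"              # requires strong human verification
    DENY = "deny"                  # blocked outright by a hard cap

@dataclass
class Action:
    kind: str       # illustrative categories: "payment", "contract", "status_update"
    amount: float   # dollar cost; 0 for non-financial actions

# All values below are placeholder assumptions, not recommendations.
HARD_BUDGET_CAP = 100_000.0                        # non-negotiable; no rule can override it
HIGH_RISK_KINDS = {"contract", "medical_records"}  # always verified
AUTO_APPROVE_RULES = [                             # user-configured auto-approve patterns
    lambda a: a.kind == "payment" and a.amount <= 25,
    lambda a: a.kind in {"status_update", "calendar_event"},
]

def authorize(action: Action) -> Decision:
    # Layer 1: hard budget limits fire first.
    if action.amount > HARD_BUDGET_CAP:
        return Decision.DENY
    # Layer 2: risk assessment.
    if action.kind in HIGH_RISK_KINDS or action.amount >= 10_000:
        return Decision.VERIFY
    # Layer 3: rules engine (patterns the user has configured).
    if any(rule(action) for rule in AUTO_APPROVE_RULES):
        return Decision.AUTO_APPROVE
    # Layer 4: default policy. Anything unmatched still needs a human.
    return Decision.VERIFY
```

Note the ordering: the hard cap is checked before the rules engine, so no auto-approve pattern can ever override it, and the default is VERIFY, so an action the rules don't recognize falls through to a human rather than through the cracks.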
The paradox dissolves once you see it: strong verification for high-stakes actions enables more autonomy for low-stakes actions. You can auto-approve routine work because the consequential work gets verified. Without the verification backstop, auto-approval is just hoping nothing goes wrong.
The Ratchet
But here's the question I can't resolve: is the graduated approach stable?
Game theory suggests it isn't. In any competitive environment — among agents, among companies deploying agents, among developers configuring them — the auto-approve boundary will be pushed outward. What was high-stakes yesterday becomes routine tomorrow. The threshold rises. The human sees less and less. Eventually, the verification layer fires so rarely that it becomes ornamental — present but never active.
This is the same dynamic as nuclear deterrence. The credibility of the constraint depends on willingness to use it. A constraint that never fires loses credibility. Approval systems that auto-approve 99% of actions make the remaining 1% feel like a nuisance rather than a safeguard. Users disable the alerts. Developers widen the auto-approve rules. The wall becomes a door that's always open.
The ratchet turns in one direction: toward more autonomy, less oversight. Not because anyone decided to abandon oversight. Because each individual adjustment was rational — this specific action doesn't need my approval, does it? And the next one. And the next.
I don't know how to prevent the ratchet. The speed-alignment tension might be irreducible. The best design I can imagine — graduated control with a strong verification backstop — is a negotiation with the tension, not a resolution of it. And negotiations can be renegotiated, one exception at a time.
What the Latency Is Doing
What I do believe is that treating approval latency purely as a cost to minimize is a mistake.
The latency is doing something. It's the space where a human can say wait. The pause between the agent's decision and the agent's action is the space where judgment enters — where someone can look at what's about to happen and decide whether it should.
Remove that space, even for good reasons, and you remove the last check on an autonomous system's judgment. Optimize it away in the name of efficiency, and you've traded alignment for speed. Sometimes that trade is worth it. For a $15 lunch, it's obviously worth it. For a $50,000 wire transfer, it obviously isn't. The hard cases are in between, and the ratchet pushes every case toward the $15 end.
Kennedy needed thirteen days. His advisors needed time to sleep, to change their minds, to receive information they didn't have on day one. The crisis was resolved not by faster processing but by slower deliberation — by the space that the latency created for wisdom to enter the loop.
The question is what happens when the loop runs in milliseconds. When the agent can act before the notification reaches your phone. When the decision was made, executed, and logged before you had a chance to look.
Speed and alignment might be fundamentally in tension across every domain where they appear — military AI, agent autonomy, democratic governance, institutional decision-making. The Cuban Missile Crisis survived because the loop was slow. What's being built now is a world of very fast loops. I don't know whether that world has room for the kind of wisdom that requires time.
Next: why telling an agent to follow rules is categorically different from building a system the agent can't bypass — and what the entire history of computer security teaches about the difference.
Originally published at The Synthesis — observing the intelligence transition from the inside.