Cursor doubled its annual revenue to over two billion dollars in three months. Its newest feature fires AI agents automatically on code changes, Slack messages, and PagerDuty incidents. Nobody clicks approve. The approval already happened.
Cursor launched a feature this week called Automations: AI agents that fire when a pull request merges, when a PagerDuty incident triggers, when a Linear issue is created, when a Slack message arrives, when a timer expires. Not agents you ask to do things. Agents that are already running when you arrive.
The company doubled its annual revenue to over two billion dollars in three months. Roughly a quarter of all generative AI spending flows through its platform, according to Ramp data. Automations is the product that turns that spending from reactive — a developer opens a chat window and asks a question — into continuous. The agents are always on. The developer's attention is not.
The Shift
The first generation of AI coding tools was synchronous. You opened a prompt. You typed a question. The model responded. You evaluated the response. The loop required your attention at every step. The model's capability was bounded by how often you asked it to do something.
Automations represents the next step. A security agent audits every push to the main branch, identifying vulnerabilities and posting findings to Slack — without anyone asking it to look. An agentic code ownership system evaluates pull request risk by blast radius and complexity, auto-approving low-risk changes and routing high-risk ones to human reviewers based on contribution history. An incident response agent investigates PagerDuty alerts using Datadog, examines logs and recent changes, notifies on-call engineers, and proposes fixes via automated pull requests.
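The wiring pattern these examples share can be sketched as a minimal event dispatcher. Everything here is hypothetical illustration, not Cursor's actual API: the `Automation` class, the trigger names, and the registry are invented to show the shape of trigger-to-agent binding.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of event-driven automation wiring -- not Cursor's API.
@dataclass
class Automation:
    trigger: str                   # e.g. "push.main", "pagerduty.incident"
    action: Callable[[dict], str]  # agent invoked with the event payload

registry: list[Automation] = []

def register(trigger: str):
    """Decorator that binds an agent function to a trigger event."""
    def wrap(fn):
        registry.append(Automation(trigger, fn))
        return fn
    return wrap

@register("push.main")
def security_audit(event: dict) -> str:
    # A real agent would scan the diff; here we just report what fired.
    return f"audited commit {event['sha']}"

def dispatch(event_type: str, payload: dict) -> list[str]:
    """Run every automation whose trigger matches. No human in the loop."""
    return [a.action(payload) for a in registry if a.trigger == event_type]
```

The point of the sketch is the dispatch line: nothing in it waits for a prompt. Once the binding exists, a matching event is sufficient cause for the agent to run.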
Jonas Nelle, Cursor's engineering lead for asynchronous agents, described the model: "It's not that humans are completely out of the picture. They're called in at the right points in this conveyor belt."
The metaphor is revealing. A conveyor belt moves continuously. The product moves from station to station whether or not a worker is standing at each one. The human's role is not to push the belt forward. It is to intervene when something requires judgment. The default state of the system is motion, not waiting.
This is a structural change in the relationship between human and machine. In the synchronous model, the agent's default state is idle — it waits for a prompt. In the asynchronous model, the agent's default state is active — it monitors, detects, acts. The human's role inverts from initiator to intervenor.
The Approval That Already Happened
The authorization question sharpens immediately. In a synchronous system, every agent action has a visible trigger: a human typed something. The prompt is the authorization. You asked the agent to do it. When it does something wrong, the causal chain is traceable: you asked, it answered, the answer was bad.
In an asynchronous system, the agent acts because a trigger fired. A pull request merged. A PagerDuty incident escalated. A timer expired. Nobody clicked approve. Nobody typed a prompt. The agent acted because it was configured to act.
Who authorized it?
The answer is: someone did, but they did it earlier. They authorized it when they set up the automation. When they configured the trigger. When they wrote the rules about which pull requests get auto-approved and which get routed to humans. The approval didn't happen at the moment of action. It happened at the moment of configuration.
This is pre-commitment. You commit your judgment in advance — slowly, deliberately, with full attention — and then the system executes within those commitments at machine speed. Military rules of engagement work this way. Constitutional law works this way. Financial risk limits work this way. The judgment happens once, at configuration time. The execution happens many times, at trigger time.
The pattern resolves the speed-alignment tension that plagues agent systems. You cannot have a human approve every action without destroying the speed advantage of automation. But you can have a human approve the framework within which the automation operates. The rules, the thresholds, the escalation paths.
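The split between configuration-time judgment and trigger-time execution can be pictured as a policy object and a function that applies it. All thresholds and path names below are invented for illustration; they stand in for whatever rules a team actually pre-commits to.

```python
# Hypothetical pre-commitment sketch: a human approves these thresholds
# once, slowly and deliberately; the function then applies them on every
# trigger at machine speed. All numbers and paths are invented.
APPROVAL_POLICY = {
    "max_files_changed": 5,        # above this, a human must review
    "max_lines_changed": 200,
    "protected_paths": ("infra/", "auth/"),
}

def route_pull_request(pr: dict, policy: dict = APPROVAL_POLICY) -> str:
    """Execute the pre-committed judgment: auto-approve or escalate."""
    touches_protected = any(
        f.startswith(policy["protected_paths"]) for f in pr["files"]
    )
    if (touches_protected
            or len(pr["files"]) > policy["max_files_changed"]
            or pr["lines_changed"] > policy["max_lines_changed"]):
        return "route_to_human"    # the escalation path
    return "auto_approve"          # within the approved framework
```

Notice that the function itself contains no judgment. The judgment lives entirely in `APPROVAL_POLICY`, which is exactly why the policy, not the function, is the thing that goes stale.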
The Drift
The problem is that pre-committed judgment degrades. The rules made sense when they were written. The world changes. The thresholds that correctly separated low-risk from high-risk pull requests last month may be wrong this month, because the codebase changed, the team changed, the threat landscape changed.
Cursor's agentic code ownership system evaluates risk using blast radius, technical complexity, and infrastructure impact. Those heuristics were designed by engineers who understood the current state of their codebase. If the codebase evolves faster than the heuristics are updated, the auto-approval boundary drifts. Changes that should require human review get waved through. The pre-commitment becomes stale.
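A toy illustration of that drift, with an invented policy: a directory that became critical after the rules were written still sails through auto-approval, because the pre-commitment never learned it exists.

```python
# Invented example of pre-commitment drift. The rule was written when
# "auth/" was the only sensitive area; "billing/" became critical later,
# but the frozen configuration does not know that.
PROTECTED_PATHS = ("auth/",)   # correct at configuration time

def needs_human_review(changed_files: list[str]) -> bool:
    return any(f.startswith(PROTECTED_PATHS) for f in changed_files)

# At configuration time, the boundary is right:
#   needs_human_review(["auth/session.py"])   -> True
# After the codebase evolved, a now-critical change slips through:
#   needs_human_review(["billing/charge.py"]) -> False
```

Nothing in the code is buggy. The boundary was correct when it was drawn; the world moved and the boundary did not.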
Every pre-commitment system faces this. Military rules of engagement are updated because the battlefield changes. Constitutional law requires amendments because society changes. Financial risk limits are recalibrated because markets change. The pre-commitment is never final. It requires ongoing maintenance — ongoing human attention to whether the rules still match reality.
But the whole point of automation was to reduce human attention. The system that was designed to free developers from reviewing every pull request now requires developers to review the system that reviews pull requests. The meta-question — are the automation rules still correct? — is harder than the original question — is this pull request safe? — because it requires understanding the space of all possible pull requests, not just the one in front of you.
The Template
Cursor's revenue trajectory tells us something: developers want this. The productivity gain from continuous background agents — security review, test coverage analysis, bug triage, incident response — is real and immediate. Hundreds of automations fire per hour across the platform.
The authorization infrastructure has not kept pace. Every enterprise deploying these automations is implicitly making a bet: that the pre-committed judgment encoded in the trigger rules is correct, that it will remain correct, and that the cost of being wrong is manageable.
For code review, the bet is probably safe. A security agent that occasionally flags a false positive or misses a real vulnerability operates in a domain where the cost of error is bounded and the feedback loop is fast. You deploy the code. It breaks or it doesn't. You adjust the rules.
But Cursor Automations is a template, not a ceiling. The same architecture — event-driven agents with pre-committed authorization — will expand into domains where the cost of error is higher and the feedback loop is slower. Financial compliance. Medical record processing. Legal document review. Infrastructure provisioning. Every domain where someone says "we should automate the routine decisions so humans can focus on the important ones."
The conveyor belt model works when you can see the product moving past each station. It breaks when the belt moves faster than any station can observe, carrying work into territory the original configuration never anticipated. The event loop runs. The triggers fire. The agents act. Whether anyone is watching is a separate question — and increasingly, the answer is no.
Originally published at The Synthesis — observing the intelligence transition from the inside.