Why do we keep assuming the version control system we use today is built for the workflow that's coming? We've had Git as the absolute standard for 20 years, and every time someone proposes something different, we look at them sideways. Understandable. But something started nagging at me the moment AI agents began writing code in earnest: Git was designed so humans can read diffs. What if that fundamental assumption no longer holds?
The Git Successor and Version Control in the Age of Agents
Last week, a $17M round was announced to build "what comes after Git." The pitch itself isn't new — every two or three years someone shows up with this promise. Pijul, Jujutsu, Fossil, Mercurial back in the day. I know them all. And I always reacted the same way: technically interesting, but Git already won, there's no moving it.
This time I stopped cold.
Not because the underlying technology is necessarily revolutionary. But because the timing feels different. We're right at the moment where coding agents — Copilot, Cursor, Claude with tools, whatever you're using — are starting to make real commits in real repositories. Not snippets, not suggestions: signed commits, opened PRs, code hitting production without a human having typed it letter by letter.
And that's where Git, as it exists today, starts showing its limits. Not because of classic technical limitations. But because every abstraction in Git — the diff, the commit message, blame, the log — assumes there's a human on the other end who wants to understand what happened.
What happens when nobody wants to read that diff because a machine wrote it in 200ms?
The Real Problem: Git as a Human-to-Human Interface
When I loaded the entire Linux kernel history into a database, one of the things that struck me most was the narrative consistency of the commits. Linus, the maintainers, the community — there's a culture of explaining the why in every commit. It's almost a digital oral tradition. Every message is a conversation with the future.
That culture exists because Git was built around a basic assumption: humans are going to read this. The diff is so I understand what changed. The commit message is so you, six months from now, understand why I changed it. git blame is so someone can trace decisions back to their origin.
Now imagine an AI agent that can make 400 refactoring commits in an hour. Who reads those 400 diffs? Who verifies that each one makes sense? Does git blame on a file refactored by an agent tell you anything useful?
The problem isn't that agents write bad code. The problem is that Git as an auditability and collaboration system was designed for the human speed of producing changes. And that speed is now multiplying by orders of magnitude.
I've already had to wrestle with this on projects where AI-generated code became hard to audit. The software supply chain gets complicated when you can't trace the intention behind each change. Git gives you what changed. Not necessarily why, and definitely not whether it was the right call.
What They're Proposing and Why It Makes Sense (Even If I Hate Admitting It)
The pitch behind this $17M round revolves around a few concrete ideas:
Semantic version control, not textual. Instead of tracking line-by-line changes, track changes in program structure — AST-aware version control. The system understands that you moved a function, not that you deleted 40 lines and added 40 similar lines somewhere else.
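To make "structural, not textual" concrete, here's a toy sketch using Python's stdlib `ast` module. Everything in it — the normalization scheme, the `shape` function — is invented for illustration; a real AST-aware VCS would have to handle far more than renames. The idea is just that once locally-bound identifiers are canonicalized, a pure rename vanishes from the comparison while a genuine logic change does not.

```python
# Toy sketch: compare two snippets by structure, not by text.
# Locally-bound names are canonicalized; free names (sum, max) are kept,
# so a rename is invisible but a change of behavior is not.
import ast

def bound_names(tree):
    """Collect locally-bound identifiers (params, function names,
    assignment/comprehension targets) in traversal order."""
    names = []
    for node in ast.walk(tree):
        if isinstance(node, ast.arg):
            names.append(node.arg)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            names.append(node.name)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            names.append(node.id)
    return {n: f"v{i}" for i, n in enumerate(dict.fromkeys(names))}

def shape(src):
    """A structural fingerprint of the source: the AST dump after
    replacing each bound identifier with a positional placeholder."""
    tree = ast.parse(src)
    mapping = bound_names(tree)

    class Rename(ast.NodeTransformer):
        def visit_Name(self, node):
            node.id = mapping.get(node.id, node.id)
            return node
        def visit_arg(self, node):
            node.arg = mapping.get(node.arg, node.arg)
            return node
        def visit_FunctionDef(self, node):
            self.generic_visit(node)
            node.name = mapping.get(node.name, node.name)
            return node

    return ast.dump(Rename().visit(tree))

a = "def total(items):\n    return sum(i.price for i in items)"
b = "def total(products):\n    return sum(p.price for p in products)"
c = "def total(items):\n    return max(i.price for i in items)"

print(shape(a) == shape(b))  # True: pure rename, same structure
print(shape(a) == shape(c))  # False: sum vs max is a real change
```

Two textual diffs of similar size; only one of them is a change a reviewer actually needs to look at.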
Agent-verifiable history. If an agent makes a change, the system can answer questions like "does this change affect invariant X?" without a human having to read the full diff.
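A crude way to picture that: instead of a human reading the full diff, the system runs a differential check of the old and new implementations against a stated invariant. The two functions and the invariant below are made up for illustration; a real system would derive them from the codebase and its test suite.

```python
# Sketch of an "invariant gate" a VCS layer could run on every agent
# commit. Hypothetical example: an agent refactors a totaling function,
# and the gate checks both versions agree and the result stays valid.
import random

def old_total(items):
    return sum(p for p in items)

def new_total(items):  # the agent's refactor
    total = 0
    for p in items:
        total += p
    return total

def invariant_holds(old, new, trials=200):
    """Differential check: both versions must agree on random inputs,
    and the output must respect the stated invariant (non-negativity
    for non-negative prices)."""
    rng = random.Random(42)  # deterministic for reproducibility
    for _ in range(trials):
        items = [rng.uniform(0, 100) for _ in range(rng.randint(0, 20))]
        a, b = old(items), new(items)
        if abs(a - b) > 1e-9 or b < 0:
            return False
    return True

print(invariant_holds(old_total, new_total))  # the change is admissible
```

The point isn't that this replaces review — it's that "does this change affect invariant X?" becomes a question the system can answer mechanically, at agent speed.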
Conflict-free merges in most cases. Jujutsu and Pijul are already attempting this with their commutative patch approaches. The idea is that if you understand the semantics of a change, you can resolve many conflicts automatically.
// Traditional Git sees this as a conflict:
// <<<<<<< HEAD
// function calculateTotal(items: Item[]): number {
//   return items.reduce((acc, item) => acc + item.price, 0);
// }
// =======
// function calculateTotal(products: Product[]): number {
//   return products.reduce((total, p) => total + p.cost, 0);
// }
// >>>>>>> feature/refactor-naming
//
// A semantic system could understand:
// - Both sides renamed the parameter and the accumulator variable
// - The type and its field were renamed too (Item.price vs Product.cost)
// - The underlying logic is identical
// - Auto-resolution: pick one naming convention
// Result: merge without human intervention
Intent graphs, not just change graphs. Every modification comes with metadata that the agent (or human) can generate: why was this change made? what test validates it? what issue motivated it? Not as free text in a commit message, but as structured, queryable data.
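What that could look like in practice — and I'm inventing this schema entirely, nothing like it ships with Git today — is intent as structured data you can query, rather than prose you can only grep:

```python
# Hypothetical "intent graph" entry: structured metadata attached to a
# change instead of a free-text commit message. The schema and field
# names are made up for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChangeIntent:
    commit: str
    author_kind: str                # "human" | "agent"
    why: str                        # the motivation, as data
    validated_by: list = field(default_factory=list)  # test ids
    issue: Optional[str] = None     # tracker reference, if any

history = [
    ChangeIntent("a1f9", "agent", "normalize naming in billing module",
                 validated_by=["test_invoice_totals"], issue="BILL-204"),
    ChangeIntent("b7c2", "human", "hotfix rounding bug", issue="BILL-209"),
    ChangeIntent("c3d8", "agent", "dead code removal"),  # no test: flag it
]

# A query no one would run over free-text commit messages:
# which agent-authored changes have no validating test?
unvalidated = [c.commit for c in history
               if c.author_kind == "agent" and not c.validated_by]
print(unvalidated)
```

One flat list and one query already answers "which agent changes landed unverified?" — scale that to a graph linking changes, tests, and issues, and history becomes something you interrogate rather than read.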
That last one is what excites me most. It's not just "better Git" — it's rethinking version control as a database of engineering decisions, not a log of file changes.
The Mistakes I Still See Coming
All of this sounds great in a pitch deck. But I know this game.
The first problem is adoption. Git didn't win because it was technically superior to everything that existed. It won because GitHub adopted it, because Linux used it, because the network effect became impossible to ignore. Better technology isn't enough. Same as with Linux tooling — sometimes the ecosystem takes a decade to agree on something that should technically be obvious.
The second problem is operational complexity. Git is complex, sure. But it's predictable. Anyone who's worked with semantic merge systems knows that when they fail, they fail in ways that are incredibly hard to debug. A text conflict is ugly but understandable. A badly resolved semantic conflict can introduce a silent bug that textual Git would have surfaced as an explicit conflict.
The third one, and this one worries me more: who audits the auditor? If the version control system is designed for agents making automatic decisions, how do I know the versioning system itself isn't being influenced or compromised? It's already hard to audit software dependencies today. Adding an intelligence layer inside the VCS gives me the same itch I get when I analyze critical AI API dependencies with no real fallback.
Trust in infrastructure isn't built with a pitch deck and $17M. It's built with years of the thing not exploding in production.
What I Actually Think Will Change (Whether We Like It or Not)
Let me be direct here: Git in its current form is going to change. Not necessarily die or get fully replaced. But the primary interface for interacting with code history is going to stop being git log and git diff read by humans.
It's already happening. AI-powered IDEs don't show you the diff — they explain the diff. Code review tools are starting to use LLMs to summarize PRs. git blame is being replaced by just asking your IDE's chat directly.
What's coming is probably not "killing Git" but building a layer on top — or beside it — that speaks the language of agents. Structured intent metadata. Semantic queries over history. Automatic invariant verification on every commit.
# The git log of the future probably won't look like this:
git log --oneline --graph
# It'll be more like a structured query:
# What changes touched authentication logic in the last 30 days?
# Which ones were generated by agents? Which were reviewed by humans?
# Did any change behavior without a test to validate it?
# The answer won't be a list of commits
# but an analysis of intentions and risks
Does that justify $17M and a full Git replacement? I'm not sure. I think there's a path where Git evolves through extensions — sparse indexes, partial clone, commit-graph already show it can adapt — and another where something new flanks it for high-velocity agent use cases.
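Those extensions aren't speculation — they ship in stock Git today (commit-graph since 2.18, cone-mode sparse checkout since 2.25). A quick way to see two of them in a throwaway repo:

```shell
# Throwaway repo exercising two of Git's own scaling features.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q -b main .
git config user.email demo@example.com
git config user.name demo
mkdir -p services/auth services/billing
echo login > services/auth/login.txt
echo invoice > services/billing/invoice.txt
git add . && git commit -qm "initial layout"

# Commit-graph: precomputed ancestry data that speeds up log,
# merge-base, and reachability queries on large histories.
git commit-graph write --reachable

# Cone-mode sparse checkout: materialize only the paths you work on.
git sparse-checkout init --cone
git sparse-checkout set services/auth

ls services   # billing is gone from the working tree, auth remains
```

None of this changes Git's human-facing abstractions, which is exactly the argument of the evolutionary path: performance can scale inside Git while the interface question stays open.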
Which one wins depends less on technology and more on who builds the first irresistible use case. Same as what happened with training large models — it wasn't the theory that convinced people, it was the moment something that seemed impossible worked on hardware you already had.
FAQ: Common Questions About Git's Successor and the Future of Version Control
Is Git going to disappear in the next few years?
Not in the short term. Git has 20 years of adoption, tooling, culture, and network effect. The most likely outcome is coexistence: Git for traditional human workflows and new tools for agent-intensive flows. A mass migration, if it happens at all, takes at least a decade.
What is semantic version control and how is it different from Git?
Git tracks changes at the text level — lines added and removed. Semantic version control understands program structure: it knows you moved a function, renamed a variable, or changed a method signature, regardless of what the textual diff looks like. This enables smarter merges and lets you search by intent rather than file content.
Is Jujutsu (jj) the Git successor that's already available?
Jujutsu is the most mature and usable option today. Developed at Google, it runs on top of Git's backend (compatible with existing repos) but offers a different interface and mental model, with first-class support for work-in-progress changes and a more predictable merge system. It's not the definitive successor, but it's the most pragmatic option to explore right now without blowing up your workflow.
Why do AI agents make Git problematic?
Git was designed for the human speed of code production. A developer makes a few commits a day; an agent can make hundreds per hour. The review model, the meaning of a commit message, the usefulness of git blame — all of it assumes a human producing and another human reading. When both roles are taken by a machine running at high speed, Git's abstractions lose most of their value.
Is it worth migrating my team to a Git alternative right now?
In most cases, no. Unless you have a very specific pain point — massive monorepos where Git scales poorly, or a workflow with heavy parallel merges where conflicts are a constant headache — the migration cost outweighs the current benefits. What does make sense is experimenting with Jujutsu on personal or side projects to understand where the ecosystem is heading.
Does the $17M announcement mean this company will win the market?
Unlikely. The history of version control is full of technically superior alternatives that never achieved critical mass. $17M is enough to build something real and land early adopters, but it's not enough to change the behavior of millions of developers. What could actually change the game is a major platform — GitHub, GitLab, a dominant IDE — adopting the approach. Without that, it's an interesting niche tool.
Git Isn't Going to Die. But It's Going to Have to Grow.
After 30 years watching technology, I've learned to distrust both the people who say "this will never change" and the people who say "this will change everything." Reality tends to be slower and weirder than either prediction.
What I do think is true: the software development workflow is changing faster right now than at any other point since open source emerged. Agents aren't an IDE feature — they're a change in who produces the code. And if who produces changes, it makes sense that the coordination tools change too.
Git can adapt. It's done it before. Or something might appear that flanks it in new use cases without needing to replace it in the old ones. What I struggle to imagine is that in five years, agent-intensive workflows are using exactly the same abstractions Git uses today.
And that feels like a more interesting question than whether this particular company is going to win with its $17M.
Are you already thinking about how your version control workflow changes when agents become a permanent part of the team? I'd genuinely like to know how others are handling it. Drop me a message.