We've always designed APIs for humans. A well-built API means obsessing over naming conventions, RESTful patterns, and clear documentation because the goal is simple: make systems easy for developers to understand. But AI is changing who the consumer of software is, and developers are asking whether the rules we've followed for decades still hold up.
When the primary user of an API is an AI system that reads documentation, adapts to unfamiliar patterns, and experiments when something fails, maybe consistent APIs and clean abstractions don't matter anymore.
Everything in me wants to reject this concept. My gut instinct is to say "my APIs need to be pretty or I'll die".
I've been thinking about this a lot lately, and unfortunately I think there are good arguments on both sides. Let's walk through them.
The Case for "Abstractions Don't Matter Anymore"
Some developers believe AI will reduce the importance of traditional abstractions. I've heard this take a lot.
LLMs are extremely good at pattern recognition. They can read documentation, inspect responses, experiment, and adapt when things fail. From this perspective, messy systems don't seem like a big problem. A human might struggle with inconsistent naming or poor documentation, but an AI can simply figure it out through trial, error, and being smarter than me.
So maybe if abstractions exist to make code understandable to humans, and the primary consumer is no longer human, then the old rules don't apply.
In practice, you can build agent systems that work around poorly designed APIs. Say you're integrating with an API where half the endpoints return errors as HTTP status codes and the other half always return 200 with an error field buried in the response body.
The agent pulls the docs, writes the code, and it looks reasonable. Then it runs the tests and it breaks because the code is checking status codes on an endpoint that never returns them.
The agent reads the error, adds response body parsing, and tries again. Maybe it over-corrects and starts modifying the way it's handling status codes everywhere, breaking a different call. So, it adjusts again. Then finally, the third try works. These systems exist today and they get there eventually.
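The code the agent eventually converges on tends to look like a small adapter that collapses both error styles into one shape. Here's a minimal sketch in Python; the field names (`message`, `error`) are hypothetical illustrations, not any particular API's contract:

```python
# Normalize two inconsistent error styles into one shape:
#   Style 1: failure signaled via the HTTP status code (>= 400)
#   Style 2: always 200, with the error buried in the response body
# The "message" and "error" field names are assumptions for illustration.

def normalize_response(status_code: int, body: dict) -> dict:
    """Return a uniform {"ok": bool, "error": str or None, "data": dict}."""
    # Style 1: the status code carries the failure.
    if status_code >= 400:
        return {"ok": False, "error": body.get("message", f"HTTP {status_code}"), "data": {}}
    # Style 2: a 200 that smuggles the error in the body.
    if body.get("error"):
        return {"ok": False, "error": body["error"], "data": {}}
    return {"ok": True, "error": None, "data": body}
```

Once everything funnels through one shape, the agent (or a human) only has to learn one contract — which is exactly the abstraction the API itself could have provided.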
As models continue to improve, the argument goes, API design will matter less and less, because models can brute-force their way to working code.
The Case for "Abstractions Still Matter"
Now for the other side of the argument.
A human developer interacts with an API occasionally. An AI system like a coding agent might interact with it hundreds of times in a single session. When something is poorly designed, the problems compound fast in the form of unnecessary retries, token-heavy debugging loops, and ugly workarounds.
I ran into this with an API that had inconsistent naming across its endpoints. I was debugging an issue with an app I was building, and my coding agent kept thinking it had identified the problem because a parameter name didn't align with historical patterns for this type of API. That wasn't the issue at all; it was completely irrelevant, but the agent kept getting hung up on it.
Every time I debug something that uses this specific API, my coding agent says "I found it! The parameter name should be X instead of Y!" Then it changes the parameter, deploys again, and it still doesn't work, because that was never the issue. It keeps making the same wrong assumption across sessions.
Unlike a human who hits a weird error and remembers it next time, LLMs are stateless by default. Every new session starts fresh, and agents can spin up tons of sessions in a single workflow, each of which will run into the same problem.
Every ambiguity in an API has a token cost, and poor API design has direct financial consequences in a way that wasn't true when the only cost was developer frustration.
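To make that concrete, here's a back-of-the-envelope sketch. Every number in it — tokens per attempt, retry count, price per million tokens — is a hypothetical assumption, not a measurement:

```python
# Rough cost model for an agent retrying against a confusing API.
# All inputs are made-up illustrative numbers, not real pricing or measurements.

def retry_cost(tokens_per_attempt: int, attempts: int, price_per_million: float) -> float:
    """Dollar cost of an agent re-reading docs and retrying a call."""
    return tokens_per_attempt * attempts * price_per_million / 1_000_000

# One clean call vs. three debugging round-trips, each with fatter context
# because the agent drags the failed attempts along in its history.
clean = retry_cost(tokens_per_attempt=2_000, attempts=1, price_per_million=15.0)
messy = retry_cost(tokens_per_attempt=8_000, attempts=3, price_per_million=15.0)
# clean: $0.03, messy: $0.36 — a 12x difference for one call, under these assumptions.
```

Multiply that by hundreds of calls per session and many sessions per day, and "the agent will figure it out" starts showing up on an invoice.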
And another thing I've noticed: if you watch a coding agent work through a problem like this, it often gives up and tries a different approach entirely after a few tries. It'll swap libraries, use a different endpoint, or cobble together a workaround using some other approach. That's the AI doing exactly what it should do, adapting.
From the developer perspective, I don't always like the workarounds the AI chooses. Sometimes one unclear API makes my AI think it needs to redesign my entire component. Other times it bypasses the API in a way that looks like it's working but actually relies on weird hardcoded values somewhere.
And from your perspective as an API owner, the AI just decided not to use your API. Your messy design just cost you a new user.
The Human Compatibility Problem
There's another angle to this too. The abstractions we built for humans are now embedded in how AI systems learn.
Modern software ecosystems contain decades of common coding conventions. These were originally created to help humans understand systems. But those same patterns now appear throughout model training data, and that has consequences.
When you name your endpoint /api/v2/users/{id}, the model has seen that pattern millions of times. It knows what to expect. When you name it /backend/person/fetch?identifier={id}, you're fighting against the weight of its training. The model can learn your pattern, but there's friction.
Coding assistants are increasingly abstracting away the act of writing syntax from developers, which is great until you need to peek under the abstraction.
If an agent generates code using unfamiliar patterns or unconventional APIs, a human still has to review it, debug it, and maintain it. We wouldn't want agents writing assembly language even if it ran faster, because most of us can't read it. The same logic applies to API conventions. Familiar patterns keep the code understandable for the humans who still have to live with it.
The patterns we created to help humans are now baked into how AI understands software, and that path dependence matters in both directions. Breaking conventions costs you in AI effectiveness and in human readability.
You Can Engineer Around It (If You Can Afford It)
Writing this blog turned my question from "does API design and thoughtful abstraction matter anymore?" into "how much money do you have?"
Every time an AI system has to figure out how something works, that's tokens being consumed and a potentially hacky workaround making its way into your code base. The adaptability is real, but anyone who's been ripping Claude Opus 4.6 with the 1 million token context window using agent teams knows that this is not free.
You can throw tokens at bad abstractions, build sophisticated systems to work around them, add layers of verification, validation, and correction. Multiple agents checking each other's work. Memory and caching layers to avoid repeated discovery. But wouldn't it be nice to just get it right the first try? Clean abstractions and good API design can give that to you.
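The memory-and-caching idea can be sketched in a few lines: persist the quirks an agent discovers about an API so the next session starts with them instead of rediscovering them. The file name and quirk format here are assumptions for illustration, not any particular agent framework's API:

```python
# Minimal sketch of a "memory layer" for agent workflows: a small JSON store
# of quirks discovered about an API, keyed by endpoint. A real system would
# likely use a database or the agent framework's own memory feature.
import json
import os

QUIRKS_FILE = "api_quirks.json"  # hypothetical location

def load_quirks() -> dict:
    """Load previously discovered quirks, or an empty dict on first run."""
    if os.path.exists(QUIRKS_FILE):
        with open(QUIRKS_FILE) as f:
            return json.load(f)
    return {}

def record_quirk(endpoint: str, note: str) -> None:
    """Append a discovered quirk for an endpoint, skipping duplicates."""
    quirks = load_quirks()
    quirks.setdefault(endpoint, [])
    if note not in quirks[endpoint]:
        quirks[endpoint].append(note)
    with open(QUIRKS_FILE, "w") as f:
        json.dump(quirks, f)
```

At session start you'd prepend `load_quirks()` to the agent's context, so "this endpoint returns 200 with an error field" is learned once instead of once per session. But notice what this is: engineering effort spent documenting someone else's bad design.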
Ideally you have both things in place. Clean APIs, meaningful abstractions, and clear documentation mean the agent has minimal friction and looping. Then add enough scaffolding around the agent to recover when an API it's using doesn't have all of those things. That combination reduces the cost of code generation, keeps code human-readable and debuggable, and the adaptability is still built in.