Daniel Nwaneri

Above the API: What Developers Contribute When AI Can Code

An AI researcher told me something that won't leave my head:

"If a human cannot outperform or meaningfully guide a frontier model
on a task, the human's marginal value is effectively zero."

Now we have data. Anthropic's study with junior engineers found that using AI without understanding led to 17% lower mastery scores, roughly two letter grades.

But some AI users scored high. The difference? They used AI to learn, not to delegate.

The question isn't "Can I use AI?" anymore.

It's "Am I using AI to understand, or to avoid understanding?"

The New Divide

There's a line forming in software development. Not senior vs junior. Not experienced vs beginner.

It's deeper than that.

Below the API:

  • Can execute tasks AI handles autonomously
  • Follows patterns without deep understanding
  • Accepts AI output without verification
  • Builds features fast but can't foresee disasters

Above the API:

  • Guides systems with judgment
  • Knows when AI is wrong
  • Produces outcomes AI can't generate
  • Exercises architectural thinking

The question: which side of that line are you on?

The Divide: What AI Does vs What Humans Still Own

| Domain | AI Capability (Below) | Human Capability (Above) | Why It Matters |
| --- | --- | --- | --- |
| Code Generation | Fast, comprehensive output | Knows what to delete | AI over-engineers by default |
| Debugging | Pattern matching from training data | System-level architectural thinking | AI misses root causes across components |
| Architecture | Local optimization within context | Big-picture coherence | AI can't foresee cascading disasters |
| Refactoring | Mechanical transformation of code | Judgment on when/why/if to refactor | AI doesn't understand technical-debt tradeoffs |
| Learning | Instant recall from training | Hard-won skepticism through pain | AI hasn't been burned by its own mistakes |
| Verification | Cheap domains (does it compile?) | Expensive domains (is this the right approach?) | AI can't judge "good" vs "working" |
| Consistency | Struggles across multiple files | Maintains patterns across the codebase | AI loses context and creates inconsistent implementations |
| Simplification | Adds features comprehensively | Discipline to reject complexity | AI defaults to kitchen-sink solutions |

Below the API: Can execute what AI suggests

Above the API: Can judge whether AI's suggestion is actually good

The line isn't about what you can build. It's about what you can verify, simplify, and maintain.

Why AI Makes Juniors Fast But Seniors Irreplaceable

Tiago Forte observed something crucial about AI-assisted development:

"Claude Code makes it easier to build something from scratch than to modify what exists. The value of building v1s will plummet, but the value of maintaining v2s will skyrocket."

The v1/v2 reality:

A junior developer uses Claude to build an authentication system. 200 lines of code, 20 minutes, tests pass, ships to production. Their portfolio looks impressive.

Six months later: the business needs SSO integration. Now they're debugging auth logic they didn't write, following patterns AI chose for reasons they don't understand, with zero architectural context. What should take 4 hours takes 3 days—because they never learned to structure v1 with v2 in mind.

This is the v1/v2 trap in action.
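What would "structuring v1 with v2 in mind" have looked like? Here's a minimal sketch in Python (every name here is hypothetical, not code from the actual story): put an interface between callers and the auth mechanism, so SSO becomes a new implementation instead of surgery on logic you never understood.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
import hashlib


@dataclass
class User:
    id: str
    email: str


class AuthProvider(ABC):
    """The v1 decision that pays off in v2: callers depend on this
    interface, never on how credentials are actually checked."""

    @abstractmethod
    def authenticate(self, credentials: dict) -> User | None:
        ...


def _hash(password: str) -> str:
    # Illustration only: production code would use a slow KDF (bcrypt, argon2).
    return hashlib.sha256(password.encode()).hexdigest()


class PasswordAuth(AuthProvider):
    """The v1 implementation: email/password lookup."""

    def __init__(self, users: dict[str, tuple[str, User]]):
        self._users = users  # email -> (password_hash, user)

    def authenticate(self, credentials: dict) -> User | None:
        record = self._users.get(credentials.get("email", ""))
        if record and _hash(credentials.get("password", "")) == record[0]:
            return record[1]
        return None


class SsoAuth(AuthProvider):
    """The v2 feature: trust an identity provider's token instead.
    Nothing that depends on AuthProvider has to change."""

    def __init__(self, validate_token):
        self._validate_token = validate_token  # e.g. a JWT verifier

    def authenticate(self, credentials: dict) -> User | None:
        claims = self._validate_token(credentials.get("token", ""))
        if claims:
            return User(id=claims["sub"], email=claims["email"])
        return None
```

The interface costs a few extra minutes in v1. It saves the three days in v2.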

Skills AI Commoditizes (v1 territory):

  • Building greenfield projects
  • Generating boilerplate
  • Following templates
  • Speed and feature velocity

Skills AI Can't Replace (v2+ territory):

  • Debugging existing systems
  • Understanding technical debt
  • Knowing when to refactor vs rebuild
  • Maintaining architectural coherence

Here's the trap: Junior developers are using AI to build impressive v1 projects for their portfolios. But they're never learning the v2+ maintenance skills that actually command premium rates.

As Ben Podraza noted in response to Tiago: "Works great until you ask it to create two webpages with the same formatting. Then you iterate for hours burning thousands of tokens."

Consistency is hard. Context is hard. Legacy understanding is hard.

Those are exactly the skills you learn from working in mature codebases, reading other people's code, struggling through refactoring decisions.
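Ben's "same formatting" problem also has a classic human answer: stop generating the formatting twice and extract it once. A minimal sketch using Jinja2 template inheritance (a stand-in stack, not necessarily Ben's):

```python
from jinja2 import DictLoader, Environment

# One source of truth for the shared formatting. Each page fills in
# only what differs; asking an AI for "two pages that match" tends to
# produce the markup twice, and the copies drift.
templates = {
    "base.html": (
        "<html><head><title>{{ title }}</title></head>\n"
        "<body><header>My Site</header>\n"
        "{% block content %}{% endblock %}\n"
        "<footer>(c) 2026</footer></body></html>"
    ),
    "about.html": '{% extends "base.html" %}{% block content %}<p>About us.</p>{% endblock %}',
    "contact.html": '{% extends "base.html" %}{% block content %}<p>Write to us.</p>{% endblock %}',
}

env = Environment(loader=DictLoader(templates))
print(env.get_template("about.html").render(title="About"))
print(env.get_template("contact.html").render(title="Contact"))
```

The shared layout lives in exactly one place. Consistency stops being a prompt-engineering problem and becomes a structural guarantee.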

The knowledge commons taught v2+ skills. AI teaches v1 skills.

Guess which one the market will pay for in 2027?

The Architecture Gap

Uncle Bob Martin (author of Clean Code) has been coding with Claude. His observation cuts to what humans still contribute:

"Claude codes faster than I do by a significant factor. It can hold more details in its 'mind' than I can. But Claude cannot hold the big picture. It doesn't understand architecture. And although it appreciates refactoring, it shows no inclination to acquire that value for itself. It does not foresee the disaster it is creating."

The danger: AI makes adding features so easy that you skip the "slow down and think" step.

"Left unchecked, AI will pile code on top of code making a mess. By the same token, it's so easy for humans to use AI to add features that they pile feature upon feature making a mess of things."

When someone asked "How much does code quality matter when we stop interacting directly with code?", Uncle Bob's response was stark:

"I'm starting to think code quality matters even more."

Why? Because someone still has to maintain architectural coherence across the mess AI generates. That someone needs to understand both what the code does AND why it was structured that way.

The Claude Code Reality Check

Since Claude Code and Anthropic's Model Context Protocol (MCP) launched, developers have been experimenting with AI-first workflows. The results mirror Uncle Bob's observation exactly: AI is incredibly fast at implementation but blind to architectural consequences.

What Claude Code excels at:

  • Generating boilerplate quickly
  • Following explicit patterns within single files
  • Maintaining local context for focused tasks
  • Implementing well-defined specifications

Where it fails (by design):

  • Understanding project-wide architecture
  • Maintaining consistency across multiple files
  • Knowing when to slow down and reconsider the approach
  • Foreseeing how today's "quick fix" becomes tomorrow's technical debt
  • Asking "should we even build this?" instead of "how do we build this?"

The tool is powerful. I use it daily. But treating it as autopilot instead of compass leads to the "code pile" Uncle Bob warned about.

This isn't Claude's limitation—it's a fundamental constraint of current AI architecture. As Peter Truchly explained in the comments: "LLMs are not built to seek the truth. They're trained for output coherency (a.k.a. helpfulness)."

An LLM will confidently generate code that compiles and runs. Whether it's the right code—architecturally sound, maintainable, simple—requires human judgment in what Ben Santora calls "expensive verification domains."

That judgment is what keeps you Above the API.

The Skills That Actually Matter

From the discussions in my knowledge collapse article, here's what keeps you Above the API:

1. Architectural Thinking (Uncle Bob's "Big Picture")

  • Knowing when to slow down
  • Seeing consequences AI can't predict
  • Making refactoring decisions with context
  • Balancing technical debt vs new features

2. V2+ Mastery (Tiago's Maintenance Skills)

  • Debugging complex existing systems
  • Understanding why code was written certain ways
  • Maintaining consistency across iterations
  • Choosing between rebuild vs refactor

3. Verification Capability (Ben Santora's "Judge" Layer)

  • Knowing when AI is confidently wrong
  • Distinguishing cheap vs expensive verification domains
  • Building skepticism without becoming paralyzed
  • Testing assumptions, not just accepting outputs

As Ben Santora explained in his work on AI reasoning limits:

"Knowledge collapse happens when solver output is recycled without a strong, independent judging layer to validate it. The risk is not in AI writing content; it comes from AI becoming its own authority."

Cheap verification domains:

  • Code compiles or doesn't
  • Tests pass or fail
  • API returns correct response

Expensive verification domains:

  • Is this architecture sound?
  • Will this scale?
  • Is this maintainable?
  • Is this the right approach?

AI sounds equally confident in both domains. But in expensive verification domains, you won't know you're wrong until months later when the system falls over in production.
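Here's the gap in miniature (a made-up function, not one from any study). Everything cheap verification can check passes, and the expensive question never gets asked:

```python
def find_duplicate_emails(emails: list[str]) -> list[str]:
    """Plausible AI output: correct, compiles, and the unit test passes."""
    dupes = []
    for i, a in enumerate(emails):
        for b in emails[i + 1:]:           # O(n^2) comparisons
            if a == b and a not in dupes:  # ...and this membership check is O(n)
                dupes.append(a)
    return dupes


# Cheap verification: the test passes, ship it.
assert find_duplicate_emails(["a@x.com", "b@x.com", "a@x.com"]) == ["a@x.com"]


# Expensive verification is the question no test asks automatically:
# at a million emails this is roughly 5 * 10^11 comparisons. The fix
# is a one-pass set; knowing to ask "will this scale?" is the human skill.
def find_duplicate_emails_fast(emails: list[str]) -> list[str]:
    seen: set[str] = set()
    dupes: set[str] = set()
    for e in emails:
        (dupes if e in seen else seen).add(e)
    return sorted(dupes)  # set order isn't stable, so sort for determinism
```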

4. Discipline to Simplify (Doogal Simpson's "Editing")

In the comments, Doogal Simpson reframed the shift from scarcity to abundance:

"We are trading the friction of search for the discipline of editing. The challenge now isn't generating the code, but having the guts to reject the 'Kitchen Sink' solutions the AI offers."

Old economy: Scarcity forced simplicity (finding answers was expensive)
New economy: Abundance requires discipline (AI generates everything, you must delete)

The skill shifts from ADDING to DELETING. From generating to curating. From solving to judging.
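In code, the editing discipline looks something like this (a caricature of a real exchange, but a recognizable one):

```python
# What the AI offers for "read a config file": a ConfigManager class
# with YAML/JSON/TOML autodetection, env-var overrides, schema hooks,
# and a singleton cache. Roughly 120 lines and three dependencies,
# deleted here on purpose.

# What survives the edit, because the real requirement is one JSON file:
import json
from pathlib import Path


def load_config(path: str) -> dict:
    """All the configuration this project actually needs."""
    return json.loads(Path(path).read_text())
```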

5. Domain Expertise (John H's Context)

In the comments, John H explained how he uses AI effectively as a one-man dev shop:

"I can concentrate on being the knowledge worker, ensuring the business rules are met and that the product meets the customer usability requirements."

What John brings:

  • 3 years with his application
  • Deep customer knowledge
  • Business rules understanding
  • Can verify if AI output actually solves the right problem

John isn't using AI as autopilot. He's using it as a force multiplier while remaining the judge.

The pattern: Experienced developers with deep context use AI effectively. They can verify output, catch errors, know when to override suggestions.

The problem: Can juniors learn this approach without first building the hard-won experience that makes verification possible?

The Anthropic Study: Using AI vs Learning With AI

While writing this piece, Anthropic published experimental data that validates the Above/Below divide.

In a randomized controlled trial with junior engineers:

  • AI-assistance group finished ~2 minutes faster
  • But scored 17% lower on the mastery quiz (two letter grades)
  • "Significant decrease in mastery"

However: Some in the AI group scored highly while using AI.

The difference? They asked "conceptual and clarifying questions to understand the code they were working with—rather than delegating or relying on AI."

This is the divide:

Below the API (delegating):

"AI, write this function for me" → Fast → No understanding → Failed quiz

Above the API (learning with AI):

"AI, explain why this approach works" → Slower but understands → Scored high

Speed without understanding = Below the API.

Understanding while using AI = Above the API.

The tool is the same. Your approach determines which side you're on.

[Source: Anthropic's study, January 2026]

The Last Generation Problem

In the comments, Maame Afua revealed something crucial: she's a junior developer, but she's using AI effectively because she had mentors.

"I got loads of advice from really good developers who have been through the old school system (without AI). I have been following their advice."

The transmission mechanism: Pre-AI developers teaching verification skills to AI-era juniors.

Maame can verify AI output not because she's experienced, but because experienced devs taught her to be skeptical. Her learning path:

  1. Build foundation first (books, docs, accredited resources)
  2. Use AI as assistant, not primary learning tool
  3. Verify against authoritative sources
  4. Never implement what she can't explain

But here's the cliff we're approaching:

Right now, there are enough pre-AI developers to mentor. In 5-10 years, most seniors will have learned primarily WITH AI.

Who teaches the next generation to doubt? Who transfers verification habits when nobody has them?

We're one generation away from losing the transmission mechanism entirely.

Maame is lucky. She found good mentors before the window closed. The juniors starting in 2030 won't have that option.

How People Learn Verification

Two developers in the comments showed different paths to building verification skills:

The Hard Way (ujja)

ujja learned "zero-trust reasoning" through painful experience:

"Trusted AI a bit too much, moved fast, and only realized days later that a core assumption was wrong. By then it was already baked into the design and logic, so I had to scrap big chunks and start over."

His mental model shifted:

  • Before: "Does this sound right?"
  • After: "What would make this wrong?"

He now treats AI like "a very confident junior dev - super helpful, but needs review."

His insight: "I do not think pain is required, but without some kind of feedback loop like wasted time or broken builds, it is hard to internalize. AI removes friction, so people skip verification until the cost shows up later."

The Deliberate Way (Fernando)

Fernando Fornieles recognized the problem months ago and took action without waiting to get burned:

  • Closed private social media accounts
  • Migrated to fediverse
  • Built home cloud server (Nextcloud on Raspberry Pi)
  • Actively avoiding platform "enshittification"

He's not learning through pain. He's acting on principles.

The question: Can we teach ujja's learned skepticism without the pain? Can we scale Fernando's deliberate action?

Or does every junior need to scrap a week's work before they learn to verify AI output?

What the Knowledge Commons Taught

Stack Overflow debates taught architecture. Someone would propose a solution, others would tear it apart, consensus would emerge through friction. That friction built judgment.

Code review culture taught "slow down and think." You couldn't just ship it - someone would ask "why this approach?" and you'd have to justify architectural decisions.

Painful bugs taught foreseeing disaster. You'd implement something that seemed fine, it would blow up in production, you'd learn to see those patterns early.

Legacy codebases taught refactoring judgment. You'd maintain someone else's decisions, understand their constraints, learn when to preserve vs rebuild.

All of this happened in public. On Stack Overflow. In code review comments. In GitHub issues. In conference talks.

AI assistance happens in private. Individual optimization. No public friction. No collective refinement.

The skills that keep you Above the API were taught by the knowledge commons we're killing.

Practical Actions

If You're Junior/Early Career:

Seek pre-AI mentors actively

  • Find developers who learned before ChatGPT
  • Ask them to review your AI-generated code
  • Learn their skepticism patterns

Work in mature codebases

  • Don't just build greenfield projects
  • Contribute to established open source
  • Learn from technical debt decisions

Document your reasoning publicly

  • Write about WHY you chose approaches
  • Publish debugging journeys, not just solutions
  • Contribute to the commons you're consuming

Build verification habits explicitly

  • Always check AI output against docs
  • Test assumptions, don't just ship
  • Learn to recognize "confident wrongness"
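To make "test assumptions" concrete, here's a sketch (the function below stands in for AI output, with a deliberate bug): probe the cases the happy path never exercises.

```python
from datetime import date


def business_days_between(start: date, end: date) -> int:
    """Hypothetical AI output: weekdays in [start, end). Looks plausible."""
    return (end - start).days * 5 // 7  # prorates the week: this is the bug


# Verification habit: check the boundaries, not just the demo input.
cases = [
    (date(2026, 1, 5), date(2026, 1, 5), 0),    # same day
    (date(2026, 1, 5), date(2026, 1, 12), 5),   # Monday to Monday
    (date(2026, 1, 9), date(2026, 1, 12), 1),   # Friday to Monday, spans a weekend
]
for start, end, expected in cases:
    got = business_days_between(start, end)
    status = "OK" if got == expected else "WRONG"
    print(f"{start} -> {end}: got {got}, expected {expected} [{status}]")

# The weekend-spanning case fails: proration miscounts any range that
# doesn't start and end on the same weekday. Confident, and quietly wrong.
```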

Treat AI like ujja does

  • "Very confident junior dev"
  • Super helpful, but needs review
  • Ask "what would make this wrong?" not "does this sound right?"

If You're Senior/Experienced:

Mentor explicitly

  • Teach verification, not just syntax
  • Share your skepticism patterns
  • Explain architectural thinking out loud

Preserve architectural knowledge

  • Document WHY decisions were made
  • Publish architecture decision records
  • Write about the disasters you foresaw

Contribute to commons deliberately

  • Answer questions on Stack Overflow
  • Write detailed technical blog posts
  • Open source your reasoning, not just code

Make "slow down and think" visible

  • Show juniors when you pause to consider
  • Explain the questions you ask AI
  • Demonstrate the editing/simplification process

The Uncomfortable Questions

The AGI Wild Card

In the comments on my knowledge collapse article, Leob raised the ultimate question: what if AI achieves true invention?

"Next breakthrough for AI would be if it can 'invent' something by itself, pose new questions, autonomously create content, instead of only regurgitating what's been fed to it."

If that happens, "Above the API" might become irrelevant.

But as Uncle Bob observed: "AI cannot hold the big picture. It doesn't understand architecture."

Peter Truchly added technical depth to this limitation:

"LLMs are not built to seek the truth. Gödel/Turing limitations do apply but LLM does not even care. The LLMs are just trained for output coherency (a.k.a. helpfulness)."

Two possible futures:

Scenario 1: AI remains a sophisticated recombinator
Knowledge collapse poisons training data. Model quality degrades. The Above/Below divide matters enormously. Your architectural thinking and verification skills remain valuable for decades.

Scenario 2: AI achieves AGI and true invention
Knowledge collapse doesn't matter because AI generates novel knowledge. But then... what do humans contribute?

Betting everything on "AGI will save us from knowledge collapse" feels risky when we're already seeing the collapse happen.

Maybe we should fix the problem we KNOW exists rather than hoping for a breakthrough that might make everything worse.

Does Software Even Need Humans?

Mike Talbot pushed back on my entire premise:

"Why do humans need to build a knowledge base? So that they and others can make things work? Who cares about the knowledge base if the software works?"

His argument: Knowledge bases exist to help HUMANS build software. If AI can build software without human knowledge bases, who cares if Stack Overflow dies?

He used a personal example:

"I wrote my first computer game. I clearly remember working on a Disney project in the 90s and coming up with compiled sprites. All of that knowledge, all of that documentation, wiped out by graphics cards. Nobody cared about my compiled sprites; they cared about working software."

His point: Every paradigm shift makes previous knowledge obsolete. Maybe AI is just the next shift.

My response: Graphics cards didn't train on his compiled sprite documentation. They were a fundamentally different approach. AI is training on Stack Overflow, Wikipedia, and GitHub. If those die and AI trains on AI output, we get model collapse, not a paradigm shift.
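Model collapse is easy to see in a toy simulation. This is a statistical caricature, not a claim about how real LLM training pipelines behave, but the mechanism is the one that matters: fit a model, sample from it, fit the next model to those samples.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0               # generation 0: the "human" data distribution
n_samples, generations = 50, 500

for gen in range(generations + 1):
    if gen % 100 == 0:
        print(f"generation {gen:3d}: sigma = {sigma:.4f}")
    samples = rng.normal(mu, sigma, n_samples)  # train on the previous model's output
    mu, sigma = samples.mean(), samples.std()   # fit the next "model" to it
```

Every refit loses a little tail information to finite sampling, and the loss compounds: sigma ratchets toward zero across generations. Real training dynamics are far messier, but that one-way loss is exactly the worry.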

Mike's challenge matters because it forces clarity: Are we preserving human knowledge because it's inherently valuable? Or because it's necessary for AI to keep improving?

If AGI emerges, his question becomes more urgent. If it doesn't, preserving human knowledge becomes more critical.

What You Actually Contribute

Back to the original question: "What do you contribute that AI cannot?"

You contribute verification. AI solves problems. You judge if the solution is actually good.

You contribute architecture. AI writes code. You see the big picture it can't hold.

You contribute foresight. AI optimizes locally. You prevent disasters it doesn't see coming.

You contribute context. AI has patterns. You have domain expertise, customer knowledge, historical understanding.

You contribute judgment in expensive verification domains. AI excels where verification is cheap (does it compile?). You excel where verification is expensive (will this scale? is this maintainable? is this the right approach?).

You contribute simplification. AI generates comprehensive solutions. You have the discipline to delete complexity.

You contribute continuity. AI is stateless. You maintain coherence across systems, teams, and time.

But here's the uncomfortable truth: none of these skills are guaranteed.

They're learned. Through friction. Through pain. Through public struggle. Through mentorship from people who learned the hard way.

If we kill the knowledge commons, we kill the training grounds for Above-the-API skills.

If we stop mentoring explicitly, we lose the transmission mechanism in one generation.

If we optimize purely for velocity, we lose the "slow down and think" muscle.

Staying Above the API isn't automatic. It's a choice you make every day.

Choose to verify, not just accept.

Choose to simplify, not just generate.

Choose to foresee, not just react.

Choose to mentor, not just build.

Choose to publish, not just consume.

The API line is real. Which side will you be on?


This piece was built from discussions with developers working through these questions publicly. Special thanks to Uncle Bob Martin, Tiago Forte, Maame Afua, ujja, Fernando Fornieles, Doogal Simpson, Ben Santora, John H, Mike Talbot, Leob, Peter Truchly, and everyone else thinking through this transition.

What skills do you think will keep developers Above the API? What am I missing? Let's figure this out together.


Top comments (4)

shambhavi525-sudo

This is a brilliant mapping of the 'Post-AI' reality.

The distinction between Cheap vs. Expensive Verification is the real signal. We are entering a 'Velocity Trap' where juniors look like seniors because they can clear syntax hurdles at 10x speed, but they haven't spent time in the trenches where you live with the consequences of a bad architectural pivot for three years.

As you noted, the skill has shifted from generation to curation. In the old world, the 'cost' of code was the effort to write it. In the new world, the cost is the cognitive load required to justify keeping what the AI spit out.

The 'Last Generation Problem' is the real existential threat. If we stop learning through the 'friction of search' and the 'pain of refactoring,' we risk becoming pilots who only know how to fly in clear weather.

Daniel Nwaneri

"velocity trap" is perfect framing.
youve nailed the illusion: juniors clearing syntax at 10x speed LOOK like seniors but havent lived with architectural consequences.

and your cost shift: "effort to write → cognitive load to justify keeping" captures the economics exactly.

doogal called this "discipline to edit" - abundance requires different skill than scarcity.

"pilots who only fly in clear weather" - this is the metaphor ive been looking for. ai training without friction = clear weather only.

when turbulence hits (production bugs, scale issues, technical debt), they have no instrument training.

anthropic just proved this: juniors using AI finished faster but scored 17% lower on mastery. velocity without understanding

appreciate you synthesizing this so clearly - "velocity trap" goes in the framework collection.

Viki Vamp

Thank you very much for this article and for explaining, in simple words, complex thoughts about AI and its effects on the dev community, not just on the coding process.

Daniel Nwaneri

appreciate you reading. the goal was making these abstract patterns concrete and relatable.