"If you use AI, you're not a real developer."
Same energy as every gatekeeping panic before it:
Stack Overflow? Not a real programmer.
Framework...
The real threat is the race to the bottom.
Companies have to lower their prices because business people expect software development to go faster with AI. I was talking to someone who is running a company, and he said that before AI, you knew the quality was lackluster when people submitted offers well below the market price. But with AI, those prices are becoming the norm.
The problem I saw with that scenario is: how will customers know whether the quality of the software is good or bad? It could be a company that is working seriously with AI, but it could also be a vibe prompter.
The other problem with lowering prices is that IT companies will not be able to pay their people as much as they used to. People with skills will leave when they are not paid what they are worth. So it is not only that no new people are coming in; it is going to be an intelligence drain from the sector too.
This feels like the days people asked developers to make custom websites for the exposure. Exposure doesn't pay the bills.
When someone does work in your house, nobody is thinking about paying them by putting up a sign.
@xwero This is a brutally honest extension. Thanks for bringing the economic reality front and center.
The race to the bottom on pricing is the part that scares me most too: AI enables "good enough" slop at rock-bottom rates, clients can't reliably spot the difference (no more credential signals), and skilled people bail when pay doesn't match the value they deliver. It's not just fewer juniors entering; it's experienced talent draining out, leaving even less mentorship and judgment in the ecosystem.
Your "exposure" parallel hits home. we've seen this undervaluation cycle before, and it never ends well for quality or sustainability.
Have you seen agencies or freelancers starting to push back? Or is it still full steam ahead on the price war?
I don't know where it is going. I'm not at the business side.
I brought it up to make a point.
I think this is really well-put.
"Race to the bottom" issues should be resolved by market dynamics in the long run where the bottom turns out to not be feasible β but is there a tight enough feedback loop to avoid hitting rock bottom before it's too late?
@ben Thanks, appreciate you jumping in.
I agree markets should punish "rock bottom" quality eventually (clients notice outages, tech debt, vendor churn), but the feedback loop is worryingly slow in software. Cheap AI slop can ship fast, look good in demos, and rack up users before the cracks show; by then the damage (burned trust, talent exodus) is already done.
The question is whether we can accelerate that loop somehow: better transparency tools, reputation signals, or just more public "this failed because of unchecked AI" stories. What do you think could tighten the feedback before too many hit bottom?
The junior pipeline problem is the one that keeps me up at night. I'm a solo dev shipping two SaaS products right now, and I use AI constantly, but I can only use it effectively because I spent years debugging garbage code at 2am, reading stack traces nobody else wanted to touch, and learning why certain architectural decisions blow up at scale.
If we skip that painful apprenticeship phase for the next generation, we're basically training pilots who've never experienced turbulence. They'll be fine until they're not, and when they're not, nobody will know how to land the plane.
The accountability framing is spot on though. "Who gets paged at 3am" is a much better filter than "who wrote this code." The tool doesn't matter; the ownership does.
@egedev This hits hard. The junior pipeline / apprenticeship skip is what keeps me up too. Your "pilots who've never experienced turbulence" metaphor is perfect. AI can generate smooth flights, but when turbulence hits, judgment from hard-earned scars is what lands the plane.
The 3am page filter ("who owns it?") over "who wrote it" is the accountability shift we need.
As someone shipping with AI daily, how are you thinking about mentoring or onboarding the next wave without the old grind? Or is the model fundamentally changing?
Oh my God. You wrote the post with AI, and now you're replying to comments with AI too?
The thing is that the AI system now knows how to deal with turbulence. The pilots have experienced turbulence, but when it happens, they tell the AI to deal with it, and the AI does, most of the time.
That gives the pilot enough experience with using the AI to debug problems. Then when something really crazy comes up, the pilot will know how to use the AI to debug it.
I'm not sure, but it seems to me that we're getting hung up on the wrong aspect of AI in software development. Instead of worrying about it replacing our jobs, we should be looking at how it's changing the landscape of what we do. I've always been fascinated by the traditional apprenticeship model, and the idea that AI might eliminate that - and the chance for junior developers to hone their skills - has me really thinking.
@itsugo Thanks, you're hitting the exact thing that keeps me up at night.
The traditional apprenticeship (grinding through bugs, reading stack traces at 2am, learning why decisions blow up at scale) is how judgment gets built. If AI lets juniors ship fast without that grind, we risk a generation that can prompt but can't debug or trade off under real pressure.
I'm not saying AI is all bad; it can accelerate learning if used right. But skipping the painful "why" phase feels like a massive loss. That's why I'm writing the next piece on rebuilding the ladder so juniors still get those scars, just differently.
What part of the apprenticeship model do you think is hardest to replace or recreate with AI in the mix?
I've been thinking a lot about the tradeoff between speed and quality in coding, and I'm starting to see a difference in ideologies between getting code out the door quickly (even if it doesn't work as expected) and truly understanding how it works. When I was an apprentice, I spent a lot of time breaking things and trying to figure out why they wouldn't work, which is where I picked up the most valuable lessons: how to fix them, and what to do differently next time. Also, if I can't explain how I built something in simple words, then I can't take credit for building it.
The key part of the apprenticeship model that's hard to replicate is the chance to make the kinds of mistakes people used to make without AI guidance. Now that AI is rapidly improving, I think the mistakes we make will be different, and the skills we need to develop will be distinct as well.
@itsugo Thanks, the apprenticeship grind (breaking things, fixing them, explaining simply) is exactly where the real lessons live. AI shortcuts the "make mistakes without guidance" phase, so the mistakes we do make will be different: probably subtler and harder to spot.
That shift in what skills we need to build is huge. The next piece is digging into how we recreate that learning loop in an AI world, so juniors still get the scars, just not the old way.
What do you think is the one apprenticeship lesson that's hardest to replicate now?
The point about "we can't review code faster than AI generates it" is the one that deserves its own article. That's the actual operational crisis nobody's staffing for. I've been benchmarking AI-generated code for security vulnerabilities and the volume problem is real: when 65-75% of AI-generated functions ship with security issues, the bottleneck isn't writing code; it's the judgment layer between generation and merge. Accountability frameworks won't work if the people accountable can't actually evaluate what they're approving.
@ofri-peretz Thanks, the "can't review faster than it generates" point is the operational crisis in plain sight.
65-75% of AI functions shipping with security issues is a brutal stat. The bottleneck isn't generation anymore; it's judgment and evaluation before merge. Accountability breaks if the accountable people can't actually assess what's being approved.
I've seen similar volume problems in smaller teams; it's not sustainable without new rituals or tools. What approaches are you experimenting with to make the judgment layer scale when volume explodes?
The table of what we said was impossible vs what happened is a great reference. The accountability framing is the key insight here. I'm building AI-powered tools and the hardest problem isn't getting the AI to generate good code; it's designing systems where humans stay meaningfully in the loop. When AI writes 80% of a PR, the review process needs to fundamentally change, not just speed up. Great piece.
@vibeyclaw Thanks, glad the table and accountability framing landed for you.
You're exactly right: generation is the easy(ish) part now. The hard engineering is redesigning the loop so humans aren't just rubber-stamping 80% AI PRs. We need review processes that force real interrogation, catch confident hallucinations, and preserve ownership.
When you've got AI writing most of a PR, what changes have you made (or are experimenting with) to keep humans meaningfully in control? More structured checklists, mandatory "why this decision" notes, separate verification passes? Curious what actually works in practice.
Great question. Here's what's actually worked for us:
Mandatory "intent annotation" β before every AI-generated PR, the developer writes a 2-3 sentence explanation of why this change exists and what tradeoffs were considered. Forces you to think beyond "the AI suggested it."
Differential review: instead of reviewing the full PR, we diff what the AI generated against what a human would have written (even just mentally). The gaps are where bugs hide.
"Explain this line" challenges β during review, randomly pick 3-4 lines and ask the PR author to explain them without looking at the AI conversation. If they can't, that's a red flag.
The biggest insight: the review process needs to be adversarial toward confident-looking code, not just syntactically correct code. AI writes very convincing wrong answers.
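The "explain this line" challenge described above is easy to semi-automate. A minimal sketch in Python, sampling random added lines from unified-diff text; the function names and the sampling approach are my own illustration under stated assumptions, not the commenter's actual tooling:

```python
import random

def added_lines(diff_text: str) -> list[str]:
    """Extract the non-blank added lines from unified-diff text."""
    return [
        line[1:].strip()
        for line in diff_text.splitlines()
        if line.startswith("+")
        and not line.startswith("+++")  # skip the file header line
        and line[1:].strip()            # ignore blank additions
    ]

def pick_review_lines(diff_text: str, k: int = 3) -> list[str]:
    """Sample up to k added lines for the PR author to explain."""
    candidates = added_lines(diff_text)
    return random.sample(candidates, min(k, len(candidates)))
```

Feeding it the output of something like `git diff main --unified=0` would hand the reviewer a random handful of lines the author has to explain without the AI conversation open.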
@vibeyclaw This is fantastic. Thanks for sharing what actually works.
"Intent annotation" forcing the 2-3 sentence "why + tradeoffs" is brilliant. it turns passive acceptance into active thinking. The "explain this line" challenge is ruthless in the best way; if they can't defend it without the AI log, it's not owned.
The adversarial mindset toward confident code is the killer insight. AI excels at plausible answers; humans have to be the skeptics.
Have you seen any pushback from devs on these rituals (e.g., "too much ceremony"), or do they buy in once they see the bugs it catches?
Great question. Honestly, some initial eye-rolling at the "ceremony", especially from senior devs who feel it slows them down. What flipped the mindset was showing them their own bug rate data: the devs who adopted intent annotation had 40% fewer production incidents over 3 months. Hard to argue with that.
The key was making it lightweight. We don't require a novel, just 2-3 sentences: what's the intent, what tradeoff did you accept, what would you watch for in prod. Takes 30 seconds per PR. The "explain this line" challenge we only do in code reviews, not on every commit, so it doesn't feel like a tax on velocity.
Biggest win was reframing it as "this protects you when the AI-generated code breaks at 3am and someone asks why we shipped it." Self-interest is a powerful motivator.
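The lightweight version described above fits in a stock pull-request template. A hypothetical sketch of what such a `.github/pull_request_template.md` might contain (this is my illustration, not the commenter's actual file):

```markdown
<!-- Intent annotation: 2-3 sentences, ~30 seconds -->
## Intent
What is this change for, and why now?

## Tradeoff accepted
What did you knowingly give up (performance, generality, readability)?

## Watch in prod
Which metric or log line would show this going wrong?
```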
When you say these things, are you suggesting that backend teams aren't needed because a single project manager can do it all, and that ten years of experience is now worthless? Do you really feel AI writes code at a senior engineer's level?
And the conclusion here is that, deep down, we all believe this as well? And so the only way to protect ourselves is by coming out and saying AI isn't as good as a senior, trying to convince people it's worse than what it really is? Hence, we're gatekeeping?
There is another explanation: maybe AI really isn't at that level yet. What it can do is impressive, and many people find it helpful to integrate it into their workflow, but it's not senior level yet. Which means it's not gatekeeping for people to avoid an over-reliance on AI code, or for open source projects to want to avoid a flood of low-quality AI-generated PRs; it's just them being realistic about how things are today. Perhaps things will change in the future, like you said, but for now, AI just isn't there. Or, at the very least, it would be good to accept that people honestly believe AI isn't there, so when they limit its use, it's not because they're trying to be dishonest and gatekeep, but because they honestly feel that code quality would be worse without those limits.
@thescottyjam Good pushback. I agree, a lot of the caution around AI PRs or over-reliance isn't gatekeeping; it's realism about current limits. Claude can generate solid code, but it's not refactoring legacy systems with deep context or making tradeoffs under real constraints yet.
I'm not saying AI is already senior-level (it's not), just that the panic often focuses on "it'll replace us" instead of "it shifts what we gatekeep." The honest belief that "it's not there yet" is valid; it's why accountability and judgment stay human for now.
What limits have you seen most clearly in practice that keep it from senior territory?
To be honest, I don't love comparing LLMs to Junior or Senior programmers, since it's extremely different in capabilities from either. It knows a lot about a huge range of topics, more than any single programmer could ever hope to know, and it knows how to solve straightforward problems or problems that have been solved many times over much faster than any individual programmer could ever hope to do.
But that's about it.
It's really bad at assessing the pros and cons of different approaches in the context of the company's goals, problem solving, coming up with alternative approaches, asking for clarification, learning, prioritizing what's most important, coming up with ideas to improve the product, isolating nasty bugs, and so forth.
Sure, AI can do some of this stuff to a small extent when explicitly prompted, but seniors tend to be capable of doing most or all of it fairly well. Juniors of course won't be as skilled in these areas, but they still generally do better than AI - a junior's more likely than AI to be able to figure out how to isolate a nasty bug, come up with good ideas to improve the product, and so forth.
So an AI's programming skill is probably at junior level (maybe a little higher), assuming the junior is aware of a really wide range of topics, but the other important skills just aren't there yet.
@thescottyjam I agree, AI's breadth is insane for known/solved problems, but it falls flat on context-aware tradeoffs, bug isolation, prioritization, and product ideas. That's why I see it as a force multiplier for engineers who already have those skills, not a replacement for them.
The "junior-level on syntax, but missing senior judgment" framing makes a lot of sense. it explains why the panic feels overblown to some and terrifying to others. The gap in those "soft" engineering skills is what keeps humans essential.
What do you see as the biggest current weakness in AI for those contextual/prioritization areas?
I totally agree with your point, but the real problem is that juniors are using AI to write all the code without understanding the logic behind it, because they are not aware of the consequences they will face if they don't learn the fundamentals. This will create a significant gap between junior and senior developers, meaning a junior developer may never become as skilled as a senior developer over time. The question is: who will fill the gap?
@rohit_giri Thanks, yeah, that's the core junior trap I'm worried about too. AI lets them ship CRUD fast, but skips the "why this fails at scale" scars that turn juniors into seniors.
The gap widens if we don't force fundamentals somehow. My next piece is digging into exactly that: how do we rebuild the ladder so juniors still learn logic and consequences, not just prompts?
What do you think the first "must-learn" fundamentals are that AI can't shortcut effectively?
Nailed the real threat. AI doesn't commoditize coding; it exposes gatekeeping around system design + judgment.
The junior trap: Can build CRUD apps. Senior unlock: Can design sharding, state machines, recovery systems that don't explode at scale.
2026 reality: AI handles 80% syntax/boilerplate. The 20% architecture/tradeoff decisions = 10x salary gap.
AI amplifies engineers who already think in systems. Gatekeepers panic because they can't articulate "why" anymore.
Perfect reframing. The winners are already building.
@charanpool Thanks, that summary is sharper than what I wrote.
The CRUD vs sharding/recovery distinction is exactly the junior-to-senior unlock, and the 80/20 split driving salary gaps is spot on. AI is forcing everyone to prove they can still answer "why this way and not that way" when the code is mostly generated.
Glad the reframing landed and yeah, the people already thinking in systems are quietly pulling ahead. What part of the architecture/tradeoff side do you see AI struggling with most right now?
@dannwaneri Thanks! Appreciate the shoutout.
AI's architecture blindspot: Contextual tradeoffs requiring business/team constraints. It spits out "fast" or "modular" code but can't weigh tomorrow's scale vs today's MVP deadlines, org topology, or infra costs.
Live example: Sharding decisions. AI generates horizontal partitioning fine, but misses when vertical microservices + read replicas beat it for your 10M QPS + 5 engineers reality.
Systems thinkers thrive by asking "under what constraints does this fail?", the questions LLMs dodge. The gap widens.
What's your take on AI governance patterns that actually stick?
@charanpool Thanks.
Spot on. AI ignores the messy constraints (deadlines vs scale, team size, infra costs). Your sharding example is perfect. The real gap is asking "under what constraints does this fail?", and LLMs dodge that hard.
What's one governance pattern that's actually helped you force those contextual tradeoffs?
Really thoughtful piece. The distinction between 'programming' and the 'social architecture' around it is a masterclass in naming the problem. We aren't mourning the loss of writing code; some are mourning the loss of the exclusivity of being a coder.
I'm particularly interested in your questions about accountability. If 'code review theater' is dead, we're forced into a world where we have to be much more honest about our technical debt. AI makes it incredibly easy to build a 'house of cards' that looks like a mansion. My big worry: as the volume of code explodes, will our 'judgment' be able to keep up, or will we just end up with 'AI reviewing AI' until the whole system becomes a black box? The gap you mentioned, knowing what's worth building, is the only high ground left.
@shalinibhavi525sudo Thanks, "mourning the exclusivity" is exactly it. The social architecture was never just about code quality; it was about who gets to claim expertise.
Your house-of-cards point is sharp. AI makes beautiful mansions easy, but debt explodes, volume overwhelms, and "AI reviewing AI" risks a black-box mess. The only defensible ground left is judgment on "what's worth building" and honest ownership of the mess.
How do you see teams avoiding that black-box trap right now? More human review layers, or something else?
The 'AI reviewing AI' black-box trap is exactly the nightmare scenario. It's like having two people who don't speak the language trying to proofread each other's translation; eventually, you just get a beautiful-looking hallucination.
From what I'm seeing, the teams avoiding the 'house of cards' trap aren't just adding more human review; they're actually changing the level of the review. Instead of nitpicking lines of code (which is where we get overwhelmed), the focus is shifting to Architectural Audits.
We're seeing a move toward Spec-Driven Development, where a senior dev's job is to ruthlessly define the constraints, the why, and the edge cases before the AI ever touches a key. If the human owns the blueprint and the accountability, the AI just becomes a high-speed bricklayer.
It's a massive shift for juniors, though. We have to figure out how to teach them judgment when they aren't spending years in the syntax trenches anymore. I suspect the defensible ground isn't just knowing what to build, but being the person who can spot when the mansion the AI built doesn't actually have a foundation.
It's a wild time to be in the room, isn't it? I'm still hopeful, but I'm keeping my accountability hat on tight!
@shalinibhavi525sudo This is brilliant; the "two people proofreading translations in a language they don't speak" analogy nails the black-box nightmare.
Shifting to architectural audits + spec-driven development makes total sense: human owns the blueprint (constraints, why, edge cases), AI lays bricks fast. That keeps the foundation solid instead of building pretty houses of cards.
The junior piece is exactly about recreating judgment without the old syntax trenches: how do we teach spotting "no foundation" when AI makes the mansion look flawless?
It's definitely a wild time. Keeping the accountability hat on tight feels like the only sane move right now.
This perspective exposes a critical shift in our industry. AI tools indeed challenge our perception of what constitutes a "real developer". The real questions revolve around accountability and oversight when using AI. If we trust AI to generate code, who is responsible for its maintenance, especially in production? As we move towards increasingly integrated AI systems, we must prioritize strong architectural decisions and robust practices. This isn't just about maintaining control; it's about ensuring our systems remain reliable and comprehensible as we embrace these new technologies.
@theminimalcreator Thanks, exactly: the shift isn't about generation; it's about keeping humans meaningfully responsible for reliability and comprehension.
When AI generates 80% of the code, the architectural decisions and oversight become the only thing keeping systems from becoming unmaintainable black boxes. Prioritizing those "strong practices" is the real unlock.
What's one oversight pattern you've found most effective when integrating AI-generated code into production systems?
This piece cuts through a lot of noise, and I appreciate that it refuses the easy villain narrative.
What AI seems to threaten isn't "software development" so much as unexamined comfort. The parts of the job built on repetition, ceremony, and accidental gatekeeping feel exposed, not because AI is magical, but because they were never the essence of the craft to begin with.
The irony is that good engineers already work with abstraction, automation, and leverage. AI is just another layer, louder, faster, and more visible, forcing the question many avoided: what value do I actually add beyond syntax and recall?
Gatekeeping panic often reads less like fear of replacement and more like fear of re-evaluation. That's uncomfortable, but not new. Every major shift in tooling has asked the same thing.
Strong article. Clear-eyed without being dismissive.
@canabady Thanks, "unexamined comfort" is a perfect way to put it.
AI isn't threatening the craft itself; it's exposing the parts that were never the core: repetition, ceremony, accidental prestige. Good engineers have always lived on abstraction and leverage; this is just a louder version forcing the "what do I actually add?" question.
The panic often feels like fear of that re-evaluation more than fear of replacement. Glad the piece cut through the noise for you.
The race to the bottom is real, but I think there's a counter-trend emerging: AI-augmented consultants who charge more because they deliver 10x the output.
The threat isn't AI replacing devs; it's the gap between devs who treat AI as a force multiplier vs those who don't use it at all. The former can do architecture, implementation, testing, and deployment in the time it used to take just for the spec.
The accountability question @nedcodes raised is key. Ownership doesn't disappear because the tool changed; it concentrates. One senior + AI stack can own an entire system, but they need the judgment to know when the AI is confidently wrong.
We're seeing this play out in consulting already. Solo operators with strong AI workflows are outcompeting 5-person agencies on both speed and quality.
That smells like AI hype.
An agency combines multiple skills. While a person can be good at multiple skills, most people excel at one thing and that is going to be the main driver.
The problem is: how will people know whether that person is only good at one skill and relies on AI for the rest, or is someone who can intervene when AI causes problems on multiple fronts?
Delivering with speed and quality on tasks that are already known is not a big achievement. They should have done it with a lot of unknowns and proved they can stand that pressure.
There are people who excel at multiple skills, but they are rare. And they were asking what they are worth even before AI.
@xwero Fair, this does smell like hype in places.
You're right, agencies aren't just speed; they're bundled skills, crisis handling, and proven depth under unknowns. AI lets one person fake breadth for routine work, but when pressure hits multiple fronts, the cracks show fast. Clients can't easily spot "real polymath" vs. "AI crutches + one good skill."
That transparency gap is the quiet killer; it ties straight to the accountability question... who owns it when the solo + AI stack breaks?
You said you're not on the business side, but have you seen any signals that clients are starting to demand more proof of "human depth" beyond speed/quality demos?
@chovy Great counterpoint. Yes, the emerging "AI-augmented premium" tier is real and exciting. Solo operators with strong workflows outpacing agencies shows the force-multiplier effect in action.
The ownership concentration angle is sharp: one senior + AI can own the full stack, but only if they have the judgment to catch confident hallucinations. That gap between multipliers and non-users feels like the new divide.
Have you seen pricing models shift yet (e.g., consultants quoting higher for "verified judgment" deliverables)? Or is it still early days?
i've caught my own AI tools introducing subtle bugs that look completely intentional, like renaming a variable to something more "descriptive" and breaking a reference three files down. the output looks professional and passes a quick review. you almost have to assume it's wrong and verify, which kind of defeats the speed advantage for solo operators
@nedcodes Thanks, this is exactly the trap.
AI renaming a variable to "more descriptive" and silently breaking references three files down is insidious because it looks professional, passes quick review, and still compiles. You end up assuming it's wrong and verifying everything, which kills the speed gain.
It's the confident hallucination problem again: output is polished, but ownership and deep understanding stay with the human. The solo operator still needs strong verification habits to not get burned. What's your go-to check when something looks "too clean" after an AI touch?
both? mostly peace of mind during the week. but the real value shows up friday when i skim through and notice the agent touched files i didn't ask it to touch. that's usually where the sneaky stuff hides. it's less about catching individual bugs and more about seeing drift over time.
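One cheap verification habit for the rename failure mode described above: after an AI-assisted rename, search the whole tree for the old identifier before trusting the diff. A minimal sketch; the helper name and the word-boundary heuristic are illustrative assumptions, and it will also flag the old name inside strings and comments:

```python
import re
from pathlib import Path

def dangling_refs(root: str, old_name: str) -> list[tuple[str, int]]:
    """Find lines that still mention a renamed identifier.

    After a rename, any surviving use of the old name is a likely
    broken reference worth a manual look. Word boundaries keep a
    search for `user` from matching `username`.
    """
    pattern = re.compile(rf"\b{re.escape(old_name)}\b")
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if pattern.search(line):
                hits.append((str(path), lineno))
    return hits
```

Run weekly against the files the agent touched, this doubles as the "drift over time" skim: zero hits means the rename at least didn't leave obvious stragglers.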
@richardpascoe Haha, yeah, the paradox is wild. Godot maintainers buried under AI-generated slop PRs while Unity's CEO is out here promising "AI-native game creation by March." It's the perfect snapshot of the moment: one side fighting quality erosion, the other side selling the dream.
Feels like the open-source world is the canary in the coal mine for unchecked generation.
@richardpascoe I agree the invisibility is what makes it dangerous. Most devs are heads-down shipping, not tracking Godot burnout or Gentoo's GitHub exit over Copilot concerns. The contradictions (Microsoft pushing it while reportedly avoiding it internally) are glaring once you look.
It's not niche drama; it's a slow structural crack in OSS norms and trust. I'm definitely thinking about pieces on platform governance/corporate influence and long-term community impact; the junior pipeline is next, but these are high on the list.
What do you see as the earliest visible signal that OSS sustainability is cracking wider?
@richardpascoe I hear you; the "embrace, extend, extinguish" shadow over GitHub is hard to ignore, especially when the contradictions keep piling up. The fear that today's senior becomes tomorrow's AI analyst (and then nothing) is real for a lot of people, and it's not unfounded when you see maintainer burnout and role reshaping happening in real time.
What keeps me from full doomer mode is the belief that the high ground (judgment, ownership, "what's worth building") doesn't get replaced; it just gets rarer and more valuable. But you're right that the danger is present and invisible to most heads-down devs.
I'm planning to dig into platform governance and long-term OSS impact soon; the junior piece is first, but these threads are helping shape what comes next. What one concrete step do you think could slow the erosion before it's irreversible?
@richardpascoe Thanks for the kind words and for bringing the bigger picture into focus. I appreciate it more than you know.
I agree the uphill battle is real, especially when contrarian noise drowns out the actual dangers. Stepping back makes sense when the energy cost gets too high; I've felt that pull myself sometimes.
Wishing you clarity and peace too, my friend. Thanks for the conversation, and take care out there.
Accurate and uncomfortable.
Every generation says "this tool will ruin real developers", then quietly uses it in production. Stack Overflow walked so AI could run.
The funny part is that AI isn't exposing bad code. It's exposing bad accountability.
If no one owns the decisions, it doesn't matter whether the code came from a human, AI, or a haunted keyboard.
Tools evolve. Judgment, context, and ownership are still the real senior skills.
Gatekeeping was never about quality; it was about scarcity. AI just broke the illusion.
"AI isn't exposing bad code. It's exposing bad accountability."
That's the line, and better than anything I wrote.
Detection tools keep missing this: you can't scan your way to ownership. The question was never "did a human write this"; it was always "who answers when it breaks."
The haunted keyboard ships fine if someone owns the decision. That's the whole thing.
Exactly. Ownership is the missing layer everyone keeps trying to automate.
Code reviews, tests, and "AI detection" all help, but none of them replace someone standing behind a decision. If it ships, someone owns it. If it breaks, someone answers.
The haunted keyboard analogy holds up surprisingly well.
Tools don't kill quality; lack of responsibility does.
"Someone standing behind a decision" is the part that doesn't automate.
Every layer we add (reviews, tests, detection) is trying to substitute for that. None of them do; they just make the absence more visible when something breaks.
I don't consider myself a developer by any means, but AI helped me build a full stack web app. My main concern, as someone who isn't technical, is that you are fully at the mercy of AI in terms of code integrity, quality, and security, and that makes me feel uneasy.
@rkaspars Thanks for this perspective; it's really important.
I agree ai lets non-technical people ship full apps fast, but the unease about code integrity, quality, and security is real. When you're not deep in the syntax or debugging trenches, you're trusting the AI's output more than most devs do and that's where the black-box risk hits hardest.
The mitigation is usually shifting to ownership (e.g., "I understand the blueprint and can verify the big pieces") rather than line-by-line code. I am curious.what makes you feel most uneasy ?
It's a paradigm shift for sure. But for me it's like "the blind leading the deaf." Since I'm working on a rather complex fintech app, code security is a big moat for me. The uneasy part is that I have to blindly trust AI, with the constant worry of whether it did the right thing and whether the code is actually secure or just says it is, only to later find out I have major security vulnerabilities in it. I will definitely invest more time in learning at least the basics.
@rkaspars I agree: for complex domains like fintech, the "blind leads deaf" feeling is real. AI gives you massive speed to ship, but security/integrity is a moat you can't outsource; if it hallucinates a vuln or weak crypto, you own the fallout. That constant worry ("is it actually secure or just saying it is?") is exactly the accountability gap the piece is about.
Investing in the basics is smart. Even if you're not deep in syntax, understanding "what could go wrong" and "how to verify" becomes the defensible ground...
Great post! AI isn't here to replace developers, but to challenge how we define what a real developer is. The real issue is not whether AI can write code, but how we've used gatekeeping and credentials to create those boundaries. What matters most is the judgment, accountability and context that only humans can provide. The developers who will succeed are the ones who can ask the right questions and make smart decisions.
Judgment and context are the real moat. Agreed.
The line that matters to me is "we can't review code faster than AI generates it." That's not a moral panic, it's an operational bottleneck.
Practical mitigation I've seen work is shifting review from syntax to ownership artifacts:
If AI accelerates output, teams need governance that scales: fewer line-level debates, more clarity on blast radius, assumptions, and rollback paths. Otherwise the review loop becomes theater.
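To make "ownership artifacts" concrete, here is a hypothetical sketch of the kind of note that travels with an AI-assisted PR. The filename `OWNERSHIP.md` and the field names are invented for illustration; the point is that the author writes a few sentences of intent, blast radius, and rollback by hand before review starts.

```shell
# Hypothetical ownership note for an AI-assisted PR. The filename and
# field names are made up; adapt them to your team's PR template.
cat > OWNERSHIP.md <<'EOF'
Intent: why this change exists, in one sentence.
Tradeoff: what we gave up, and why that is acceptable.
Blast radius: what breaks if this is wrong, and who notices first.
Watch in prod: the metric or log line to check after deploy.
Rollback: the exact command or revert that undoes this change.
EOF
cat OWNERSHIP.md   # review starts by reading this, not the diff
```

Review then starts with the note: if the author can't fill in those five lines without the AI, that is the signal, long before anyone debates syntax.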
@dariusz_newecki_e35b0924c Thanks, spot on; the "can't review faster than it generates" bottleneck is the operational crisis right now.
Your mitigations are gold: the 2-3 sentence "intent + tradeoff + watch in prod" note forces real thinking, the "explain 3 lines" challenge without AI logs is ruthless for testing ownership, and spec-first constraints (edges, invariants, blast radius) before generation is exactly how to keep humans in the loop meaningfully.
This shifts review from theater to something scalable and defensible. I've seen similar patterns work in smaller teams: fewer line debates, more clarity on assumptions/rollback.
Have you found any pushback when introducing these (e.g., "too much ceremony"), or do they stick once the first bad PR gets caught?
There's usually pushback, but not because of ceremony.
It's because these rituals expose something uncomfortable: whether the reviewer actually understands what's being merged.
When review shifts from "does this compile?" to "under what constraints does this fail?", it raises the bar for everyone, including seniors.
What makes it stick isn't catching a bad PR. It's when teams realize these artifacts reduce cognitive load later, especially during incidents.
The bigger pattern I've noticed:
AI increases code volume, but it also increases false confidence.
These practices aren't about slowing velocity; they're about introducing friction at exactly the right layer.
The real scaling question isn't "can we review faster?", it's "can we make reasoning visible?"
@dariusz_newecki_e35b0924c This is spot on.
The pushback isn't ceremony; it's the discomfort of being exposed ("do I actually understand this?"). When review moves from "does it compile?" to "under what constraints does this fail?", it forces real ownership.
"AI increases false confidence" is the perfect framing: output looks professional, but the reasoning is often shallow. These practices make reasoning visible and reduce load later.
The Elnathan John framing is sharp. 'Social architecture around writing' maps perfectly to how the industry treats credentials. I've been using AI tools daily for months now, and the thing that actually changed isn't my output speed; it's which problems I bother solving manually vs. which ones I hand off. The gatekeeping that worries me more than 'is this AI code' is the accountability gap: right now nobody has good answers for who owns the output when the agent wrote 80% of it and the dev approved it in a 30-second review.
@nedcodes Thanks; this is exactly the kind of real-world extension the piece needed.
The Elnathan framing clicks even harder when you put it against daily agent use: it's not about speed anymore, it's about curating which problems stay human. And yeah, that accountability gap is the sleeper issue: 80% agent-written code rubber-stamped in 30 seconds leaves a massive "who owns the breakage?" hole.
Have you started experimenting with any rituals/processes to close that gap? Like mandatory "interrogation" notes, verification checklists, or even ownership logs for agent-assisted PRs?
Nothing formal. I just don't approve diffs without reading every file change, which sounds obvious, but it's tempting to skim when the agent touches 8 files and the first 5 look fine. The one thing that's helped is a cursorrule that logs changes to a changelog file, so at the end of the week I can scan through and catch anything I glossed over in the moment.
@nedcodes Thanks, this is exactly the kind of pragmatic fix I love hearing about.
The "read every file change" rule is obvious but powerful: AI touching 8 files tempts skimming, and that's where the subtle bugs hide. The changelog rule is smart; it gives you that end-of-week safety net without adding ceremony in the moment.
It shows how individual habits can close the accountability gap even when teams aren't ready for big changes. Have you found the changelog catches stuff you actually missed on first pass, or is it more peace-of-mind?
Honestly, I just diff everything now: git diff --stat before accepting anything, then I read every changed file even if Cursor only touched one. The changelog rule helps too, because at the end of the week I can spot patterns, like "oh, it kept renaming variables in that module." It catches stuff maybe 1 in 10 times, but that one time saves hours of debugging something that compiled fine but broke at runtime.
@nedcodes This is brilliant: "git diff --stat" plus reading every changed file is the kind of simple habit that saves real pain.
The changelog for end-of-week drift spotting is clever too: catching "it touched files I didn't ask for" is huge when AI quietly expands scope. That 1-in-10, save-hours moment is exactly why these rituals stick.
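The "diff everything" habit above boils down to two git commands. The throwaway repo in this sketch exists only so the example is self-contained and runnable; in real use you would run the last two commands in your own working tree before accepting an agent's changes.

```shell
# Scratch repo so the sketch runs anywhere (real use: skip this setup).
repo=$(mktemp -d) && cd "$repo"
git init -q
echo "original" > app.py
git add app.py
git -c user.email=dev@example.com -c user.name=dev commit -q -m baseline

# Simulate an agent edit that quietly expands scope beyond the one file
# you asked about: it modifies app.py but also creates util.py.
echo "changed" > app.py
echo "surprise" > util.py
git add util.py

# The ritual: scope check first, then read every changed file.
git diff --stat HEAD         # every touched file, with a rough size per file
git diff --name-only HEAD    # the reading list; walk it file by file
```

The `--stat` pass catches "it touched 8 files when I asked about 1" before any line-level review happens.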
I see this shift too. We are moving from 'Code Authors' to 'Infrastructure Owners' in my view.
If AI generates the Terraform code, fine. But if I merge it, I own the state file. If I deploy the Lambda, I own the cold starts.
The gatekeeping based on syntax knowledge is dying, I think. The gatekeeping based on
'who can actually fix production when the AI hallucinated a wrong config' is just beginning.
Great perspective, Daniel, on accountability vs. creation. I hope you keep up the good work.
@alifunk Thanks, love the "code authors to infrastructure owners" framing. Spot on.
AI can spit out Terraform or Lambda code fast, but merging/deploying means you own the state file mess, cold starts, or hallucinated config that takes down prod. Syntax gatekeeping is fading; ownership gatekeeping (who fixes the 3am hallucination) is rising fast.
That's the real shift: accountability isn't optional anymore.
As someone working in this space, what's one production horror you've seen (or prevented) where ownership made the difference over generation?
I'm currently deep-diving into Terraform, and I test exactly these AI scenarios in my labs.
I asked an AI to refactor some resource definitions to match best practices.
The code came back clean and valid.
But because it changed the logical name of a database resource, terraform plan showed a forced replacement (destroy/create).
In a lab, it's a learning moment. In production, without reviewing the plan, it's a resume-generating event.
The AI optimized for syntax but completely ignored the state. That's the danger.
@alifunk Thanks, that's a perfect, brutal example.
AI nailed the "clean and valid" syntax refactor but completely ignored the state implications (logical name change → destroy/create). Lab: learning moment. Prod: resume event. It optimizes for what looks good on screen, not what survives in reality.
This is why ownership and review have to stay human, especially for IaC, where state is unforgiving. Have you found any prompts or rituals that force AI to at least flag potential state changes?
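One gate that helps with exactly this failure mode (a sketch, not a full pipeline): `terraform plan` prints "must be replaced" next to any resource that would be destroyed and recreated, so even a grep over the plan text can stop an apply. The plan output below is faked inline (the resource address is invented) so the example is self-contained; in CI you would feed it real `terraform plan -no-color` output and exit nonzero on a match.

```shell
# Fake plan output standing in for: terraform plan -no-color > plan.txt
# The resource name is hypothetical; real plans use this same wording.
cat > plan.txt <<'EOF'
  # aws_db_instance.main must be replaced
-/+ resource "aws_db_instance" "main" {
EOF

# The gate: a forced replacement means destroy/create, so stop and make a
# human read the plan (in CI, `exit 1` here instead of just printing).
if grep -q "must be replaced" plan.txt; then
  echo "DESTRUCTIVE PLAN: forced replacement detected, manual review required"
fi
```

It doesn't make the AI state-aware, but it guarantees a renamed resource can't slip from "clean and valid" straight into a destroy.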
The framing of "gatekeeping panic" is exactly right.
What AI actually threatens: the ability to charge premium rates for shallow work. Writing boilerplate, building the same CRUD app for the nth time, copy-pasting Stack Overflow answers: that stuff gets commoditized first. That's not gatekeeping, that's just market correction.
What doesn't get commoditized: judgment about which problem is worth solving, ability to debug the thing that fails in a way nobody expected, knowing which trade-offs matter in your specific context. Those require experience + domain knowledge. AI doesn't have that.
The developers who are panicking are often the ones who've been paid senior rates for junior work. That's uncomfortable but it's worth being honest about.
"Paid senior rates for junior work." That's the line.
The panic makes more sense through that lens: it's not fear of replacement exactly; it's fear of accurate pricing. AI doesn't eliminate the role, it just makes the rate compression visible faster than anyone expected.
The uncomfortable part: some of that mispricing was deliberately obscured. Credential theater, whiteboard interviews, YoE requirements: none of that was measuring judgment; it was measuring access.
Curious whether you think the correction happens gradually or hits a cliff. My sense is gradual for experienced people with real domain depth, sudden for anyone whose value was mostly in the credential.
Interesting take on how AI disrupts gatekeeping in software dev rather than developers themselves. Honestly, I agree: AI excels at routine tasks like boilerplate code and testing, freeing humans for architecture and innovation, but it amplifies those who adapt while exposing skill gaps. The real threat is complacency, not replacement; teams embracing AI as a partner will outpace others in 2026. Great read to spark that shift in mindset.
The framing here is spot-on. The threat isn't "AI replaces programmers"; it's that AI raises the floor so high that "can write code" stops being a differentiator.
I've been thinking about this from the hiring side: when every candidate can ship a working CRUD app using Cursor, what do you actually interview for? We've started focusing almost entirely on judgment calls ("here's a tradeoff, which way do you go and why?") because that's the skill AI can't replicate yet.
The uncomfortable part: this means a lot of junior roles are going to require skills we used to expect from mid-levels. The entry point is shifting up.
Curious how other teams are adapting their hiring for this?
"Judgment calls: here's a tradeoff, which way do you go and why?" is exactly the right pivot. That's the interview question AI can't prep you for by generating a correct answer, because there isn't one.
The uncomfortable extension of your point: if entry-level now requires mid-level judgment, we've skipped the ramp that builds mid-level judgment in the first place. The bar moved up, but the ladder didn't come with it.
You're putting your finger on the uncomfortable part that a lot of teams are quietly feeling but not naming.
I'm seeing a similar shift: when anyone with Cursor/Copilot can ship a CRUD app, "can you build X?" is no longer the filter. The interviews that feel most useful now are the ones that stress-test judgment under constraints, not raw implementation.
For juniors, I suspect the bar will be less "can you write code?" and more "can you reason about code the AI wrote and take ownership of it?"
Curious: in your process, what's the single signal you've found most predictive that someone will own systems, not just ship features?
This really hits the right nerve. The panic isn't about AI writing code; it's about accountability and status.
The line about "who gets paged at 3am" says more than all the AI detector debates combined. Tools have always shifted the boundary of what "real programming" means. What doesn't change is responsibility.
AI speeds things up. It doesn't remove ownership.
Great perspective.
"Who gets paged at 3am" cuts through everything because it's not theoretical. You can't delegate that. You can't prompt your way out of it at 3am when the system is down and the context is all in your head.
AI speeds up the build. It doesn't show up for the incident.
I also think generative tools provide a kind of cognitive freedom. They free up mental resources so you can focus more on architecture and decision-making instead of getting stuck on implementation details.
They also make it incredibly fast to test ideas in practice. You can quickly validate a theory, compare approaches, and see which solution is actually simpler. I don't need to think about how to parse a date anymore - I know it's doable. In the past I would've Googled it or figured it out in an IDE anyway. The knowledge was accessible; now the execution is just faster.
Still, there's this lingering "no pain, no gain" thought. Maybe we sometimes force ourselves to struggle because we associate difficulty with growth. When something feels too easy, our brains almost resist it - like it doesn't count.
Future generations (assuming Skynet doesn't wipe us out, or we don't end up in some kind of Butlerian Jihad scenario) probably won't even question this shift. They'll just think - and the machine will write the code.
@nomad4tech both your comments are spot on and actually capture the tension.
"Cognitive freedom" is exactly right. AI clears the implementation fog so you can focus on architecture, decisions, and quick idea validation. The "no pain, no gain" guilt is real, but future devs probably won't feel it at all.
The "missing the moment" fear is the quiet one for a lot of us: falling behind if we don't adapt fast enough. You're right that complex systems still need foundational understanding and architectural thinking (boundaries, modularity) to even direct AI effectively. Otherwise you end up with black-box volume nobody fully owns.
The volume worry is huge too: if generated code everywhere makes starting fresh easier than understanding legacy, we risk a new kind of knowledge collapse.
Yeah, I've been thinking about knowledge from another angle.
Since generative AI mostly recombines existing knowledge and tools rather than creating fundamentally new ones (at least not at a native, foundational level), I wonder if we might eventually hit a kind of stagnation. A point where we just keep reusing and reshuffling old approaches instead of inventing new paradigms.
I'm not even sure how to frame this properly, but it feels like we could slowly lose the people who operate at the lowest levels: the ones who invent new abstractions, new primitives, new ways of thinking. If most work becomes orchestration and recombination, who's going to push the boundaries underneath?
Maybe I didn't express it perfectly, but that's the direction of my concern.
You're naming something real, and I don't think you've expressed it poorly at all; the concern is precise.
If AI mostly recombines existing knowledge, the people who generate new primitives become rarer and more valuable. But here's what worries me more: we might not notice the stagnation until we're deep in it. Recombination looks like progress. New abstractions look like noise. By the time we realize we've been reshuffling the same deck for a decade, the people who knew how to invent new cards are gone.
This worries me too - not so much that AI will replace me, or that my knowledge will become useless (most of that knowledge was already in Google and Stack Overflow anyway). What actually concerns me is missing the moment - missing the opportunity to truly learn how to use this tool - and falling behind other developers who are already using it to build applications.
Yes, maybe a PM could generate a program. But it would likely be something relatively simple and limited in scope - especially if the manager isn't a former developer. For complex systems, you still need at least a foundational understanding. Ideally, you need architectural thinking: the ability to design boundaries, isolate services, and structure an application so it can grow in a modular way - almost like building with Lego blocks.
Working with generative tools makes much more sense in the context of well-isolated components or orchestrated services. Otherwise, the application can easily grow into something nobody fully understands. And at that point, no one will be able to clearly explain to the AI what needs to be added, changed, or fixed.
So in my view, if a good programmer has their ego under control, this tool will simply make them an even better programmer.
P.S. There's another concern: the sheer volume of generated code and applications might become so overwhelming that it will actually be easier to generate your own solution each time rather than trying to understand someone else's. But maybe that's a topic for another article - I've run out of tokens for now.
Really thoughtful post.
I agree the real issue is not AI replacing developers, but what happens to learning and judgment when code is generated instantly. AI can help us move faster, but we still need strong fundamentals and ownership when things break.
Curious to see how this balance evolves.
I agree the "AI detector" discourse is mostly noise and that accountability matters more than authorship, but I think the post over-credits gatekeeping as the primary driver of concern and underplays two very practical issues:
1) reliability under uncertainty: LLM output can look plausibly "done" while quietly violating invariants, security assumptions, or domain constraints, and the cost of discovering those mismatches in production is often nonlinear, and
2) the compression of skill formation: if teams replace exploratory junior work with AI throughput, you don't just lose "judgment," you lose the dense feedback loops that create shared mental models (debugging, tracing, postmortems, reading legacy code) that make accountability meaningful rather than ceremonial.
In other words, it's not simply that "social hierarchy" is threatened; it's that AI increases the surface area for subtle failure while simultaneously incentivising organisations to erode the very learning and review capacity needed to manage that risk. So the response can't be only "judgment and accountability"; it also has to include process and architectural changes that make correctness, security, and operability easier to verify than to assume.
"Accountability meaningful rather than ceremonial." That's the gap in the piece, and you've named it precisely.
Owning the commit without understanding what you merged is ceremonial accountability. The skill-formation argument in my junior piece gets at part of this. But you've identified something more specific: it's not just judgment that atrophies, it's the shared mental models (postmortems, tracing, reading legacy code together) that make judgment possible. That's where teams build the common language that makes "who's accountable" mean something beyond a name on a commit.
The process/architectural answer is the part I haven't written yet. Allen Holub's thread today is pointing at the same gap from the architecture direction: bad structure makes verification expensive; good structure makes it tractable. Curious what that looks like in practice for teams already mid-restructuring, where the engineers who understood the system are already gone.
The "who gets paged at 3am" framing is exactly right: accountability is the filter that AI can't automate away.
I've been running an automated content pipeline with multiple AI agents for a few months now. The thing that surprised me most: even when AI writes 90% of the code/content, someone still has to own the blast radius when things go wrong. That ownership (knowing the system deeply enough to debug it at 2am) is still very human.
The shift I'm noticing is that "senior" increasingly means being the human in the loop who can validate AI outputs, not the person who can produce them. Orchestration over generation.
For developers worried about this transition: the ones thriving aren't fighting AI, they're building systems that leverage it, and staying the person responsible when those systems fail.
"Orchestration over generation" is the cleanest version of this I've seen. The blast radius point is important too: owning a system deeply enough to debug it at 2am isn't something you can acquire after the incident. You build it by running the system, making mistakes, and living with them. The worry I have is that if AI handles the routine work that builds that familiarity, the people who come after us might own systems they fundamentally don't understand. They'll be good at the normal case and helpless at the edge case.
Really thoughtful take.
It doesn't feel like AI is threatening software development itself; it's more like it's challenging how we define value as engineers.
If writing code becomes easier, then system thinking, trade-offs, and responsibility probably matter even more.
Curious how you think this shift will affect junior engineers entering the field?
That's exactly the tension I keep coming back to. If the bottleneck shifts from code production to judgment about systems, the people who never got to build judgment because AI handled the reps end up stranded at the transition. I'm writing a follow-up specifically on junior engineers. Should be out in a few days.
Really thought-provoking piece. I've been running an AI agent 24/7 on a Mac Mini: it handles cron jobs, content publishing, health checks, even spawns sub-agents for specific tasks. Built 80+ automation scripts in a few days.
What I've found: AI doesn't replace thinking. It replaces the boring parts so you can think more. The gatekeeping panic misses this; the people who'll thrive aren't 'prompt engineers,' they're the ones who understand what to build and can verify AI's output.
The real threat isn't AI writing code. It's developers who refuse to evolve their role from 'code typist' to 'system architect.'
@maxxmini
Running a 24/7 AI agent on a Mac Mini + spawning sub-agents + 80+ scripts in days is the exact multiplier scenario the piece is pointing toward. AI takes the boring/cron/content grind, leaving you to think about what matters.
Your line "from 'code typist' to 'system architect'" is perfect: the people thriving aren't prompt wizards; they're the ones who own the architecture, verify output, and decide what's worth building.
I'm curious: in your setup, what's the hardest part of verification when the agent touches many files/tasks? Any quick rituals that help catch drift or hallucinations early?
Context is the moat. AI has no memory of your last production incident. You do. That's worth protecting.
I really like how you framed this. It's essential to be asking the right questions, questions that are grounded in reasoning, not purely emotional.
@javz Thanks, appreciate that.
Grounded questions over emotional panic is the whole goal. Glad the framing landed for you.
The uncomfortable truth is AI doesn't threaten programming. It exposes shallow programming.
If your value was typing speed or syntax recall, yeah, you're in trouble.
If your value is judgment, architecture, tradeoffs, and ownership, you're more valuable than ever.
@heytechomaima Nailed it: "AI exposes shallow programming" is the uncomfortable truth in one line.
If your edge was syntax recall or typing speed, it's gone. If it's judgment and ownership, you're indispensable. That's exactly where the reframing lands for me too.
Thanks for the sharp summary.
Appreciate that.
The shift toward ownership and judgment feels inevitable now.
There is a difference between software engineering practices and writing code.
Many things like static code analysis, all sorts of test automation, code reviews, pen tests, coding conventions, documented architecture decisions, secure coding, clean code and GoF pattern discussions after 3 beers exist to protect our code bases from bad decisions.
These practices are now more important than ever, since this overconfident know-it-all with 5 PhDs in algorithms has shown up who has no clue about anything beyond its context window.
@sebhoek Spot on; this nails why the safeguards you listed (tests, reviews, clean code, even those post-3-beers GoF debates) are non-negotiable now more than ever.
AI's "5 PhDs in algorithms" confidence is impressive for generation, but it stops at the context window; human judgment fills the rest.
@leob Thanks for the "Amen" and seconding the summary. That "What this means" bit is where the reframing felt strongest for me too.
Thank you for this post and bringing this topic to light.
Amen - seconding this, especially how you summarized it in the last section "What this means" !
(I myself have been working on AI text detection as a side project and have come to understand its limits. If you have some time, I would love some feedback. You can get it on the Chrome Web Store; search HiYo.)
Nicely done.
Programming is thinking, not typing.
The brave new world, that's for sure.
all that info