I remember times when solving a problem in code meant adding print statements, commenting out blocks of code, rerunning the code endlessly… Sometimes you would just stop and do something else, then you would get an "Aha" moment out of nowhere: It was just an unchanged parameter. Now I don't remember the last time I did this.
Yesterday, I described a bug in my Python code to Claude. It wrote a fix in 10 seconds. This is the paradox we are living through. I'm watching the profession I've spent years training for transform and lose its soul so radically that I sometimes wonder if I chose the wrong major. Engineering - and I mean real engineering, the kind where you spend days wrestling with memory leaks and segmentation faults - is fading away, if not gone already.
The evidence of death is everywhere around you. My friends who graduated a couple of years ago walked into junior developer roles with barely a portfolio and a bootcamp certificate. They learned on the job: fixing bugs, implementing features, doing the unglamorous work that taught them how systems actually worked. Today, those same companies aren't hiring juniors. Not because they don't need the work done (they do), but because AI is doing it. Faster, cheaper, and increasingly better than a nervous 22-year-old on their first day.
GitHub Copilot writes boilerplate faster than I can type. ChatGPT debugs my code while explaining what went wrong. I watched a classmate build and deploy a full-stack application in a weekend - something they could not have done without the help of AI.
At first, this felt like cheating. Weren't we supposed to suffer through coaxing grids and flexboxes into their intended positions? Wasn't the struggle the point? I resisted using AI tools for months, feeling a pang of guilt every time I accepted a suggestion from an AI assistant, convinced that I was preserving something sacred: the purity of solving problems with my own mind, my own fingers on the keyboard. I looked down on classmates who leaned heavily on AI, thinking they were taking shortcuts, that they weren't "real" engineers.
I was wrong. Or rather, I was clinging to a definition of "real" that was already obsolete.
Now there is another way, and it's revealing an uncomfortable truth: much of what we called "engineering" was actually just implementation. Translating ideas into code, fighting with compilers, reading documentation, copying patterns from Stack Overflow. Important work, certainly, and not necessarily bad either. But was it engineering? Or was it just… typing? The thing is, I loved that typing. I loved the flow state of solving a problem piece by piece. I loved feeling clever when I finally figured out why my async function wasn't awaiting properly. There was honor in the struggle.
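If you've never hit that particular bug, here's roughly the shape it takes - a minimal Python sketch with invented names, not my actual code:

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.1)  # stand-in for a real network call
    return {"id": user_id, "name": "Ada"}

async def main():
    # The classic mistake: forgetting `await` gives you a coroutine
    # object instead of a result, and the call never actually runs.
    # user = fetch_user(42)        # bug: user is <coroutine object ...>
    user = await fetch_user(42)    # fix: user is the dict we wanted
    print(user)

asyncio.run(main())
```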
But here's what I couldn't admit to myself: I wasn't mourning the loss of engineering. I was mourning the loss of the way I proved my worth. If AI can implement anything I describe, what am I worth? That question kept me up at night. It made me angry. It made me want to reject the tools entirely, to go back to doing things "the right way," even as I watched the world move on without me.
Here is the truth that I've been evading for a couple of months now: Engineering as we know it is dead. But it is being reborn into something that might actually be closer to what engineering was always supposed to be.
The king is dead. Long live the king.
I didn't want to write this article. I wanted to write a manifesto about how AI was ruining software development, how we were losing something irreplaceable, how the craft was dying. But as I started researching, as I talked to senior engineers and tech leads, as I forced myself to actually use these tools instead of avoiding them, I realized I was watching something else entirely: not the death of engineering, but its evolution from a primarily technical discipline into a strategic one.
This is a warning: if you're entering this field now, or if you've been coasting on technical skills alone, the ground is shifting beneath your feet faster than you realize. The profession is bifurcating. There will be room at the top for those who can think strategically, who can wield AI as a force multiplier, who can ask the right questions and make the right decisions. But the middle is collapsing, and the bottom - the entry-level positions where generations of engineers learned their craft - is vanishing entirely.
I'm still not happy about this change. Some part of me will always miss the days when engineering meant grinding through problems until 3 AM, when your value was measured in lines of code and bugs fixed. But acceptance isn't the same as happiness. And what I've come to accept is this: the engineers who thrive in this new era won't be the ones who write the best code. They'll be the ones who know what to build, why to build it, and how to orchestrate AI and humans together to make it happen.
That's a different skill set. A harder one, maybe. And it's the one I'm now racing to develop, even as I mourn what we've lost.
What We're Losing
Code as Craft
There was something almost monastic about the way we used to write code. You'd open your editor - maybe Vim if you were trying to prove something, VS Code if you were practical - and you'd start with a blank file. Line by line, function by function, you'd build something from nothing. Every variable name was a decision. Every loop structure was a choice. You felt the weight of it.
I remember spending an entire afternoon perfecting a recursive function. Not because it was complicated, but because I wanted it to be elegant. I refactored it four times, each iteration slightly cleaner, slightly more readable. When I finally got it right, when the logic flowed like water, I felt like a craftsman who'd just finished sanding a piece of furniture to perfection. No one would ever see those intermediate versions. No one would know how much I'd agonized over whether to use a ternary operator or an if-statement. But I knew. And it mattered.
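I won't reproduce the real function, but the shape of that afternoon looked something like this - several passes from a working-but-noisy draft to something that reads like the definition:

```python
# First draft: correct, but every line fights you a little.
def depth_v1(node):
    if node is None:
        return 0
    left = depth_v1(node["left"])
    right = depth_v1(node["right"])
    if left > right:
        return left + 1
    return right + 1

# Fourth draft: the same logic, reading like the definition of depth.
def depth(node):
    if node is None:
        return 0
    return 1 + max(depth(node["left"]), depth(node["right"]))

tree = {"left": {"left": None, "right": None}, "right": None}
assert depth_v1(tree) == depth(tree) == 2
```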
That's what we're losing. The craft of it.
We used to talk about code the way chefs talk about knife skills or painters talk about brushwork. There was pride in knowing six different ways to iterate over an array in JavaScript, in understanding the subtle differences between == and ===, in being able to spot a memory leak by instinct alone. We collected programming languages like merit badges. "I know Python, JavaScript, Go, Rust, and I'm learning Haskell," we'd say, as if each one made us more valuable, more legitimate. Do you remember the last time you had that kind of conversation?
Syntax mastery was currency. If you could write a list comprehension in Python that did in one line what took others ten, you were respected. If you could debug C++ template errors without crying, you were practically a wizard. We built our identities around these technical competencies, these hard-won skills that took years to develop.
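You know the party trick I mean:

```python
# The dutiful version...
squares = []
for n in range(20):
    if n % 2 == 0:
        squares.append(n * n)

# ...and the one-liner that earned you respect.
squares = [n * n for n in range(20) if n % 2 == 0]
```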
And it felt good. God, it felt good to be good at something that others found difficult.
But here's the thing I didn't want to admit: a lot of that "craftsmanship" was just memorization dressed up as expertise. Knowing that Python uses self and JavaScript uses this isn't profound insight—it's trivia. Being able to recall the exact syntax for opening a file in three different languages isn't wisdom—it's the kind of thing that made us feel smart but that Google could answer in half a second.
Still, there was beauty in it. There was satisfaction in solving a problem from first principles, in building something with your own mental model of how computers work. When you wrote a sorting algorithm from scratch, you understood sorting in a way that using .sort() never taught you. When you implemented a binary search tree by hand, you felt the data structure in your bones.
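If you never did it, it's worth one afternoon. Even the humble insertion sort, written by hand, teaches you things that calling .sort() never will:

```python
def insertion_sort(items):
    """Sort a list in place, the way you'd sort a hand of cards."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift everything larger than current one slot to the right...
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        # ...then drop current into the gap it left behind.
        items[j + 1] = current
    return items

assert insertion_sort([5, 2, 8, 1, 9]) == [1, 2, 5, 8, 9]
```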
I miss that feeling. I miss opening a problem and knowing that the only way through it was to think, deeply and carefully, about every single line I was going to write. I miss the meditative state of coding, where hours would pass and I'd look up to find it was dark outside, and I'd solved something complex and felt I'd earned the solution.
AI doesn't miss any of that. Because AI never experienced it. It just produces the answer, instantly, without the journey. And maybe that's more efficient. Maybe that's better. But something is lost when you skip the struggle. Something human.
The Traditional Career Ladder
A friend of mine barely scraped through his algorithms class and didn't go to a top school; his GitHub had maybe three projects, one of which was a half-finished todo app (pretty common back then). He did a twelve-week bootcamp in React, built a portfolio site that looked like every other bootcamp portfolio site, and got hired as a junior developer at a mid-sized company making good money.
That was 2022.
He spent his first year doing exactly what you'd expect: fixing bugs in the styling, adding form validation, implementing features that senior devs had already architected. Boring work. Repetitive work. The kind of work that made him question if he'd made a mistake choosing this career. But here's what happened during that boring year: he learned how a real codebase worked. He learned why certain patterns existed. He learned to read other people's code, to understand the trade-offs its authors had made, to see the difference between clever code and good code.
He learned by doing the unglamorous, repetitive tasks that no one else wanted to do.
By year two, he was trusted with more responsibility. By year three, he was mentoring the next batch of juniors. He wasn't exceptional - he was just following the path that thousands of engineers had walked before him. The 10,000-hour path. Start at the bottom, do the grunt work, absorb knowledge through repetition and exposure, gradually climb the ladder.
That path is gone.
I talked to him last month. His company just went through a hiring freeze - though that's not quite accurate: they're still hiring senior engineers. What they're not hiring is juniors. The junior role he got in 2022 doesn't exist anymore. Not because the company is struggling, but because GitHub Copilot does what he spent his first year doing. The bug fixes, the boilerplate, the straightforward implementations - AI handles that now, faster and without needing health insurance.
"So how do people learn?" they asked and sounded genuinely worried. Not for himself - he's safe, he's mid-level now - but for the people coming after him. How do you get experience when the experience-getting jobs have evaporated?
It's a real question, and I don't have a good answer.
The traditional career ladder wasn't particularly elegant or efficient, but it worked. Juniors did simple tasks and learned from them. Mid-level engineers did complex tasks and learned from mistakes. Seniors designed systems and learned from watching them succeed or fail at scale. Everyone paid their dues. Everyone climbed the same mountain, step by step.
Now the first hundred steps of that mountain have been removed. You either start halfway up - if you're talented enough, lucky enough, connected enough - or you don't start at all.
Bootcamps are still taking people's money, still promising career changes, still teaching React hooks and REST APIs. But what are they training people for? The jobs that wanted bootcamp grads wanted them because they were cheap labor who could handle simple tasks. If AI handles those tasks for free, what's the value proposition? "Learn to code" used to be empowering advice. Now it feels almost cruel.
The 10,000-hour rule hasn't been repealed - you still need experience to develop judgment, to understand systems, to know what good looks like. But the ways we used to accumulate those 10,000 hours, the entry-level positions where you learned by doing repetitive work, those are vanishing. We're creating a profession where you need experience to get experience, but the experiences that create experience no longer exist. A giant Catch-22 situation.
It's not sustainable. But it's happening anyway.
Engineering as Implementation
There was a time - and it wasn't that long ago - when the hard part of software engineering was figuring out how to build something. Someone would describe what they wanted, and your job was to translate that vision into working code. That translation was the skill. That was what separated engineers from non-engineers.
You'd get a ticket: "Add user authentication to the dashboard." Simple request, right? But implementing it meant knowing OAuth flows, understanding JWT tokens, figuring out where to store sessions, handling edge cases like token expiration and refresh logic, dealing with CORS issues, and probably spending two hours debugging why the redirect wasn't working in production even though it worked locally.
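Just the token-expiry piece of that ticket hides a surprising amount of judgment. A minimal sketch, with invented names and a stubbed-out auth server call:

```python
import time

class TokenStore:
    """Hypothetical sketch of the refresh half of that auth ticket."""

    def __init__(self, access_token, refresh_token, expires_at):
        self.access_token = access_token
        self.refresh_token = refresh_token
        self.expires_at = expires_at  # unix timestamp

    def get_valid_token(self):
        # Refresh a little early so a token never expires mid-request.
        if time.time() > self.expires_at - 30:
            self._refresh()
        return self.access_token

    def _refresh(self):
        # Real code would POST self.refresh_token to the auth server and
        # handle revoked tokens, clock skew, and concurrent refreshes.
        self.access_token = "new-access-token"
        self.expires_at = time.time() + 3600
```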
That was the job. Taking a concept and wrestling it into reality.
And to do it well, you needed deep knowledge. You needed to know your framework inside and out. You needed to understand your language's quirks and gotchas. You needed to have read enough documentation, made enough mistakes, and fixed enough bugs that you'd developed instincts for where problems lived.
To be honest, I was good at this. Really good. I spent hours reading through API documentation, understanding the difference between REST and GraphQL, learning the ins and outs of whatever framework my internship was using that summer. I built a mental encyclopedia of how things worked - how React's reconciliation algorithm optimized renders, how Python's GIL affected multithreading, how database indexes actually improved query performance.
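That encyclopedia was built from small experiments. The GIL entry, for instance, came from a demo like this one (on standard CPython - a free-threaded build behaves differently):

```python
import time
from threading import Thread

def spin(n):
    # Pure CPU work: the kind of load threads can't parallelize under the GIL.
    while n > 0:
        n -= 1

N = 10_000_000

start = time.perf_counter()
spin(N)
spin(N)
print(f"sequential:  {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [Thread(target=spin, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Roughly the same time as sequential: the GIL lets only one thread
# execute Python bytecode at a time.
print(f"two threads: {time.perf_counter() - start:.2f}s")
```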
This knowledge felt like power. When someone would ask, "Can we do this?" I could answer immediately because I knew the tools, knew the limitations, knew the patterns. I was valuable because I had accumulated this vast repository of implementation knowledge.
Stack Overflow was our collective brain. We'd encounter a problem, search for it, find someone who'd solved it three years ago, adapt their solution to our context, and move on. There was a whole culture around this - jokes about copying code from Stack Overflow, memes about changing variable names to make it "yours," the understanding that engineering was as much about knowing where to find answers as it was about having them memorized.
We were hunters and gatherers, searching for solutions across documentation sites and GitHub issues and blog posts and YouTube tutorials. The skill was in knowing where to look, how to search effectively, how to recognize a good solution versus a hacky one.
I spent countless nights on this hunt. I'd have ten browser tabs open - official docs, a Stack Overflow thread, someone's blog post from 2018, a GitHub issue, a Reddit discussion - piecing together a solution from fragments scattered across the internet. It felt like detective work. It felt like being smart.
And then Claude read my question, accessed the entire context of my codebase, and gave me the answer in thirty seconds.
Not a fragment. Not a clue. The complete, working solution, with explanations for why it worked.
The first time this happened, I was angry. It felt like cheating. It felt like the machine was stealing the part of the job that made me valuable. If anyone can just describe a problem and get a solution, what makes me special? What did I spend four years in college for?
But the anger faded, and what replaced it was worse: the realization that most of what I called "engineering" was just implementation. Just translation. The actually hard parts - deciding what to build, why to build it, whether it should be built at all - those were always different skills. We just conflated them because the same people did both.
We told ourselves that because implementation was time-consuming and sometimes frustrating, it must also be the hard part. But hard doesn't always mean valuable. And now that implementation is essentially free, we're forced to confront what our job actually is.
Or was.
The Transformation
The AI Revolution in Practice
Let me give you some numbers, because the numbers are what finally broke through my denial.
GitHub reports that Copilot now writes approximately 46% of the code in files where developers have it enabled. Nearly half. In some languages, it's over 60%. These aren't suggestions that developers reject; these are accepted completions that ship to production. McKinsey found that developers using AI assistants complete tasks 35–45% faster than those who don't. Google's internal studies showed that their engineers using AI tools spent 25% less time on code reviews because the AI-generated code had fewer bugs than human-written code.
I didn't want to believe these numbers. I thought they were inflated, cherry-picked, measuring the wrong things. So I ran my own experiment. I spent a week building a feature the old way: just me, my editor, and Stack Overflow. It took me about 20 hours across five days. The next week, I rebuilt the same feature from scratch using Claude and Cursor. Four hours. One afternoon.
The code quality? Honestly, the AI version was better. More consistent, better documented, fewer edge cases missed. That hurt to admit.
Replit, the online coding platform, reported that their AI-assisted users build projects 3x faster than without AI. Sourcegraph found that 75% of developers using their AI coding assistant Cody said it made them more productive. But here's the stat that really matters: companies are shipping products with engineering teams half the size they would have needed three years ago.
I know a startup that launched with four engineers instead of the eight they'd budgeted for. They're growing faster than their projections, handling more users than planned, and they still haven't hired engineers five and six. Not because they're understaffed - because they don't need them. Each of their four engineers, equipped with AI tools, outputs what two engineers did in 2022.
The revolution isn't coming. It's here. It's in production. It's in every company I talk to, every developer I know, every codebase being written today.
Cursor, an AI-first code editor, has become the tool of choice for a generation of developers who never knew coding without AI. They don't remember the before times. They learned to code by describing what they wanted and watching AI generate it. And you know what? They're productive. They ship features. They solve problems.
They just solve them differently than we did.
The tools keep getting better too. Claude can now understand entire codebases, not just individual files. It can refactor across multiple files, maintain consistency, suggest architectural improvements. Cursor can predict not just your next line but your next intention. GitHub Copilot X can explain code, write tests, fix bugs, and generate documentation - all while you watch.
This isn't a prototype in a lab. This is the daily reality of software development in 2025. And the pace of improvement is accelerating, not slowing down. Every month these tools get smarter, faster, more capable. Every month, the gap between AI-assisted developers and those working without AI widens.
The question isn't whether this transformation is happening. The question is what we do about it.
The Skill Shift
I had to relearn my job without anyone telling me that's what I was doing.
It started subtly. I'd write a comment describing what I wanted to do, and Copilot would generate the implementation. At first, I'd review every line, checking for errors, making sure I understood it. But gradually, I realized I was spending more time writing those descriptive comments than I was writing actual code. The comments became the work. The code became the output. Honestly, I don't think I'm writing code anymore.
That required a different skill set. Instead of thinking, "How do I implement this?" I was thinking, "How do I describe this clearly enough that the AI generates what I actually need?" It's a subtle distinction, but it changes everything.
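An invented but representative example: the comment carries the intent, and the function under it is the kind of thing the assistant fills in.

```python
# Parse an ISO-8601 date string. Return None instead of raising on bad
# input, because callers treat a missing date as "unknown", not an error.
from datetime import date
from typing import Optional

def parse_date(raw: str) -> Optional[date]:
    try:
        return date.fromisoformat(raw.strip())
    except (ValueError, AttributeError):
        return None
```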
Good prompt engineering isn't just about being clear - though that helps. It's about understanding what the AI can and can't do, what context it needs, what assumptions it might make. It's about breaking complex problems into pieces that AI can handle, then orchestrating those pieces into a coherent whole.
I've started thinking of myself less as a coder and more as a conductor. I'm orchestrating multiple AI systems - one for code generation, another for debugging, another for documentation, another for testing. My job is to direct them, to make sure they're working together, to catch when one makes a mistake that affects the others.
And architecture? Architecture has become everything.
When implementation was expensive, you could get away with mediocre architecture because changing it later was only slightly more painful than getting it right the first time. Now? Implementation is free. You can generate ten different implementations in the time it used to take to write one. But if your architecture is wrong, all ten implementations are wrong.
I spend more time now thinking about system design than I ever did before. How should these services communicate? What's the right level of abstraction? Where are the natural boundaries? What are the failure modes? These questions used to be important but optional - you could ship with okay architecture and fix it later. Now they're essential, because the architecture is the only part that's actually hard.
A senior engineer at my company told me something that stuck: "Code is no longer the product of engineering. System design is the product. Code is just how we materialize the design, and AI handles the materialization."
That reframing helped. I stopped feeling like I was losing my job and started feeling like my job was evolving into something that actually matched what senior engineers always told me the job was supposed to be: thinking about systems, not syntax.
But here's what no one talks about: this is a harder job. Writing code was straightforward - you either got it working or you didn't. Directing AI requires judgment. You need to know when the AI's solution is good enough, when it's wrong but fixable, and when you need to scrap it and try a different approach. You need to understand not just what works, but what works well, what will scale, what will be maintainable.
You need to be better at engineering to be an AI-assisted engineer. Not worse. The bar went up, not down.
The New Bottlenecks
"Can we build this?" used to be a real question. Now it's almost insulting to ask. Of course we can build it. We can build anything. The AI will generate whatever code we describe, in whatever language, using whatever framework. Implementation is no longer the constraint.
The question that actually matters is: "Should we build this?"
And that's a completely different question. It requires understanding the business, understanding users, understanding trade-offs. It requires judgment that no AI can provide because the AI doesn't know your company's strategy, your users' actual needs, or the technical debt you're already carrying.
I watched a junior developer - one of the last ones we hired before the freeze - use Claude to build an entire microservice architecture in a day. Impressive, right? Except we didn't need a microservice architecture. We needed a simple CRUD API that could be maintained by a small team. His solution was technically sophisticated and completely wrong for the problem.
The AI built what he asked for perfectly. But he asked for the wrong thing.
That's the new bottleneck: knowing what to ask for. Understanding the actual problem deeply enough that you can direct the AI toward the right solution, not just a solution. This requires context that takes years to develop - understanding how systems fail, how users actually use software, how technical decisions have consequences months or years later.
Security has become a major concern in ways it never was before. When humans wrote code slowly, line by line, we had time to think about security implications. We'd pause and consider: "Wait, am I validating this input? Could this be exploited?" When AI generates hundreds of lines in seconds, those pauses don't happen naturally. You have to deliberately review for security issues, and you have to know what to look for.
I've seen AI-generated code with SQL injection vulnerabilities, with authentication bypasses, with exposed secrets. Not often, but enough that blind trust is dangerous. The AI doesn't understand security the way an experienced engineer does. It knows patterns, but it doesn't understand adversarial thinking.
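The SQL injection case is worth making concrete, because it's exactly what slips through when you skim generated code. A minimal sqlite3 demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "x' OR '1'='1"  # attacker-controlled input

# The pattern I've caught in generated code: interpolating user input.
rows = conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()
print(rows)  # [('alice', 'admin')] - the injection matched every row

# What the review exists to enforce: parameterized queries.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] - the input is treated as data, not SQL
```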
User experience is another bottleneck. AI can implement any UI you describe, but it can't tell you if that UI is actually good. It can't tell you that your five-step form would be better as one page, or that your navigation makes sense to you but will confuse users, or that you're asking for information you don't actually need.
And then there are the AI's own limitations. It hallucinates libraries that don't exist. It suggests patterns that worked in older versions of frameworks but are deprecated now. It confidently generates code that looks right but subtly doesn't do what you think it does. You need to understand these failure modes, to know when to trust the AI and when to verify everything.
The irony is that as implementation gets easier, everything else gets harder. Or maybe not harder - maybe just more visible. Maybe these were always the hard parts, but we didn't notice because we were too busy fighting with syntax and reading documentation.
Now that the easy parts are automated, all that's left is the hard stuff. And the hard stuff requires expertise, judgment, and experience that can't be automated. Not yet, anyway.
Rebirth
Engineering as Strategic Discipline
I asked a senior engineer who's been in the industry for about ten years if he felt threatened by AI, if he worried about his job. He laughed, not in a dismissive way, but like I'd asked if he was worried about calculators replacing mathematicians.
"My job was never writing code," he said. "That was just the medium. My job is judgment."
I've been thinking about that a lot. Judgment. It's an old-fashioned word, almost quaint in an industry that loves to quantify everything. But the more I use AI tools, the more I realize that judgment is the only thing that actually matters now.
When implementation is free, the value is in knowing what to implement. When you can generate any solution instantly, the value is in knowing which solution is right. When you can build anything, the value is in knowing what's worth building.
This is strategic thinking, not technical execution. It's asking "why" before asking "how." It's understanding the business problem deeply enough that you can propose solutions that actually move the needle. It's having enough context about users, markets, and technology trends that you can make bets about what will matter in six months or a year.
I used to think this was product management's job, not engineering's. I was wrong. The best engineers were always doing this - they just had to do the implementation too. Now that implementation is handled, the strategic thinking isn't optional anymore. It's the entire job.
This means engineers need to develop skills we used to think were outside our domain. We need to understand business models and unit economics. We need to talk to users and understand their actual problems, not just their feature requests. We need to think about go-to-market strategy, about what gives a product a competitive advantage, about technical decisions as business decisions.
A friend who works at a Series B startup told me their engineering team now sits in on strategy meetings. Not to take notes or answer technical feasibility questions, but as full participants in deciding what the company should build next. Because the engineers are the ones who understand both what's technically possible and what's technically maintainable, and in a world where you can build anything, those constraints are what matter.
The engineers who thrive now aren't the ones who can implement the most elegantly. They're the ones who can look at a business problem and say, "Here's what we should build, here's why, and here's the simplest architecture that will work." The implementation is assumed. The judgment is valuable.
I'm still learning this. Still training myself to pause before diving into code, to ask more questions, to think strategically instead of tactically. It's harder than writing code ever was, because there's no right answer you can verify by running tests. There's only judgment, and judgment takes experience.
But this is what engineering is becoming: a strategic discipline where technical knowledge enables better business decisions, not where business requirements get translated into code.
The AI-Native Engineer
So what does an engineer actually do all day in 2025?
I'll tell you what I did yesterday for a class project, because I think it's representative of what the actual job has become:
Started the morning by reviewing the requirements for our software engineering project. Spent 45 minutes thinking about the architecture: Where should this logic live? What's the right level of abstraction? What are the failure modes? Also, how am I going to scaffold my prompt? Didn't write any code. Just thought and drew diagrams and asked questions.
Then I opened Claude and described what I wanted to build based on that architecture. Watched AI generate the skeleton - database schema, API endpoints, basic logic. Took maybe 10 minutes. I reviewed it, caught a few issues with the error handling, adjusted the prompt, regenerated. Another 5 minutes.
Spent the next hour not writing code, but orchestrating. Used Claude again to generate tests. Used another AI to write documentation. Used Copilot to fill in some business logic that needed context from other parts of the codebase. My job was directing these different AI systems, making sure they were all working toward the same goal, maintaining consistency.
Found a bug? Described the problem to Claude, it generated a fix, I reviewed it, made a small adjustment, done. Five minutes total. Two years ago, I might have spent an hour debugging this, adding print statements, rerunning tests.
Then came the hard part: reviewing my teammate's code. Not code they wrote by hand, but code their AI had generated. I spent two hours on this. Not because the code was wrong, but because I had to understand the implications. Does this approach scale? Is it secure? Will we be able to maintain this next month when we add more features? Does it fit our architecture? These questions take time and expertise to answer.
Afternoon was a team meeting about our project roadmap. We discussed upcoming features and pushed back on complexity. Another meeting to design how our different components would integrate. Not a line of code written in any of these meetings, but these meetings determined what code would get written and why.
End of the day: my professor wanted to see if a certain feature was feasible for our project scope. I used Claude to generate three different approaches in about 30 minutes. Showed them to my team, got feedback, iterated. By the end of an hour, we had a working prototype that answered the question. In 2022, this would have taken me three days. Now it's an afternoon.
And here's the thing that keeps me up at night: I'm still a student. I'm supposed to be learning. But what am I learning? I'm not grinding through implementation problems. I'm not building that deep muscle memory of how systems work from the ground up. I'm learning to be an orchestrator, a reviewer, a prompt engineer.
Is that enough? When I graduate, will I have the judgment that comes from years of making implementation mistakes? Or will I be able to direct AI effectively but lack the foundational understanding that makes that direction meaningful?
I talk to my friends who graduated two or three years ago, who spent their first year fixing bugs and implementing features by hand, and they have instincts I don't have. They can look at code and feel when something's wrong, even if they can't immediately articulate why. They built that through repetition, through making mistakes and fixing them, through the grunt work I'm skipping.
That's what an AI-native engineer does. We orchestrate. We judge. We review. We design. We prototype at speeds that were impossible before. We spend less time fighting with syntax and more time thinking about systems.
The work is more cognitive and less manual. More strategic and less tactical. More about understanding and judgment and less about implementation and execution.
Is it better? I don't know. Is it easier? Absolutely not - it's harder, because judgment is harder than execution. But it's what the job is becoming. And the engineers who will thrive are the ones who accept this and develop these new skills, rather than clinging to the old ones.
But I still wonder: can you learn judgment without first learning execution? Can you skip the struggle and still gain the wisdom? Or are we creating a generation of engineers who can direct AI beautifully but don't understand what we're directing it to do?
New Technical Frontiers
But here's what surprised me: even as implementation becomes automated, new technical challenges are emerging. They're just different challenges.
Understanding AI capabilities and constraints is now a core engineering skill. You need to know what models are good at, what they struggle with, what causes them to hallucinate. You need to understand context windows, token limits, when to use which model for which task. This is specialized knowledge that takes time to develop.
I've become something like an AI whisperer. I know that Claude is better at architectural thinking but sometimes overthinks simple problems. I know that Copilot is fast but sometimes suggests outdated patterns. I know when to give more context and when less is more. This knowledge is valuable because it makes me more effective at using these tools.
Security review of AI-generated code has become its own discipline. We're developing patterns and checklists: Did the AI properly validate inputs? Did it handle authentication correctly? Are there any hardcoded secrets? Does it sanitize user data? It's not enough to know the code works - you need to know it works safely.
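One checklist item, sketched in Python (PAYMENTS_API_KEY is an invented name):

```python
import os

# Flagged in review: a credential baked into the source.
# API_KEY = "sk-live-..."  # never ship this

# What the checklist asks for instead: read it from the environment
# and fail loudly at startup if it's missing.
API_KEY = os.environ.get("PAYMENTS_API_KEY")
if API_KEY is None:
    raise RuntimeError("PAYMENTS_API_KEY is not set")
```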
System architecture in an AI-augmented world is different too. When you can generate microservices quickly, the temptation is to generate too many. When you can add features fast, the temptation is to add too many features. Architecture becomes about restraint, about saying no, about keeping things simple even when complexity is easy.
One of our senior architects told me: "My job used to be designing systems that developers could implement. Now it's designing systems that developers and AI can maintain." That's a different challenge. It requires thinking about clarity and simplicity in new ways.
Performance optimization hasn't gone away - it's just changed. AI-generated code isn't always optimal. It works, but it might not work efficiently. I spend time now profiling AI-generated code, identifying bottlenecks, understanding why the AI chose certain approaches. Sometimes I regenerate with better prompts. Sometimes I optimize by hand. But I need to understand performance at a deep level to know when and how to intervene.
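The mechanics are mundane - profile first, then decide whether to re-prompt or optimize by hand. In Python that's a few lines; slow_dedupe here is an invented stand-in for a generated function:

```python
import cProfile
import pstats

def slow_dedupe(items):
    # The kind of thing generated code gets subtly wrong: O(n^2)
    # membership tests against a list instead of a set.
    seen = []
    out = []
    for x in items:
        if x not in seen:  # linear scan on every iteration
            seen.append(x)
            out.append(x)
    return out

profiler = cProfile.Profile()
profiler.enable()
slow_dedupe(list(range(5000)) * 2)
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```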
And then there's the meta-problem: understanding when not to use AI. Some problems are still better solved by thinking deeply and writing carefully. Some code needs to be handcrafted because it's so critical, so performance-sensitive, so security-conscious that AI isn't trustworthy enough yet.
Knowing the difference - knowing when to augment your work with AI and when to set it aside - that's judgment again. And it's not something you can learn from a tutorial. It comes from experience, from making mistakes, from seeing what works and what doesn't.
The technical frontier hasn't disappeared. It's just moved. We're not fighting with compilers anymore. We're wrestling with how to effectively collaborate with AI, how to maintain quality at unprecedented speed, how to build systems that are simple enough to understand but powerful enough to matter.
These are new problems. Hard problems. The kind of problems that can't be solved by just being good at writing code. They require a different kind of technical excellence - one that we're all still figuring out together.
Who Survives?
The Narrowing Gateway
I have a younger friend who's a sophomore studying computer science. Bright kid, genuinely passionate about programming, the kind of person who codes for fun on weekends. Last month, they asked me for advice about internships.
I didn't know what to tell them.
The internships that existed when I was a sophomore - the ones where you'd spend a summer fixing bugs, writing tests, implementing small features under supervision - those are vanishing. The companies that used to hire 20 interns and convert 10 of them to full-time now hire 5 interns and convert 2. And those 5 aren't normal students. They're the ones with viral GitHub projects, competition wins, research publications. The exceptional ones.
My friend is good. But they're not exceptional. In 2020, good would have been enough. In 2025, I'm not sure it is.
Here's the catch-22 that's breaking the traditional path into engineering: you need experience to get hired, but the jobs that give you experience don't exist anymore. Entry-level positions vanished because AI does entry-level work. Companies don't need someone to write boilerplate, fix simple bugs, or implement straightforward features. AI handles that faster and cheaper.
But without those entry-level positions, how do you develop the judgment, the intuition, the pattern recognition that makes you valuable? You can't learn to be a senior engineer without first being a junior engineer. But if no one hires juniors, where do seniors come from?
The industry hasn't figured this out yet. We're running on momentum, hiring from the pool of engineers who got in before AI, who learned the traditional way. But that pool is finite. In five years, when we need to hire senior engineers, where will they have come from? Who will have trained them?
Right now, the only people breaking into engineering are the exceptional ones. The self-taught prodigies who built something impressive enough to prove they don't need traditional training. The students from top schools with multiple internships and standout projects. The career changers who bring valuable domain expertise from another field.
If you're just good - just competent, just willing to learn - there's no obvious path anymore. The gateway narrowed from a door to a crack, and only the thinnest can slip through.
I think about this a lot because I got in right before the door closed. If I were applying now, with my actual skills and portfolio, would I make it? I honestly don't know. And that terrifies me, not just for myself, but for everyone like my friend who's doing everything right but might not get a chance.
The Senior Boom
Meanwhile, if you're already senior, you've never been more valuable.
A friend who's a senior engineer at a fintech company just got three competing offers, each one offering compensation that would have seemed absurd three years ago. They weren't even looking. Recruiters found them. Because companies are desperate for senior talent in a way they've never been desperate for junior talent.
Here's why: a senior engineer with AI tools is a force multiplier in a way that was never possible before. They can accomplish what used to take a whole team. They have the judgment to direct AI effectively, the experience to catch its mistakes, the intuition to know when to trust it and when to verify. They can move fast without breaking things.
The economics are simple and brutal: one senior engineer with Claude and Cursor can outproduce five junior engineers without AI. They're faster, they make better decisions, they avoid costly mistakes, and they don't need supervision. From a company's perspective, why hire five juniors when you can hire one senior?
This isn't theoretical. I've watched it happen. Teams are shrinking but becoming more effective. The engineers who remain are more senior, more expensive, and more productive than ever. Companies are competing viciously for these people because they're the bottleneck now. Not implementation capacity - you can generate infinite code. The bottleneck is judgment, and judgment comes from experience.
Senior engineers I know are finally getting the respect and compensation they deserve. They're being included in strategic decisions, given more autonomy, trusted to make calls that affect the whole company. It's what many of them always wanted - to be valued for their thinking, not just their typing.
But there's a dark side to this boom: it's unsustainable. Today's senior engineers learned by being yesterday's junior engineers. If we stop hiring juniors, where do tomorrow's seniors come from? The industry is eating its seed corn, burning through experienced talent while failing to grow more.
Every senior engineer I talk to recognizes this problem. They remember being junior, remember learning from repetition and mistakes, remember the patient seniors who mentored them. They want to pay it forward. But how do you mentor someone when there's no one to hire? How do you train juniors when there are no junior positions?
The boom feels good if you're senior. But it feels hollow too, because we can see the cliff we're walking toward.
The Missing Middle
The people no one talks about are the mid-level engineers. Not senior enough to be force multipliers. Not junior enough to be obviously obsolete. Just… stuck.
I know someone who's been a mid-level engineer for four years. Solid contributor, knows their stack, ships features reliably. The kind of engineer who forms the backbone of most teams - not the superstars, but the steady, dependable ones who keep things running.
Last month, they asked their manager about promotion to senior. The manager was honest: "Your technical execution is great, but we need strategic thinking. We need someone who can design systems, make architectural decisions, drive technical direction. Can you do that?"
They weren't sure. Because they'd spent four years implementing features that others designed. They were good at execution, which used to be enough. Now execution is commoditized, and they're competing with AI for relevance.
This is the squeeze. Mid-level engineers who built their careers on technical execution are finding that execution isn't valuable anymore. They need to evolve into strategic thinkers, into leaders, into the kind of engineers who provide judgment and direction. But that transition is hard, and not everyone can make it.
The new career path looks less like a ladder and more like a cliff with a few handholds. You either climb to senior - developing judgment, strategic thinking, leadership - or you plateau. There's no comfortable middle anymore where you can be a solid implementer for a decade. Implementation is free.
Some mid-level engineers are adapting beautifully. They're learning to think architecturally, to ask better questions, to provide the judgment that AI can't. They're becoming the seniors of tomorrow. But others are struggling, caught between the skills they have and the skills they need, unsure how to make the leap.
And the timeline is brutal. You can't take years to gradually develop strategic thinking. The transformation is happening now. Teams are restructuring now. The engineers who don't adapt quickly risk being left behind - not fired necessarily, but stagnant, watching opportunities go to those who evolved faster.
The path from junior to senior used to take 5–7 years of steady progression. Now it's more like: exceptional to get in, then 2–3 intense years to prove you can think strategically, or you're stuck. The middle rungs of the ladder are disappearing, leaving only the bottom (which you can't reach) and the top (which you need to reach quickly).
It's Darwinian in a way the industry has never been before. Not survival of the fittest, but survival of the most adaptable. The engineers who can evolve their skills as fast as the tools are evolving will thrive. The rest… I don't know what happens to the rest.
What This Means For…
Current Students and Bootcamp Grads
I need to be brutally honest with you: the path I took into engineering probably doesn't exist anymore. I don't say this to discourage you, but because false hope is worse than hard truth.
The entry-level positions that used to absorb new grads - they're mostly gone. The bootcamp-to-junior-dev pipeline - it's broken. If you're counting on doing a bootcamp, building a portfolio, and landing a job, you need a backup plan. That path worked for thousands of people. It might not work for you.
So what do you do?
First: specialize early and go deep. Don't try to be a generalist who knows a bit of everything. Pick a domain that matters - AI/ML, security, infrastructure, data engineering - and become genuinely knowledgeable. Companies will hire inexperienced people if they bring specialized expertise that's hard to find.
Second: build in public. Don't just build projects - build projects that other people use, that solve real problems, that demonstrate judgment and taste. Your GitHub shouldn't show that you can code (AI can code). It should show that you can identify problems worth solving and solve them in thoughtful ways.
Third: focus on problems, not code. Learn to think like a product engineer, not a code monkey. Understand users, understand businesses, understand why things get built. The engineers getting hired now are the ones who can think strategically from day one.
But also, consider alternative paths. Technical product management might be more accessible than engineering. Developer relations and advocacy need people who understand code but spend time on communication. AI training and evaluation need people who can assess code quality. Technical writing needs people who can explain complex systems.
These aren't consolation prizes. They're legitimate careers that value similar skills. And they might be more accessible right now than traditional engineering roles.
If you're currently in school: take systems classes seriously, learn to think architecturally, develop taste and judgment, study the business side of tech. You're not learning to write code anymore - you're learning to think about software. Big difference.
If you're considering a bootcamp: do your research. Talk to recent grads. Look at actual placement rates, not marketing materials. Understand that you're competing with AI for the same roles bootcamps traditionally filled. That doesn't mean don't do it, but go in with open eyes.
And if you're struggling to break in: it's not your fault. The industry changed faster than anyone expected. That doesn't make it fair. But understanding what changed might help you adapt.
Working Engineers
If you're already employed as an engineer, you're in a better position than those trying to break in. But don't get comfortable. The skills that got you here might not be the skills that keep you relevant.
Here's what matters now:
Embrace AI tools fully. Not reluctantly, not with one foot in the past, but completely. Learn to use them well. Experiment with different tools. Develop intuition for when to use AI and when to think independently. The engineers who resist AI will become gradually less competitive than those who master it.
Develop judgment deliberately. You can't coast on technical execution anymore. Practice thinking strategically: Why are we building this? What's the simplest solution that could work? What are the long-term consequences? What are we optimizing for? These questions need to become second nature.
Think like a product engineer, even if that's not your title. Understand the business. Talk to users. Learn what drives value. The engineers who understand both technology and business are the ones who'll be indispensable.
Build a moat around your expertise. What do you know that AI can't replicate? What context do you have that's valuable? What relationships have you built? What judgment have you developed? That's your defensible position.
Continuous learning is no longer optional. The tools change every month. Best practices evolve constantly. What worked last year might be obsolete today. Set aside time every week to learn, experiment, stay current.
And mentor, if you can. If you're senior enough to have junior engineers, invest in them. Yes, there are fewer of them. Yes, they might not stay long. Do it anyway. The industry needs to figure out how to train the next generation, and that starts with individuals who care enough to try.
If you're mid-level and feeling stuck: the time to develop strategic thinking is now. Take on projects that require architectural decisions. Volunteer to design systems. Practice articulating why things should be built certain ways. Make the leap to strategic thinking before you're forced to.
Companies and Engineering Leaders
If you're hiring engineers or leading engineering teams, you're navigating unprecedented change. The old playbooks don't work. You need new ones.
Rethink your team structure. The traditional pyramid - many juniors, fewer mid-level, a handful of seniors - doesn't make sense anymore. You might need an inverted pyramid: mostly seniors, a few mid-level, almost no juniors. That has cost implications, but it might be more effective than the alternative.
Invest heavily in senior talent. Yes, they're expensive. They're worth it. One great senior engineer with AI tools can accomplish what used to take a team. And they'll make better decisions, which compounds over time.
But also think about sustainability. If you stop hiring juniors entirely, where will your future seniors come from? Maybe you need a small apprenticeship program - not traditional junior roles, but structured learning where a few exceptional people can develop judgment under close mentorship. It's expensive and slow, but it's investment in your future pipeline.
Consider hybrid roles. Maybe you need people who are half product manager, half engineer. Or half engineer, half technical writer. The boundaries are blurring. Be open to that.
Evaluate engineers differently. Technical interviews that test coding speed and algorithm knowledge are less relevant. You need interviews that test judgment, strategic thinking, ability to work with AI, architectural intuition. That's harder to assess, but it's what actually matters now.
And think carefully about your relationship with AI tools. Which ones do you adopt? How do you govern their use? How do you ensure quality when code is generated so quickly? How do you maintain security? These are new problems that need thoughtful solutions.
The Industry at Large
Here's the question that keeps me up at night: if we stop hiring juniors, who trains the next generation of seniors?
The industry is running on momentum right now. We're hiring from the pool of engineers who learned the traditional way - who spent years doing implementation work, making mistakes, developing judgment. But that pool isn't renewing itself. We're consuming experienced talent faster than we're creating it.
In five years, when today's seniors retire or move to management, where will we find their replacements? You can't just skip the learning phase. Judgment comes from experience, and experience takes time. If we don't provide opportunities for people to gain experience, we're setting up a crisis.
Some potential solutions:
Apprenticeship models, where companies invest in training a small number of people intensively. Expensive, but necessary. Think of it less like hiring and more like growing your own talent.
AI-assisted mentorship programs, where people learn by working closely with AI and experienced engineers together. Maybe AI can help compress the learning timeline while senior engineers provide the judgment that AI lacks.
New educational models that teach strategic thinking and judgment from the start, rather than focusing on syntax and implementation. Maybe we need engineering schools that look more like business schools - case studies, strategic analysis, decision-making under uncertainty.
Or maybe we need to accept that engineering will become a smaller, more elite profession. Fewer people, higher barriers to entry, more concentration of talent and compensation. That has social implications we should think about carefully.
There's also a fairness question. Software has been one of the most accessible paths to the middle class over the past two decades. Smart, motivated people from any background could learn to code and build solid careers. If that opportunity is closing, if engineering becomes only accessible to the exceptional or the already-privileged, we're losing something valuable.
I don't have solutions. I'm not sure anyone does yet. But these are questions the industry needs to grapple with, soon, before the crisis becomes acute.
Long Live Engineering
I started this article angry and grieving. I'm ending it uncertain but hopeful - or maybe hopeful despite the uncertainty.
Engineering as we knew it is dead. The craft of writing code line by line, the career ladder of steady progression, the clear identity of what an engineer does - all of that is gone or going. And I'm allowed to mourn it. You're allowed to mourn it too.
But something is being born in its place. Something that might actually be closer to what engineering was always supposed to be, before implementation took over.
The best engineers I've known were never the ones who wrote the cleverest code. They were the ones who asked "why" before "how." The ones who understood what problem they were actually solving. The ones who could look at a complex system and see the simple solution. The ones whose judgment you trusted.
Implementation obscured this truth because it was time-consuming and necessary. We conflated typing with thinking. But now that typing is free, we're forced to confront what engineering actually is: the application of judgment and creativity to solve meaningful problems.
That's harder than coding ever was. It requires you to be better at thinking, better at communicating, better at understanding humans and businesses and systems. It requires wisdom, not just knowledge. Experience, not just skill.
And yes, it's more exclusive. Yes, it's harder to break into. Yes, the transformation is brutal for people at certain stages of their careers. I'm not pretending otherwise. But the solution isn't to reject AI or cling to the past. The solution is to figure out how to develop judgment and strategic thinking faster, how to train the next generation despite the changed landscape, how to build a sustainable path forward.
For those of us already in the field: this is a call to evolve. Embrace the tools. Develop judgment deliberately. Think strategically. Learn continuously. Mentor when you can. Adapt or become irrelevant - it's that simple and that harsh.
For those trying to break in: it's harder now, but not impossible. Specialize deeply. Build things that matter. Develop strategic thinking early. Prove you can provide judgment, not just execution. Be exceptional, because good isn't enough anymore.
For the industry: we need to solve the sustainability problem. We need to figure out how to train future engineers when the traditional training ground has disappeared. We need to think beyond quarterly profits to long-term ecosystem health.
The transformation is happening whether we like it or not. AI isn't going away. It's not going to stop improving. The genie doesn't go back in the bottle. So we have two choices: resist and become obsolete, or adapt and figure out what we're becoming.
I'm choosing to adapt. Not happily - I'll always miss the craftsmanship, the flow state, the satisfaction of building something from nothing with just my mind and my hands. But I'm accepting that this is the world now.
Engineering isn't dying. It's transforming into something that demands more of us, not less. More strategic thinking. More judgment. More wisdom. More humanity, actually - because what's left after AI handles implementation is the distinctly human work of deciding what's worth building and why.
In eliminating the mundane, AI is revealing what engineering was always supposed to be: the thoughtful, strategic, creative application of technology to meaningful problems. We just got distracted for a few decades by the mechanics of it all.
The king is dead. Long live the king.
And maybe, just maybe, the new king is better than the old one. We'll find out together.
Oh, you stayed with me to the end? Then don't be surprised to hear that this article was drafted by me and written with the help of an AI model. The evidence is everywhere around you. Raise your head, accept the truth.