
Brian Ting

Software Engineering in 2026: A View From the Server Room

I read Ben Congdon's article "Software Engineering in 2026" and found myself nodding along to many of his points. But I also noticed something: the conversation around AI and software engineering often feels like it's happening in a different world from mine.

I'm a Full-Stack Engineer managing distributed systems across 15+ production sites in Malaysia. My daily work isn't only about building shiny new features; it's also about keeping things running, handling database migrations at 2 AM, and making sure our 99%+ uptime stays that way. I basically wear multiple hats: writing specs, developing products, reviewing code, delivering products, and ensuring everything works without issue.

Ben's article got me thinking: What does the AI revolution in software engineering look like from the server room?

Where I Agree: AI Has Changed the Game

Let me be clear first. AI tools have genuinely changed how I work. The "marginal cost of producing high quality code has gone down," as Ben puts it. This is real.

Here's where I've seen the biggest impact in my daily work:

Writing boilerplate code. Setting up a new FastAPI endpoint with proper validation, error handling, and documentation? What used to take 30 minutes now takes 5. AI handles the repetitive parts while I focus on the business logic.
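
To make that concrete, here's a minimal sketch of the kind of boilerplate I mean. The route and model are hypothetical, invented purely for illustration:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()

# Hypothetical request model -- field names invented for illustration.
class SiteReport(BaseModel):
    site_id: int = Field(..., gt=0, description="Production site identifier")
    metric: str = Field(..., min_length=1, description="Name of the reported metric")
    value: float

_reports: list[SiteReport] = []  # In-memory stand-in for a real datastore

@app.post("/reports", status_code=201, summary="Submit a site metric report")
def create_report(report: SiteReport) -> SiteReport:
    """Validate and store a metric report for one production site."""
    if report.metric not in {"uptime", "latency", "error_rate"}:
        raise HTTPException(status_code=422, detail=f"Unknown metric: {report.metric}")
    _reports.append(report)
    return report
```

AI drafts all of this in seconds. My job is the allowed-metrics set and the business rules around it.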

Code reviews for patterns. When reviewing AI-generated code, I've found that AI is surprisingly good at following established patterns. It rarely invents weird solutions when you give it clear examples.

Documentation. Writing docstrings, API documentation, and README files used to feel like a chore. Now it's almost automatic. I review and edit, but the heavy lifting is done.

So yes, the productivity gains are real. But here's where my experience starts to differ from the mainstream narrative.

The Part Nobody Talks About: Operating Systems

Ben makes an important observation that I wish more people would pay attention to:

"Operating" systems, from what I've seen, has for now been least impacted by LLMs.

This matches my experience exactly. And I'd argue this is the most important part of software engineering that gets overlooked in AI discussions.

When you manage 15+ production sites, you learn quickly that writing code is just the beginning. The real challenge is what comes after. Deployment. Monitoring. Debugging production issues at scale. Handling that one edge case that only appears when three specific conditions align during peak hours.

AI can help me write a database migration script. But can it tell me whether I should run that migration during lunch hour or after midnight? Can it predict how the migration will affect query performance across sites with different data distributions? Can it handle the rollback if something goes wrong?

Not yet. Maybe not for a while.
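
To show where the line falls, here's roughly what such a migration looks like. This is a hedged sketch assuming Alembic and PostgreSQL (my assumption, not a universal recipe), with placeholder revision IDs and an invented index:

```python
"""Add an index on reports.site_id -- hypothetical migration for illustration."""
from alembic import op

# Alembic revision identifiers (placeholder values).
revision = "a1b2c3d4"
down_revision = "f9e8d7c6"

def upgrade() -> None:
    # CONCURRENTLY avoids a long table lock, but cannot run inside a
    # transaction -- exactly the kind of operational detail a human
    # still has to know about their database.
    with op.get_context().autocommit_block():
        op.create_index(
            "ix_reports_site_id",
            "reports",
            ["site_id"],
            postgresql_concurrently=True,
        )

def downgrade() -> None:
    # The rollback path exists in code, but deciding *when* it is safe
    # to run is the judgment call no model makes for you.
    with op.get_context().autocommit_block():
        op.drop_index(
            "ix_reports_site_id",
            table_name="reports",
            postgresql_concurrently=True,
        )
```

The mechanical part is cheap now. The timing, the lock behavior, and the rollback decision are not.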

What Production Experience Teaches You

There's a kind of knowledge you only get from keeping systems alive. Let me share some examples.

Database decisions compound over time. Last year, I made a choice about how to structure a particular table. It seemed like a small decision at the time. A few months later, that decision hurt query performance as the product scaled up. AI can suggest schema designs, but it doesn't understand your specific data access patterns, growth projections, or the quirks of your production workload.
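
The best defense I've found is checking the actual query plan against production-like data for each site, because the same query can plan very differently across data distributions. A small sketch, assuming PostgreSQL and psycopg2, with an invented table and connection string:

```python
import psycopg2

# Hypothetical connection string and query -- substitute your own.
DSN = "dbname=app user=app host=localhost"
QUERY = "SELECT * FROM reports WHERE site_id = %s ORDER BY created_at DESC LIMIT 50"

def explain(site_id: int) -> None:
    """Print the plan Postgres actually chooses for one site's access pattern.

    A query can seq-scan on a small site and index-scan on a large one,
    so run this per distribution, not just once in development.
    """
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + QUERY, (site_id,))
        for (line,) in cur.fetchall():
            print(line)

explain(42)
```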

Network reality differs from theory. I work in Sarawak, where network conditions can be... interesting. Our systems need to handle unstable connections, high latency, and occasional outages. This isn't in any training data. You learn it by debugging failed requests at 3 AM and building systems that gracefully degrade.
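
"Gracefully degrade" sounds abstract, so here's the shape of the pattern I mean: short timeouts, exponential backoff with jitter, and a fallback instead of a crash. The URL and retry numbers are placeholders, not a recommendation:

```python
import random
import time

import requests

def fetch_with_backoff(url: str, attempts: int = 4, timeout: float = 5.0):
    """Fetch over an unreliable link; return None instead of raising."""
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == attempts - 1:
                break
            # Back off 1s, 2s, 4s... plus jitter, so retries from many
            # sites don't stampede the server when the link recovers.
            time.sleep(2 ** attempt + random.random())
    return None

data = fetch_with_backoff("https://example.com/api/status")
if data is None:
    print("Upstream unreachable; falling back to last known state.")
```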

Uptime requires human judgment. When something breaks in production, the first question isn't "what's the fix?" It's "how bad is this?" You need to decide: Do we roll back immediately? Can we apply a quick patch? Should we wake up other team members? These decisions require understanding the business impact, not just the technical problem.

Why Backend Work Feels Different

Ben mentions that "LLMs seem to grok frontend particularly well." I've noticed this too, and I have a theory about why.

Frontend work often has immediate, visible feedback. You change the code, you see the result. This makes it easier to iterate with AI assistance. You can quickly verify if the AI's suggestion works.

Backend work is different. The consequences of your decisions often don't show up immediately. A poorly designed API might work fine today but cause scaling problems in six months. A database index choice might seem irrelevant until you have 10 million rows. A caching strategy might work in development but fail under production load.

This delayed feedback loop makes AI assistance trickier. By the time you realize the AI's suggestion was suboptimal, you've already built a lot on top of it.

The Skills That Matter More Now

So what should backend engineers focus on in 2026? Based on my experience, here's what I'm investing in:

System design and architecture. Ben talks about "crisp human-guided abstractions," and I think this is exactly right. The ability to design systems that are maintainable, scalable, and clear becomes more valuable when code is cheap. Anyone can generate code. Not everyone can design systems that work well over time.

Operational excellence. Understanding monitoring, observability, incident response, and debugging in production. These skills are hard to automate because they require deep context about your specific systems and business needs.

Understanding trade-offs. Every technical decision involves trade-offs. Consistency vs. availability. Performance vs. maintainability. Speed vs. reliability. AI can list options, but understanding which trade-off is right for your situation requires experience and judgment.

CI/CD and deployment practices. Ben emphasizes this too, and I agree completely. When AI is generating more code, your ability to test, validate, and safely deploy that code becomes critical. Investing in solid CI/CD infrastructure pays off even more now.
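
As one concrete illustration, this is the kind of gate I mean: a test that runs on every change, AI-generated or not, before anything deploys. The app here is a hypothetical stand-in; in a real pipeline you would import your own:

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

# Hypothetical app under test -- a real pipeline imports the actual app.
app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

client = TestClient(app)

def test_health_endpoint() -> None:
    """Runs in CI on every commit, before any code reaches production."""
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}
```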

The "Build vs. Operate" Gap

Here's something I think about a lot. Ben asks whether falling code costs will shift the "build vs. buy" calculation. He notes that operating costs haven't fallen the way development costs have.

This creates an interesting situation. It's easier than ever to build something. It's not any easier to run it reliably in production. This gap will only grow.

I expect this means operational expertise becomes more valuable, not less. Companies can spin up new systems faster, but they still need people who can keep those systems running. If you're a backend engineer, this is good news. The skills you've built around production reliability aren't being automated away.

Looking Forward

I don't want to sound like I'm against AI tools. I use them daily. They've made me more productive. But I think we need a more balanced conversation about where AI helps and where human expertise still matters.

Software engineering has always been about more than writing code. It's about understanding problems, making trade-offs, and building systems that work in the real world. AI is changing how we write code, but it hasn't changed those fundamentals.

For those of us working in backend and infrastructure, the message I take from 2025 going into 2026 is this: Keep building your operational skills. Keep learning about system design. Keep gaining production experience. These skills are becoming more valuable, not less.

And if you're managing distributed systems across multiple sites while AI handles some of your boilerplate code? You're in a pretty good position for whatever comes next.


What's your experience with AI tools in backend or infrastructure work? I'd love to hear from others who work in operations-heavy roles. Drop a comment below.

