DEV Community

Luke Tong
Rethinking the software engineer's role in the era of LLMs, agents, and intelligent automation

As AI tools like Copilot and agent-based frameworks evolve, I’ve noticed something quietly shifting in my day-to-day workflow. I no longer see myself as someone who writes code line-by-line. Instead, I break down my objectives into well-defined tasks, hand them off to an agent, and then—critically—I review the outcome.

In other words, my role is transforming: from engineer-as-implementer to engineer-as-orchestrator.
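That decompose → delegate → review loop can be sketched in a few lines of Python. This is a toy illustration of the shape of the workflow, not any real tool's API: `agent_complete` is a hypothetical stand-in for whatever LLM or agent call you actually use, and the review gate is deliberately a stub where tests, lint, and human reading would go.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    acceptance: str  # what "done" looks like, so review is checkable


def agent_complete(task: Task) -> str:
    """Hypothetical stand-in for a real LLM/agent call."""
    return f"# solution for: {task.description}"


def review(output: str, task: Task) -> bool:
    # Stub review gate: a real one would run tests, lint,
    # and require a human to actually read the diff.
    return task.description in output


def orchestrate(tasks: list[Task]) -> dict[str, str]:
    """Decompose -> delegate -> review. Rejected outputs are
    flagged for human attention instead of merged silently."""
    results = {}
    for task in tasks:
        output = agent_complete(task)
        accepted = review(output, task)
        results[task.description] = output if accepted else "NEEDS HUMAN REVIEW"
    return results
```

The important design choice is that nothing reaches `results` without passing through `review` — the orchestrator role lives in that gate, not in the delegation.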


The Core Skills Are Still Essential

AI can generate high-quality code — sometimes better than mine — but I’ve learned that doesn’t absolve me of the need for strong fundamentals. On the contrary:

  • I still need to make micro-decisions.

    Even the best-generated code often requires judgment calls on architecture, trade-offs, and edge cases.

  • I need to spot AI hallucinations.

    LLMs can confidently produce subtle but critical logical errors. Recognizing them is a senior engineer’s job.

  • I need to deeply review what I didn't write.

    If I don't understand the code, I can't guarantee its correctness—or maintainability.
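For concreteness, here is the kind of subtle defect I mean — a made-up example, not real assistant output. The first function looks plausible at a glance, yet it silently mutates the caller's list and mishandles even-length input; the reviewed version fixes both.

```python
def median_buggy(xs):
    """Plausible-looking generated code with two subtle defects."""
    xs.sort()                    # mutates the caller's list in place
    return xs[len(xs) // 2]     # wrong for even-length inputs


def median(xs):
    """Reviewed version: no mutation, correct for even lengths."""
    s = sorted(xs)               # sort a copy, leave the input alone
    n = len(s)
    mid = n // 2
    if n % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2
```

Neither defect would show up in a happy-path demo with odd-length input — which is exactly why "I ran it and it worked" is not a review.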


The Paradox: Great Code That’s Incomprehensible

Sometimes the code produced by my AI assistant is too good. It’s elegant. Modular. Efficient. But so abstract or deeply nested that it becomes difficult for a human engineer—even an experienced one—to fully grasp in a short time.

Now here’s the problem: if such “perfect-looking” code contains a tiny mistake, the resulting bug becomes both unpredictable and hard to trace. Worse still, most debugging tools aren’t designed to help you untangle code you didn’t write.

That’s not just a problem. That’s a production risk.
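A toy example of my own (not actual assistant output) of what "too elegant" looks like in practice: two functions that find the most frequent word in a text. The dense one is compact and clever; the readable one says what it does.

```python
from collections import Counter
from functools import reduce


def top_word_dense(text):
    """Compact, 'elegant', and hard to audit at a glance."""
    return max(
        reduce(lambda acc, w: {**acc, w: acc.get(w, 0) + 1},
               text.lower().split(), {}).items(),
        key=lambda kv: (kv[1], kv[0]),
    )[0]


def top_word_readable(text):
    """Same job, stated plainly."""
    counts = Counter(text.lower().split())
    return counts.most_common(1)[0][0]
```

They agree on this input, but they do not agree everywhere: on tied counts, `max` with that key prefers the alphabetically last word, while `most_common` keeps first-seen order. That is precisely the kind of divergence a reviewer has to dig out of the dense version — and the kind of bug that becomes unpredictable in production.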


Rethinking Interviews: Prompt Engineering > Syntax

If I were hiring today, I wouldn’t focus on whether a candidate knows every language feature. Instead, I’d ask:

  • Can you break down an ambiguous, multi-part requirement into manageable components?
  • Can you design high-leverage prompts to get useful results from an agent?
  • Can you architect systems where humans and agents collaborate meaningfully?

In that sense, prompt literacy is becoming just as important as code literacy.
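To make "high-leverage prompt" concrete, here is a hypothetical sketch (the helper and its fields are my own invention, not any tool's API): a prompt that pins down goal, interface, constraints, and acceptance criteria gives an agent far less room to wander than "write me a parser."

```python
def build_prompt(goal: str, interface: str,
                 constraints: list[str], done_when: str) -> str:
    """Assemble a scoped prompt: the agent gets a contract, not a vibe."""
    lines = [
        f"Goal: {goal}",
        f"Implement exactly this interface: {interface}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Done when: {done_when}",
    ]
    return "\n".join(lines)


prompt = build_prompt(
    goal="Summarize a CSV file",
    interface="summarize(path: str) -> dict",
    constraints=["standard library only", "no global state"],
    done_when="the provided unit tests pass",
)
```

The decomposition skill is in choosing those four fields well; the string assembly is trivial by design.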


The Road Ahead: Human-in-the-Loop Programming

We’re not moving toward a future where developers become obsolete.

We’re moving toward one where developers become more human:

  • Less typing, more thinking
  • Less repetition, more orchestration
  • Less syntax, more semantics

I don’t know where AI will take us next. A year ago, the tools I use today didn’t even exist. But I do know this:

I no longer measure myself by how much code I can write. I measure myself by how well I can direct, verify, and refine what AI writes.

That’s the new senior engineer mindset.


Final Thought

“It’s not just about writing code anymore. It’s about managing meaning across agents, tools, and human intent. That’s engineering at the next level.”
