DEV Community

Shrijith Venkatramana

AI Is Stress-Testing Software Engineering as a Profession

The arrival of large-scale AI systems has triggered two predictably disappointing reactions among a large swath of software engineers: hype and fear.

One group greets AI with unrestrained excitement, extrapolating demos into destiny. This position is usually held by the less experienced or by those with commercial interests tied to the narrative.

Another treats it as an existential threat, assuming wholesale displacement and professional irrelevance. This reaction often comes from more experienced engineers who have grown comfortable in a particular context and are primarily focused on protecting their existing paycheck.

Both responses, in my view, are mistakes. They share a common flaw: neither is a professional response.

Professions are defined not by the tools they use, but by the responsibilities they accept.

Medicine as a field of practice did not go extinct when diagnostic machines improved.

Civil engineering did not dissolve when materials science advanced.

In each case, the profession adapted by elevating standards of judgment, verification, and accountability.

Software engineering is now being asked to do the same.

This moment is not about defending status or losing one’s bearings due to excessive exuberance. AI is an opportunity for the field of Software Engineering to grow up.

The Failure of Hype and Fear

Hype is a failure of epistemic, mental, and professional discipline.

It treats impressive behavior under controlled conditions as proof of general reliability.

It confuses surface fluency with understanding and mistakenly assumes that systems somehow work out OK on their own.

Engineers who have never owned production systems are especially vulnerable to this error, because they have not yet internalized how often “working” systems fail in unexpected ways.

Fear is a different failure, but a failure nonetheless.

It assumes that because a tool can perform a task, it can absorb responsibility for that task.

This is a category error.

Tools do not bear accountability. People do.

When engineers retreat into narratives of irrelevance under the banner of realism, they are in practice abandoning professional duty.

Neither posture helps society.

Neither deserves trust.

What a Professional Response Looks Like

A professional response begins with responsibility.

Engineers remain responsible for the behavior of systems they design, deploy, and maintain, regardless of how much automation is involved.

“The model did it” is not an explanation; it is an abdication.

If a system cannot be explained, evaluated, or bounded, it should not be deployed in contexts where failure matters.

Professionals are also defined by epistemic discipline.

They distinguish what is known from what is uncertain, what has been validated from what merely appears to work.

They resist both excitement and panic because both distort judgment.

Calm assessment under uncertainty is not optional; it is the core professional skill.

Verification becomes more important, not less.

As generative systems make production cheaper, review, checking, auditing, and adversarial testing become the scarce skills.

Creation can be automated. Validation cannot be outsourced to systems that do not understand consequences.

Only those who understand the dynamics and history of a system end-to-end can vouch for it competently.
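To make the point concrete, here is a minimal, hypothetical sketch of what adversarial validation can look like in practice: a plausible-looking "generated" sort function with a subtle bug, caught not by eyeballing a demo but by checking a property against a trusted oracle on random inputs. The function names and the bug are invented for illustration.

```python
import random

# Hypothetical "AI-generated" function: looks right at a glance, but it
# silently drops duplicates because it sorts a set, not the list itself.
def generated_sort(xs):
    return sorted(set(xs))

def property_check(sort_fn, trials=1000):
    """Adversarial check: compare sort_fn against Python's built-in sorted()
    (the oracle) on many random inputs; return a counterexample if found."""
    for _ in range(trials):
        xs = [random.randint(-5, 5) for _ in range(random.randint(0, 10))]
        if sort_fn(xs) != sorted(xs):
            return xs  # counterexample: sort_fn disagrees with the oracle
    return None

# Any input with duplicates (e.g. [3, 3]) exposes the bug a quick demo misses.
counterexample = property_check(generated_sort)
```

The demo-friendly inputs all pass; it is the deliberately hostile random search that surfaces the failure. That is the shape of the scarce skill: designing checks, not just producing code.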

Authority, when exercised, must be tied to truth rather than status.

Professionals do not claim legitimacy by insisting on uniqueness, nor do they forfeit it by denying their value.

They earn authority by explaining limits clearly, correcting errors publicly, and refusing to overstate confidence.

Credibility comes from accuracy, not bravado.

Finally, a professional response is system-level.

Engineers must reason about sociotechnical systems: incentives, organizations, feedback loops, and long-term maintenance.

They ask uncomfortable questions, such as who pays when the system fails and how errors propagate beyond the technical boundary.

This is the difference between building artifacts and practicing engineering.

The Training That Produces Professionals

Professional judgment does not emerge from tool fluency or short courses alone.

It is cultivated through specific forms of training and experience.

The first is production responsibility.

There is no substitute for owning systems that can fail, being paged when they do, and living with the consequences of design decisions.

This experience teaches restraint, humility, and realism.

It is why experienced engineers react differently to AI hype than those who have only shipped prototypes.

Second is rigorous reasoning.

Training in logic, probability, statistics, failure analysis, and many other subjects, including the liberal arts, produces a human being capable of taking on real responsibility.

Professionals must train their thought, emotion, speech and actions to serve with competence.

Professionals ask what assumptions are being made, how brittle those assumptions are, and what evidence would falsify their beliefs.

Without this discipline, AI becomes a baseless confidence amplifier rather than a tool.

Third, writing must be treated as a core technical skill.

Design documents, postmortems, risk analyses, and model evaluations are not administrative overhead; they are how thinking is made inspectable.

Writing forces clarity and exposes gaps in understanding.

Managing AI systems is largely a writing problem: constraints, evaluations, prompts, and decision rationales are textual artifacts.
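As a small illustration of that claim, here is a hypothetical sketch of an evaluation as a textual artifact: a prompt, explicit constraints, and a recorded rationale, all plain text that a harness can check mechanically. The spec fields and constraint names are invented for the example.

```python
# Hypothetical evaluation spec: the prompt, the constraints, and the
# rationale are all text -- inspectable, reviewable, and version-controllable.
EVAL_SPEC = {
    "prompt": "Summarize the incident report in one sentence.",
    "constraints": [
        ("max_words", 25),
        ("must_mention", "root cause"),
    ],
    "rationale": "Long summaries get skimmed; the root cause must be explicit.",
}

def check_output(spec, output):
    """Return a list of violated constraints (an empty list means pass)."""
    failures = []
    for kind, value in spec["constraints"]:
        if kind == "max_words" and len(output.split()) > value:
            failures.append(f"too long: {len(output.split())} words")
        elif kind == "must_mention" and value not in output.lower():
            failures.append(f"missing phrase: {value!r}")
    return failures

# A stand-in model output; in practice this would come from the AI system.
output = "The root cause was a misconfigured retry policy in the payment service."
failures = check_output(EVAL_SPEC, output)
```

Writing the constraints and the rationale down is the engineering act; the harness merely enforces what the text already states.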

Fourth, professionals study failure.

Engineering history is rich with disasters caused not by a lack of intelligence, but by overconfidence, normalized deviance, and misaligned incentives.

These patterns repeat.

Engineers who do not study them are condemned to reenact them with more powerful tools.

Fifth, training must include organizational, political and human factors.

Most large failures are not purely technical.

They arise from communication breakdowns, incentive misalignment, mismanagement of various resources, and diffusion of responsibility.

Engineers who ignore these factors are operating with an awfully inaccurate model of reality.

Finally, professional formation requires an ethos rather than a mere “We’re special” identity.

The question is not whether engineers are special.

The question is whether they are worthy of trust in the important and complex matters at hand.

Trust is earned through clarity, restraint, and accountability over time, not through novelty or self-assertion.

The Task Before Us

AI does not eliminate software engineering as a profession, but it does demand introspection, reform, and evolution.

It merely removes the comfort of immaturity.

It makes excuses harder to sustain and sloppy thinking more visible.

In exchange, it raises the ceiling for those willing to accept responsibility and exercise judgment.

Software will still be required in the world, but the way it gets created and handled may change drastically in the coming months and years.

Growing up will not feel glamorous. It will involve less excitement, less fear, more verification, and more uncelebrated work.

But that is the price of being a profession rather than a trade, and of being trusted rather than merely used. A “coder” is a tradesman (output-focused). An “engineer” is a professional (outcome-focused, risk-bearing). AI may automate the trade, but it stresses the profession.

This is the test. How software engineering responds will determine whether it matures or forfeits its claim to authority altogether.
