Leon Pennings

Posted on • Originally published at blog.leonpennings.com

When CI/CD Becomes the Goal: The Quiet Erosion of Engineering Ownership

Software delivery has become one of the most ritualized practices in modern development.

Pipelines are longer.

Checks are stricter.

Deployments are more automated.

Dashboards are greener than ever.

Yet in many teams, software has not become more engineered.

It has become more processed.

That distinction matters.

CI/CD was never intended as an excuse to pile machinery on top of weak engineering. It started as a practical response to real problems. But somewhere along the way, much of the industry stopped using it to support strong engineering and began using it to compensate for its absence.

That is where things quietly went wrong.


What CI Originally Solved

The original idea behind Continuous Integration was straightforward.

It was never primarily about pipelines, YAML, or branch policies. It was about forcing reality into the room early.

Developers were expected to integrate frequently — often daily — into a shared codebase. The goal was simple: prevent teams from drifting into parallel worlds and discovering too late that their work didn’t fit together.

That solved a real problem.
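Mechanically, that daily integration loop is simple. A minimal sketch in plain shell, assuming a trunk-based setup; the branch name, the `build.sh`/`test.sh` scripts, and the `DRY_RUN` guard are illustrative, not taken from any particular team:

```shell
#!/bin/sh
# Sketch of a daily integration loop against a shared mainline.
# DRY_RUN=1 (the default here) prints each step instead of
# executing it, so the sequence can be read and run safely
# without a real repository.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Pull the team's latest work into your local copy first...
run git fetch origin
run git rebase origin/main

# ...verify the *combined* result locally...
run ./build.sh
run ./test.sh

# ...then publish, so everyone else now integrates against
# your change as well.
run git push origin main
```

The point is not the commands but the cadence: the shared mainline is touched, and verified, every day, so divergence never gets a chance to accumulate.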

Frequent integration forced teams to confront overlap, collisions, ambiguity, and unintended coupling while the cost of correction was still low. But CI did something subtler and arguably more important: it reinforced the team while development was still happening.

Developers didn’t merely discover each other’s work after the fact. They had to continuously adapt to one another’s choices, assumptions, and interpretations of the system in the moment. That pressure was not a flaw. It was the point.

This is how engineering sharpens itself — not by letting everyone disappear into isolated implementation tunnels and comparing answers at the end, but by shaping and correcting each other during the act of construction. Real engineering teams do not just divide work. They reinforce shared understanding.

Original CI made integration a living team concern rather than a delayed administrative event.

That was healthy engineering.


What Continuous Delivery Originally Solved

Continuous Delivery was aimed at a different concern than CI.

Not integration itself, but the path from integrated code to running software.

And to be fair, that was not a fake concern.

But it also was not universally the disaster modern delivery culture sometimes pretends it was.

In many Java systems, deployment was already fairly boring. An application server was stopped, a WAR or EAR was replaced, the instance was restarted, and the system was verified. That was not always elegant, but neither was it some fundamental engineering crisis.
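That "boring" cycle fits in a few lines of shell. A hedged sketch of the stop/replace/restart/verify sequence; the service name, artifact path, and health URL are placeholders, not drawn from any real system:

```shell
#!/bin/sh
# Sketch of the classic WAR redeploy cycle. Every name and path
# below is an illustrative placeholder. DRY_RUN=1 (the default)
# prints the steps instead of executing them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run systemctl stop appserver                 # 1. stop the server
run cp app.war /opt/appserver/webapps/       # 2. replace the artifact
run systemctl start appserver                # 3. restart
run curl -fsS http://localhost:8080/health   # 4. verify it came back
```

Four steps, easily scripted. Which is the point: the deployment itself was rarely the hard part.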

So the real value of CD was not that it magically solved an impossible deployment problem.

Its promise was narrower and more practical: to make the release path more repeatable, more standardized, less person-dependent, and easier to execute consistently across teams and environments.

That is a reasonable goal.

And in some environments, it becomes more than reasonable — it becomes necessary.

Once deployments span multiple machines, rolling restarts, clustered services, or orchestrated server fleets, manual deployment stops being merely inconvenient and starts becoming operationally impractical. At that point, automation is not theater. It is simply the sane way to move software safely and consistently.

That is where CD has real value.

But not all release friction was technical.

In many organizations, a significant part of the “deployment problem” came from the surrounding structure itself: separate infrastructure departments, ticket-driven handoffs, release scheduling rituals, and operational processes that turned even simple deployments into expensive coordination exercises.

That pain was real — but it is important to name it accurately.

Often, the difficulty was not in replacing the software.

It was in navigating the organization around it.

Modern delivery automation did remove a great deal of that friction.

But in many cases, the underlying pattern did not disappear. It simply moved.

Where infrastructure teams once controlled servers and release windows, platform and pipeline teams now increasingly control the mechanics of delivery itself. The form changed. The separation often did not.

And that matters more than it first appears.

Because once the release path is defined by people who do not carry the semantic or business consequences of the software, the pipeline can quietly become a surrogate for ownership.

That is where the trade-offs began.


Where It Started to Go Wrong

The issue is not that CI/CD solved fake problems. The issue is that much of the industry adopted the tooling and rituals while quietly abandoning the engineering assumptions that gave those practices their value.

Once that happened, CI/CD stopped reinforcing good engineering and started compensating for weak engineering instead.

A lot of what now passes for “CI” is no longer continuous integration.

It is deferred reconciliation.

Developers work in isolation on long-lived branches, treating the merge as the first serious moment of contact with the rest of the system. The pain that CI was designed to expose early is now allowed to accumulate until the branch is “ready.” The pipeline creates the illusion of discipline, but the underlying practice has shifted.

The old model forced developers to adapt to each other continuously.

The modern branch-heavy model lets them adapt only at the end.

What makes this regression more serious is that it did not happen accidentally. In many teams, CI was gradually reshaped to serve a different goal: continuous deployment of independently developed changes.

That sounds efficient, but it came with a structural trade-off.

In order to deploy “each feature” continuously, work first had to become isolatable. That pushed development toward branch-based workflows, delayed integration, and feature-level thinking. The unit of progress stopped being the continuously evolving shared system and became the individually shippable change.

And once that shift happened, CI changed with it.

What used to be immediate feedback on a real check-in against the shared codebase became a staged validation process around isolated work. The branch is tested. The pull request is reviewed. The pipeline is green. But the fully integrated system — in motion, under changing conditions, with multiple real changes meeting each other — is often not meaningfully encountered until much later.

That is not a small process adjustment.

It is a relocation of feedback.

And when feedback moves later, risk moves with it.


The Social Cost Nobody Mentions

This changes far more than code flow.

It changes the social structure of development itself.

Instead of reinforcing each other during construction, developers increasingly become delayed reviewers, test-runners, or approval gates. The shared act of building gives way to a serialized process of isolated work followed by late validation.

That may still produce working software, but it does not produce the same quality of team thinking.

The old model created friction early, while people were still shaping the solution together. The newer model often postpones that friction until after mental commitment has set in. At that point, integration becomes negotiation rather than collaboration.

That is a significant regression.

A team stops behaving like a team.

It starts behaving like a collection of individuals working in parallel and negotiating reality afterward.

And once that happens, the pipeline begins to replace the team as the thing that “validates” software.

That is a dangerous substitution.

Because a team can challenge assumptions, surface ambiguity, and expose misunderstandings while the system is still being shaped.

A pipeline cannot.

It can only tell you whether a predefined process passed.

It cannot tell you whether the software still makes sense.


The Illusion of Delivery Maturity

Continuous Delivery has suffered a parallel fate.

In theory, CD makes deployments safe by making them repeatable. In practice, many teams achieve “safety” by surrounding brittle systems with ever-growing layers of process, abstraction, and automation. The application becomes harder to understand. The deployment model grows more complex. And the pipeline swells to absorb complexity that should never have existed in the software itself.

Eventually, the release system becomes more elaborate than the software it delivers.

This raises an uncomfortable question:

Are we automating a healthy system, or are we automating around an unhealthy one?

If deployment is difficult, brittle, or mysterious, there are usually only two explanations:

  • The system genuinely operates in a complex environment.

  • The software was never designed with operability in mind.

The first is sometimes unavoidable.

The second is too often ignored.


Good Deployment Begins in Design

Much deployment pain is treated as the inevitable cost of “modern systems.” In many business applications, that pain is not inevitable — it is designed in.

A well-engineered application should be deployable because it was built to be deployable: operational state kept where it belongs, environment-specific behavior minimized, startup made deterministic, migrations treated as part of the lifecycle, and only what truly needs to vary externalized.
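As one small illustration of "externalize only what truly needs to vary": a startup wrapper that takes environment-specific values from the environment, fails fast when they are missing, and fixes everything else. The variable names and the commented-out launch line are invented for the example:

```shell
#!/bin/sh
# Sketch: deterministic startup with minimal externalized config.
# Only values that genuinely differ per environment come from the
# environment; everything else is fixed in the artifact.

start_app() {
  # Fail fast at startup if required configuration is absent,
  # instead of discovering the gap mid-request in production.
  [ -n "${DB_URL:-}" ] || { echo "DB_URL must be set" >&2; return 1; }

  # Values with a sane fixed default do not need to vary.
  PORT=${PORT:-8080}

  echo "starting on port $PORT against $DB_URL"
  # exec java -jar app.jar   # actual launch omitted in this sketch
}

# Demo: a missing value is rejected immediately...
unset DB_URL
start_app || echo "refused to start"

# ...and a configured one starts deterministically.
DB_URL=jdbc:postgresql://db.internal/app start_app
```

An application built this way is deployable by construction; the pipeline only has to move it, not compensate for it.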

When deployment is simple by design, the need for pipeline heroics drops dramatically.

Automation then becomes what it was meant to be: a way to remove repetition and error from a sound process — not a bandage for an unsound one.


The Dangerous Slide into “Production as Test Environment”

This is where the earlier shift becomes dangerous.

When integration is no longer happening continuously during development, reality does not disappear.

It simply waits.

And increasingly, that reality is encountered much later — often in environments close to or inside production.

This is why so many modern delivery models quietly drift toward using production as their final validation environment. Not because teams explicitly decide to “test in production,” but because production is often the first place where the system as a whole meets changing real-world conditions in any meaningful way.

That is a very different feedback model from original CI.

Original CI gave teams rapid feedback on check-ins against a shared and continuously evolving codebase. Modern branch-heavy CI/CD often gives rapid feedback on isolated changes, then relies on deployment frequency to surface what only the integrated whole can reveal.

That is not the same kind of safety.

It is simply a different place to discover reality.

Smaller and faster deployments are often presented as inherently safer.

But that is only true if one quietly assumes that the meaning and impact of a change are already well understood.

In practice, that is often exactly what is not true.

A smaller deployment unit may reduce rollback scope or make blame attribution easier, but that is not the same as reducing actual engineering risk. If anything, the opposite can happen: the change is seen by fewer people, discussed less deeply, and integrated less continuously before it reaches production.

That does not reduce uncertainty.

It merely packages uncertainty into smaller increments.

And when production changes multiple times per day, stability itself begins to shrink. The system is only as stable as the scenarios already captured in the automated tests — tests which are themselves usually adapted to the most recent expected path into production.

That creates a dangerous illusion of control.

The software appears validated, but only within the shrinking boundary of what was recently anticipated.


The Semantic Risk Pipelines Cannot See

More importantly, the true impact of a change is often not visible from the code itself.

A seemingly trivial modification for a developer can carry major domain consequences. And a technically substantial change can sometimes be domain-trivial. That asymmetry matters.

Because developers are not domain experts.

They can understand the implementation, but they cannot reliably infer the full business meaning of a change from code alone — not without sustained discussion and feedback from people who actually understand the domain.

And the most dangerous part is that this is not predictable.

It is not true that every change requires deep domain validation.

But it is also not reliably obvious which changes do.

That is exactly why semantic risk cannot be reduced to diff size, deployment frequency, or pipeline confidence.

Many of the hardest failures are not technical crashes or exceptions. They are semantic failures: the system behaves exactly as the code and tests dictate, yet wrongly according to the business.

That is where domain experts matter.

And no amount of deployment frequency changes that fact.


Human Validation Is Not the Enemy of Engineering

One of the stranger modern assumptions is that removing human judgment from the release path is always progress.

It is not.

There is a crucial difference between automating repeatable mechanics and eliminating deliberate validation. Those should never be conflated.

A strong delivery process should automate the mechanical parts — build, package, verify, deploy to controlled environments, reproduce release steps consistently.

That is sensible.

But whether a business-critical change should be exposed to real users is not always a purely technical question. In many systems, it is also a domain question.

Human validation is not a sign of immaturity. Sometimes it is the last remaining sign that someone still understands the difference between technical correctness and business correctness.

That distinction is too often lost.


Application Quality Is Not Generated By Tooling

Part of the problem is that “quality” itself has increasingly been redefined through the lens of tooling.

In many organizations, delivery practices are no longer primarily shaped by engineers with deep ownership of the software and its domain. They are shaped by process-specialized roles, platform teams, and tooling consultants whose authority often comes from familiarity with delivery systems rather than from responsibility for the software’s behavior, design, or business consequence.

That changes what gets optimized.

Quality slowly stops meaning clarity, simplicity, robustness, and domain correctness.

It starts meaning compliance: green pipelines, approved stages, scan completion, branch policy adherence, and process conformance.

Those may be useful signals.

But useful signals can become dangerous substitutes.

And that is how problem analysis gets replaced by cargo cults.


The Real Regression: Loss of Ownership

Underneath all of this lies a deeper problem than pipelines or deployment buttons.

The quiet regression is the loss of engineering ownership.

Modern delivery culture has made it increasingly possible for developers to produce deployable software without truly understanding:

  • how the system runs

  • how it is released

  • how it evolves

  • how it fails

  • how it behaves in production

  • how it fits the business domain as a whole

That is not progress.

That is separation from consequence.

Once that separation occurs, the pipeline stops being a tool.

It becomes a substitute for engineering responsibility.

Pipelines can tell you whether something passed the process.

They cannot tell you whether the software is truly understood.


What Healthy CI/CD Should Actually Look Like

Good CI/CD is not about maximum automation.

It is about preserving engineering discipline while reducing mechanical waste.

That usually looks far less glamorous than modern tooling culture suggests:

  • Developers integrate continuously into a shared mainline

  • Incomplete work is handled through discipline and design, not default branch isolation

  • Build and verification are automated and fast

  • Deployment to lower environments is repeatable and low-friction

  • Acceptance happens in a controlled way

  • Production deployment is simple enough to trust

  • Human validation exists where domain risk justifies it

  • The release path is designed to support ownership, not replace it
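Put together, the shape of such a release path is not exotic. A sketch in plain shell, with the mechanical steps automated and one explicit human gate where domain risk justifies it; all commands and the `RELEASE_APPROVED` flag are illustrative assumptions, not a prescribed implementation:

```shell
#!/bin/sh
# Sketch of a release path: automated mechanics, one deliberate
# human gate. Every command below is an illustrative placeholder.
# DRY_RUN=1 (the default) prints steps instead of executing them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Mechanical, automated, fast:
run ./build.sh
run ./test.sh
run ./deploy.sh staging

# Deliberate, human, domain-aware: exposure to real users is a
# decision, not a side effect of a green pipeline.
if [ "${RELEASE_APPROVED:-}" = "yes" ]; then
  run ./deploy.sh production
else
  echo "holding: awaiting domain validation before production"
fi
```

Note where the automation stops: everything repeatable is scripted, and the one judgment call that is not repeatable stays a judgment call.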

That is not anti-automation.

It is anti-theater.

And that distinction matters.


The Real Question

CI/CD is not really a tooling question.

It is a quality question.

The real issue is not whether a team has pipelines, feature flags, deployment jobs, or environment promotion stages.

The real issue is this:

Does the delivery process reflect a well-engineered system and a team that understands it — or is it compensating for the absence of both?

That is the question most teams avoid.

Because if the honest answer is the second one, then the pipeline is not a sign of maturity.

It is camouflage.

And that may be the most uncomfortable truth in modern software delivery:

sometimes what looks like engineering progress is really just process growth around declining engineering depth.

CI/CD used as a substitute for the very discipline it was supposed to support.

And once that happens, delivery stops being an expression of engineering quality.

It becomes a process for moving misunderstood software into production more efficiently.
