Harsh Thakkar


Why Your AWS CI/CD Pipeline May Be Slower Than It Should Be (Mine Was Too)

It was one of those days where nothing was technically broken… but everything felt off.

Deployments were going through. Pipelines were green. No alarms screaming.
And yet every push took forever.

I remember staring at the screen after triggering a simple change. A tiny config tweak. Something that should’ve gone through in a couple of minutes. Instead, I watched my AWS pipeline crawl… step by step… like it had all the time in the world.

For a YAML change.


At the time, I told myself, “Yeah, CI/CD pipelines are just slow sometimes.”
That was my first mistake.

The lie we tell ourselves

If your pipeline works, you stop questioning it.

That’s what I did.

I had a pretty standard setup:

  • Code pushed → CodePipeline triggers
  • CodeBuild runs tests + build
  • Artifacts go to S3
  • Deploy via CodeDeploy

Nothing exotic. No weird hacks. It looked clean.
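In buildspec terms, the CodeBuild part of that setup was roughly this. (The commands and paths below are illustrative, not my exact file — swap in whatever your project uses.)

```yaml
# buildspec.yml — minimal sketch of a "tests + build in one phase" setup
version: 0.2

phases:
  install:
    commands:
      - npm ci              # fresh dependency install, every run
  build:
    commands:
      - npm test            # tests and build lumped together
      - npm run build

artifacts:
  files:
    - '**/*'
  base-directory: dist      # everything under dist/ becomes the artifact
```

Clean, readable… and with every run starting from zero, as I was about to find out.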

But under the surface, it was quietly inefficient in ways I didn’t notice for months.


The moment it clicked

One afternoon, I had to deploy 5 times in a row (😅).
Same pipeline. Same steps. Same wait… every time.

That’s when it hit me:

I was spending more time waiting for my pipeline than actually coding.

And worse… I had accepted it.


Where the time was actually going

I finally sat down and traced a single run end-to-end. Not casually. Properly.

And yeah… it was uncomfortable.

1. CodeBuild was doing way too much

I had bundled everything into one build phase:

  • install dependencies
  • run tests
  • build artifacts
  • package everything

Seemed efficient, right?

Except… every single run started from scratch.

No caching.

So even if I changed one line, it:

  • reinstalled node modules
  • rebuilt layers
  • redid everything like it had never seen my project before

That alone was eating 6–8 minutes.

What I didn’t realize at the time:

Stateless builds are great… until they’re unnecessarily stateless.


2. I ignored caching because it felt “optional”

AWS makes caching in CodeBuild possible, but not exactly obvious.

I skipped it initially because:

  • It adds config complexity
  • Cache invalidation is annoying
  • “It’s fine for now”

Classic.

When I finally enabled caching for dependencies (node_modules, pip, etc.), build times dropped almost immediately.

Not dramatically. But noticeably.

Still… caching comes with trade-offs:

  • Sometimes stale dependencies sneak in
  • Debugging weird build issues becomes harder
  • You need to think about cache keys (which I initially didn’t 😅)
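For reference, the buildspec side of dependency caching looks something like this. The paths are examples (an npm cache dir and pip's download cache); you also have to enable a cache on the CodeBuild project itself (S3 or local) or the `cache` section is ignored.

```yaml
# buildspec.yml excerpt — dependency caching sketch (paths are examples)
version: 0.2

phases:
  install:
    commands:
      # point npm's package cache into a directory CodeBuild will persist
      - npm ci --cache .npm

cache:
  paths:
    - '.npm/**/*'               # npm package cache
    - '/root/.cache/pip/**/*'   # pip download cache, if you use Python too
```

Enabling the project-level cache is a one-time change, e.g. `aws codebuild update-project --name <project> --cache type=S3,location=<bucket>/<prefix>` (bucket and prefix are yours to choose).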

3. Serial execution everywhere

This one hurt a bit.

My pipeline stages were strictly linear:

Build → Test → Package → Deploy

No parallelism. No optimization.

Even independent steps were waiting on each other.

Looking back, I could’ve:

  • Run tests in parallel with certain build steps
  • Split pipelines by service instead of monolith builds
  • Avoid blocking everything for one slow task

But I didn’t. Because linear pipelines are easy to reason about.

And sometimes… we choose simplicity over speed without realizing the cost.
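For what it's worth, CodePipeline runs actions in parallel when they share the same `RunOrder` within a stage. A sketch of what I could have done — project and artifact names here are hypothetical:

```yaml
# CloudFormation excerpt — two CodeBuild actions running in parallel
- Name: BuildAndTest
  Actions:
    - Name: UnitTests
      RunOrder: 1                 # same RunOrder as the action below
      ActionTypeId:
        Category: Build
        Owner: AWS
        Provider: CodeBuild
        Version: '1'
      Configuration:
        ProjectName: my-app-tests # hypothetical test-only project
      InputArtifacts:
        - Name: SourceOutput
    - Name: BuildArtifacts
      RunOrder: 1                 # runs in parallel with UnitTests
      ActionTypeId:
        Category: Build
        Owner: AWS
        Provider: CodeBuild
        Version: '1'
      Configuration:
        ProjectName: my-app-build # hypothetical build-only project
      InputArtifacts:
        - Name: SourceOutput
```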


4. Artifact handling was… lazy

I was passing around large artifacts between stages.
Bigger than they needed to be.

Stuff that didn’t even change between runs was getting repackaged and uploaded again.

It wasn’t obvious at first. But S3 upload + download latency adds up.

Especially when:

  • You compress everything every time
  • You don’t separate static vs dynamic assets
  • You treat artifacts like a dumping ground

In hindsight, this was just… sloppy engineering.
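One way to stop treating artifacts as a dumping ground is CodeBuild's secondary artifacts, which let you split output by how often it changes. A sketch (the identifiers `app` and `static` are made up, and must match secondary artifacts configured on the build project):

```yaml
# buildspec.yml excerpt — split artifacts instead of one big bundle
artifacts:
  secondary-artifacts:
    app:
      files:
        - 'dist/**/*'       # code that changes every run
      base-directory: '.'
    static:
      files:
        - 'assets/**/*'     # rarely changes; later stages can skip it
      base-directory: '.'
```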


5. Over-triggering pipelines

This one was subtle.

Every push triggered the full pipeline even for:

  • README changes
  • minor config tweaks
  • non-deployable updates

So I was burning compute time (and patience) on changes that didn’t need deployment.

A simple filter or conditional trigger would’ve helped.

But I didn’t add it until much later.
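If you're on a V2 pipeline with a CodeConnections (CodeStarSourceConnection) source, that filter is a few lines of trigger config. A sketch — the source action name, branch, and paths are examples:

```yaml
# CloudFormation excerpt — skip runs for docs-only pushes (V2 pipelines)
PipelineType: V2
Triggers:
  - ProviderType: CodeStarSourceConnection
    GitConfiguration:
      SourceActionName: Source      # must match your source action's name
      Push:
        - Branches:
            Includes:
              - main
          FilePaths:
            Excludes:               # pushes touching only these won't trigger
              - 'README.md'
              - 'docs/**'
```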


What changed after all this

Not overnight. And not perfectly.

But gradually:

  • I split heavy builds into smaller, more focused steps
  • Added caching (carefully… and with some regret during debugging 😅)
  • Introduced conditional triggers
  • Reduced artifact size and duplication
  • Parallelized what I could without making things unreadable

And the result?

My pipeline dropped from ~18 minutes to around 6–8 minutes on average.

Still not blazing fast. But acceptable.

More importantly, it felt under control.


The part nobody talks about

Faster pipelines aren’t free.

Every optimization introduces trade-offs:

  • Caching → faster builds, harder debugging
  • Parallelism → speed, but more complexity
  • Smaller artifacts → better performance, but more structure required
  • Conditional triggers → efficiency, but risk of missing deployments

There’s no perfect setup.

Just… intentional ones.


What I’d do differently now

If I were starting fresh:

I wouldn’t aim for the perfect pipeline.

I’d aim for visibility first.

  • Measure each stage early
  • Understand where time goes
  • Optimize only what actually hurts

Because honestly…

Most pipelines aren’t slow because of AWS.
They’re slow because of decisions we stopped questioning.


Final thought

If your pipeline feels slow, it probably is.

And if you’ve gotten used to it… that’s the real problem.

I did too.

Until one day I couldn’t ignore it anymore.
