
David Ingraham

Your Cypress Tests Are Slower Than You Think

If you’ve worked on a decent-sized Cypress suite, you’ve lived this.

You push code. Pipeline kicks off.
Then you wait.
And you wait.
…Still waiting.

You tab over to Slack. Maybe check a PR. Start something else.
Eventually results come back. Green, red, who knows.

But here’s the problem: by the time you see them… you don’t care anymore. That’s the real issue with slow tests. Not time.

It’s that they quietly disconnect themselves from how engineers actually work. And once that happens, the tests don’t just get slower, they stop mattering.

So yeah, we’ll talk about how to speed things up.
But this is really about something else: why slow tests break the feedback loop, and how to fix it before your suite turns into expensive decoration.


The Hidden Cost of Slow Tests

Everyone talks about wait time. That’s the obvious part.

The real cost is behavioral.
Fast tests keep developers in the loop:

write code → run tests → fix problem → move on

It’s tight. Feedback is immediate. You stay in context.
Slow tests break that loop:

write code → push commit → switch task → failure appears later

Now you’re doing archaeology. And unless you secretly enjoy digging through commits like it’s a crime scene, it’s not a great place to be.

Multiply that across a team and something subtle happens.
People stop waiting.

Tests still run. Pipelines still pass or fail.
But they stop influencing decisions.

That’s the line most teams don’t notice crossing.

And if you’ve been around long enough, you know where that ends.

First they’re flaky.
Then they’re ignored.
Eventually, someone suggests deleting or skipping them “just for now.”

And somehow, they never come back.


The 1-Second Problem

Most slow test suites don’t have a single obvious culprit. They bleed out from a hundred small paper cuts.

A second here.
Two seconds there.
A login repeated 100 times.
A page load you didn’t question.
One hard-coded wait that turns into ten next month.

Each one feels harmless. That’s the problem though. Tests don’t run once — they run at scale. And when you scale up a single second, it adds up fast.

For example:

1 second × 300 tests = 5 minutes

Five minutes, from something no one even noticed or questioned. Now imagine what happens when you have ten of these 1-second problems. Now 100. That’s how teams end up with 30-minute pipelines without ever making a “bad” decision. It’s not one accidental mistake.

It’s long-term, unnoticed accumulation.
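
To make the accumulation concrete, here is the back-of-the-envelope math as runnable JavaScript. The runs-per-day figure is an assumption for illustration, not a measurement:

```javascript
// Cost of one unnoticed second, scaled up
const tests = 300
const wastedSecondsPerTest = 1
const runsPerDay = 20 // assumed: pushes + PR re-runs across a team

const perRunMinutes = (tests * wastedSecondsPerTest) / 60
const perDayMinutes = perRunMinutes * runsPerDay

console.log(perRunMinutes) // 5 — the five minutes above
console.log(perDayMinutes) // 100 — over an hour and a half, every day
```

And that is a single 1-second problem. Ten of them turns the per-run cost into fifty minutes of machine time per pipeline.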


Test Optimizations

Now that we’ve defined why this matters, let’s actually fix it. We’ll break the solutions into three core sections: Test, Pipeline, and Startup Optimizations. First, let’s start where the damage usually begins: inside the tests.

Replace Hard-Coded Waits

Hard waits are one of the easiest ways to waste time and one of the easiest problems to spot.

cy.wait(5000)

This forces your test to sit there for a full 5 seconds, regardless of whether the app actually responded in 200ms. Every time.

Instead, wait for real signals. Use dynamic waits, such as waiting on a real endpoint using cy.intercept.

// Define an intercept on a real API
cy.intercept('/api/orders').as('orders')

cy.visit('/')

// Wait on the real load before interacting with the UI
cy.wait('@orders')
cy.get('[data-cy=order-button]').click()

You can also add “safe-guard” assertions on the UI, utilizing Cypress’s built-in retry system.

cy.visit('/')

// Wait for the button to exist in the DOM and be in a clickable state
cy.get('[data-cy=order-button]')
  .should('be.visible')
  .and('be.enabled')
  .click()

The key idea is simple: don’t wait for time, wait for state.

Cache Login Sessions

Login flows are also silent killers and a great example of the 1-second problem. They don’t look expensive, but run them across hundreds of tests and suddenly you’ve built a login simulator, not a test suite.

This is especially common in suites where every test starts the same way:

cy.visit('/')
cy.get('[data-cy=email]').type('test@test.com')
cy.get('[data-cy=password]').type('password')
cy.get('[data-cy=login]').click()

Again, this looks harmless until you realize you’re doing it 200+ times per run. Instead, use cy.session. Cypress gives you a built-in way to cache authentication and reuse it across tests.

cy.session('user', () => {
  cy.visit('/')
  cy.get('[data-cy=email]').type('test@test.com')
  cy.get('[data-cy=password]').type('password')
  cy.get('[data-cy=login]').click()
})

Now instead of logging in every time, Cypress will:

  • run this once
  • cache the session
  • restore it for the rest of your tests

No extra UI steps. No repeated logins. No wasted time.
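
To make this reusable across every spec, one common pattern is to wrap cy.session in a custom command. A minimal sketch, assuming the same data-cy login form as above; the /api/me validation endpoint is an invented example:

```javascript
// A reusable login command built on cy.session
Cypress.Commands.add('login', (email = 'test@test.com', password = 'password') => {
  cy.session(
    ['user', email],
    () => {
      cy.visit('/')
      cy.get('[data-cy=email]').type(email)
      cy.get('[data-cy=password]').type(password)
      cy.get('[data-cy=login]').click()
    },
    {
      // If the cached session has gone stale, re-run the setup instead of failing
      validate() {
        cy.request('/api/me').its('status').should('eq', 200)
      },
      // Reuse the cached session across spec files, not just within one
      cacheAcrossSpecs: true,
    }
  )
})
```

Tests then call cy.login() in a beforeEach and land already authenticated.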

Avoid Redundant Checks

To be clear, assertions aren’t the problem. End-to-end tests live or die by the assertions they make. The issue is repeated work.

Cypress actually encourages multiple assertions when they help prove meaningful behavior. The problem shows up when tests keep re-querying the DOM or stacking checks that don’t add much confidence. Every extra cy.get() call has a cost.

Take a look at this example validating a simple success state.

cy.get('.order').should('exist')
cy.get('.order').should('be.visible')
cy.get('.order').should('contain', 'Order created')
cy.get('.order-title').should('exist')
cy.get('.order-price').should('exist')

Yes, this proves the order exists and contains the expected text, but at some point we’re not improving the test anymore. We’re just making Cypress do more laps, adding a little performance bloat to every run.

Most of the time, you can collapse this into something tighter:

cy.get('.order')
  .should('be.visible')
  .and('contain', 'Order created')

Or, if the real goal is proving the user outcome:

cy.contains('Order created')

The goal isn’t fewer assertions for the sake of it. The goal is fewer redundant checks, fewer duplicate queries, and more signal per line of test code.

Mock Expensive Requests

Your app does a lot of things your test does not care about.
Images. Analytics. Recommendation engines. Background noise.

And yet, your tests patiently wait for all of it, especially when running against a prod-like environment. This usually shows up as slow page loads, inconsistent timing, and tests that feel fine locally but drag in CI.

Let’s say your page loads a directory which includes on load:

  • 20 images
  • 3 analytics calls
  • a recommendation engine hitting some external service

None of that matters if your test is just trying to verify:
“Can a user create an order?”

But your test still pays the cost for it. Every time.

Instead, intercept and stub these requests to skip the overhead entirely, using our trusty friend cy.intercept again.

cy.intercept('/images/*', { fixture: 'placeholder.png' })
cy.intercept('/analytics/*', { statusCode: 200 })
cy.intercept('/recommendations/*', { fixture: 'empty.json' })

Now the page loads faster, with less network noise, in a test that has no dependency on these calls.

That being said, mock responsibly. Don’t mock if:

  • the request is part of the core user flow
  • you’re validating real integration behavior
  • the response directly impacts what the user sees or does

If your test is about creating an order, then test that, not whether your dashboard images finished loading first.


Pipeline Optimizations

Even with clean, efficient tests, your pipeline can still be the bottleneck.

At some point, the slowdown isn’t in your tests anymore. It’s in your infrastructure.

Run Tests in Parallel

This is usually the biggest win available, and it’s pretty straightforward.

If you’re running everything on a single machine, you’re leaving speed on the table.

Example with 200 tests:

  • 1 machine → ~20 minutes
  • 4 machines → ~5 minutes

Same number of tests. Same coverage. Same value. Just distributed.

If you’re not parallelizing, you’re choosing to be slow.

Most CI providers support this out of the box, and Cypress offers it natively through Cypress Cloud. Without Cloud, there are free plugins that achieve the same thing, such as cypress-split.
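
With cypress-split, for example, the setup is roughly this (check the plugin’s README for current details):

```javascript
// cypress.config.js
const { defineConfig } = require('cypress')
const cypressSplit = require('cypress-split')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      cypressSplit(on, config) // registers the splitting logic
      return config            // important: return the (possibly mutated) config
    },
  },
})
```

Each CI machine then runs something like `SPLIT=4 SPLIT_INDEX=0 npx cypress run`, with the index varying per machine.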

Your Pipeline Is Only As Fast As the Slowest Spec

Parallelization helps, until one pesky spec decides to ruin everything.

spec A → 12 minutes  
spec B → 3 minutes  
spec C → 3 minutes

Your pipeline time? Yeah, 12 minutes.

This is where a lot of teams get tripped up. They split tests by file count instead of runtime.

Knowing your slowest spec lets you target the biggest performance bottlenecks directly instead of guessing.

Focus on outliers. Break them up if possible, or optimize what’s inside them and then track again. You should always know your slowest tests and why they’re slow.

If possible, all specs should finish within roughly a minute of one another, so the balance stays scalable as the suite grows and additional resources are added to your parallelization.
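
A toy model shows why splitting by runtime beats splitting by file count. This is back-of-the-envelope JavaScript with made-up spec durations, not a real scheduler:

```javascript
// Pipeline time is the max total runtime across machines.
const specs = [12, 3, 3, 2, 2, 1, 1] // spec runtimes in minutes (illustrative)

// Naive: deal specs out round-robin, ignoring runtime
function splitByCount(durations, machines) {
  const loads = Array(machines).fill(0)
  durations.forEach((d, i) => { loads[i % machines] += d })
  return Math.max(...loads)
}

// Runtime-aware: longest spec first, always onto the least-loaded machine
function splitByRuntime(durations, machines) {
  const loads = Array(machines).fill(0)
  const sorted = [...durations].sort((a, b) => b - a)
  for (const d of sorted) {
    loads[loads.indexOf(Math.min(...loads))] += d
  }
  return Math.max(...loads)
}

console.log(splitByCount(specs, 2))   // 18 — one machine got most of the heavy specs
console.log(splitByRuntime(specs, 2)) // 12 — bounded by the single slowest spec
```

Note that the 12-minute spec dominates either way; that’s the “slowest spec” ceiling, and past that point only breaking the spec up helps.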

Cache Cypress and Node Modules

CI loves to pretend it’s starting from scratch every run.

Downloading Cypress. Installing dependencies. Setting up browsers.

But want to know a secret? You don’t need to pay that cost every time.

Cache:

  • node_modules
  • Cypress binary
  • browser dependencies (if applicable)

This is one of those optimizations that feels boring, but it shaves minutes off every single run. Always review best practices for your CI provider to ensure your pipeline isn’t dragging unintentionally.

Avoid Persisting Artifacts for Passing Tests

Videos and screenshots are incredibly useful when something breaks.

When everything passes? They’re just expensive souvenirs.

If you’re uploading artifacts for every run, you’re increasing pipeline time, increasing storage costs, and making it harder to find what actually matters.

Turn them off for green runs. Keep them for failures where they’re actually useful.
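
Cypress exposes an after:spec event in setupNodeEvents that makes a failures-only policy straightforward. This follows the pattern in the Cypress docs for deleting videos of passing specs; adapt it to your own config:

```javascript
// cypress.config.js — delete videos for specs with no failing attempts
const { defineConfig } = require('cypress')
const fs = require('fs')

module.exports = defineConfig({
  e2e: {
    video: true,
    setupNodeEvents(on) {
      on('after:spec', (spec, results) => {
        if (results && results.video) {
          // Did any test in this spec fail on any attempt?
          const failures = results.tests.some((test) =>
            test.attempts.some((attempt) => attempt.state === 'failed')
          )
          if (!failures) {
            fs.unlinkSync(results.video) // keep videos only for failures
          }
        }
      })
    },
  },
})
```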


Startup Optimizations

Some of the worst delays don’t come from the tests themselves; they happen before the first test even runs.

And most teams miss it, because they’re focused on runtime, not startup.

Use APIs for Test Setup

UI data setup is the scenic route. And trust me, I love a gorgeous view just like the next person, but our end-to-end tests don’t need the extra travel.

Let’s pretend we want to test deleting a “project” from our application.

This is what it often looks like, simplified:

cy.visit('/')
cy.get('[data-cy=email]').type('test@test.com')
cy.get('[data-cy=password]').type('password')
cy.get('[data-cy=login]').click()

cy.get('[data-cy=create-project]').click()
cy.get('[data-cy=project-name]').type('Test Project')
cy.get('[data-cy=submit]').click()

Without realizing it, our test is re-validating the create flow and depending on that UI for delete functionality that should be isolated. Expand this, and a longer UI setup might take 5–10 seconds just to set state. Multiply that across your suite and the cost becomes obvious.

Instead, utilize cy.request to set up the state programmatically.

cy.request('POST', '/api/projects', data)

Now your setup runs in milliseconds instead of seconds. I wrote about this topic in depth, and how to easily achieve this pattern at scale in your test suite, in a blog here.

In short, use the UI when you’re testing the UI.
Not when you’re just setting the stage to get to real value.
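
Putting it together for the delete test, a sketch (the endpoint, response shape, and data-cy hooks are invented for illustration; authentication is omitted — see the session-caching section above):

```javascript
beforeEach(() => {
  // Seed a project through the API instead of the create-project UI
  cy.request('POST', '/api/projects', { name: 'Test Project' })
    .its('body.id')
    .as('projectId')
})

it('deletes a project', () => {
  cy.get('@projectId').then((id) => {
    cy.visit(`/projects/${id}`)
    // Only the delete flow runs through the UI — the thing we're actually testing
    cy.get('[data-cy=delete-project]').click()
    cy.get('[data-cy=confirm]').click()
    cy.contains('Project deleted')
  })
})
```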

Avoid Heavy Imports in Support Files

Every Cypress spec loads your support files.
Put simply, whatever you import there runs every time.

If you’re pulling in half your project globally, you’ve created a startup tax.
For clarification, here’s a GitHub issue that explains this very problem.

Just like everything else we’ve mentioned, it adds seconds to every spec file. So only load what you actually need.

  • move heavy imports into specific tests
  • lazy-load where possible
  • keep your support file lean
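
In practice that mostly means auditing your support file. A sketch with an invented helper name:

```javascript
// cypress/support/e2e.js — every spec loads this file, so keep it lean
import './commands'

// Avoid parking heavy imports here that only a few specs use
// ('heavy-fixture-builder' is an invented name, for illustration):
// import { buildHugeFixtures } from '../helpers/heavy-fixture-builder'
//
// Move that import into the one spec that uses it, or lazy-load it inside
// the test with a dynamic import() so only that test pays the cost.
```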

Here’s an in-depth article from the talented Murat Ozcan that explains this concept in expanded detail.

Review Your Preprocessor

Preprocessing is one of those hidden slowdowns most teams don’t think about. Essentially, bundling (or preprocessing) is the step where Cypress compiles and transforms your test files and their dependencies into browser-ready JavaScript before execution.

And in short, if that step is slow, everything is slow.

Cypress uses webpack by default, and for the most part it works; that default is exactly why nearly every team uses it. However, it’s not exactly famous for being quick in large test setups, and it can struggle compared to alternative solutions.

If your suite has grown, it’s worth looking at faster preprocessors like esbuild (another fantastic video by Murat here) or Vite (using cypress-vite). In a lot of projects, that switch alone noticeably cuts startup time.

If you are noticing significant performance latency just starting Cypress before tests run, then it might be time to reconsider your preprocessor.
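
As an example, swapping in esbuild is usually a small config change. A sketch using the community @bahmutov/cypress-esbuild-preprocessor package (check its README for current options):

```javascript
// cypress.config.js — replace the default bundler with esbuild
const { defineConfig } = require('cypress')
const createBundler = require('@bahmutov/cypress-esbuild-preprocessor')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      // Bundle spec files with esbuild instead of webpack
      on('file:preprocessor', createBundler())
      return config
    },
  },
})
```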

Review Your Config

Finally, sometimes the slowdown isn’t hiding in your tests at all, it’s sitting in your cypress.config quietly adding overhead to every run.

There are a few settings that are worth revisiting with performance in mind: retries, video, screenshots, and numTestsKeptInMemory.

Retries are the easiest one to overlook. They feel helpful, but they also hide flakiness and multiply runtime. A test that passes on the third attempt didn’t just pass, it ran three times.

Video and screenshot settings fall into a similar category. They’re incredibly useful when something breaks, but they don’t always need to be enabled for every passing test.

The point isn’t to turn everything off. It’s to be intentional.
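
Here’s what an intentional baseline might look like. The specific values are judgment calls for your team, not recommendations:

```javascript
// cypress.config.js — example values only
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  video: false,                  // enable selectively (e.g. only in CI) rather than everywhere
  screenshotOnRunFailure: true,  // screenshots on failure are cheap and useful
  numTestsKeptInMemory: 5,       // default is 50; lowering reduces memory pressure in open mode
  retries: { runMode: 1, openMode: 0 }, // remember: a retried pass still ran the test twice
  e2e: {
    baseUrl: 'http://localhost:3000', // illustrative
  },
})
```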

It’s also worth staying reasonably up to date with Cypress itself. Performance improvements do get shipped, and running an older version longer than you need to can quietly hold you back in ways that have nothing to do with your tests.


Final Tip: Be Data-Driven About Your Problem Tests

If you don’t measure it, it will rot.
Test suites don’t stay fast by accident.

You need to know where your time is going: your slowest tests, your slowest specs, and your flakiest tests.

That’s where the real problems live.

Otherwise, you’re guessing — and guessing doesn’t scale.

In most suites, a small number of slow or flaky tests are doing most of the damage. That’s where you start. Fixing your worst offenders will move the needle far more than tweaking 50 “okay” tests. Use data to identify what’s actually hurting you, surface flaky tests causing retries, and keep an eye on runtime drift over time.

Then fix what matters.

That’s how you get faster pipelines, more reliable results, and a test suite people actually trust.

For help identifying where your test flake is occurring, Sebastian Suero has a great plugin that can help with that, cypress-flaky-test-audit. And for how to build your own merge report to track these metrics, you can find my tutorial here.


Final, Final Tip: Does It Need to Be End-to-End?

The hard truth is that, after all these other tips, a lot of slow suites are slow because end-to-end tests are carrying work that does not actually belong there. This happens all the time on startup teams, or on teams where QA sits in its own lane and E2E becomes the default place to validate everything.

Cypress supports both end-to-end and component testing, and its docs are pretty clear that different test types exist for different kinds of confidence. If a check can be proven faster at the API or component level, that is usually the better place for it. That does not mean E2E is less valuable. It just means it should be used where it shines: proving real user flows, not carrying the entire quality strategy on its back.

Cypress can also be used for API-style testing, and community plugins like cypress-plugin-api by Filip Hric, build on that by adding a cy.api command for API-focused workflows in the Cypress runner.
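
The plugin mirrors cy.request’s arguments while rendering the request and response in the runner. A sketch with an invented endpoint:

```javascript
// Requires cypress-plugin-api installed and imported in the support file:
// import 'cypress-plugin-api'
it('creates an order through the API', () => {
  cy.api('POST', '/api/orders', { item: 'book' }) // endpoint/payload are illustrative
    .its('status')
    .should('be.oneOf', [200, 201])
})
```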

On the UI side, Cypress Component Testing gives you a way to validate rendering, state, and interaction at the component layer, which is often much faster and more targeted than proving the same thing through a full browser journey. A healthy suite usually has a mix of all three.


Summary

Most slow test suites don’t collapse because of one bad decision. They decay over time. A login repeated too often, a wait no one questioned, a setup flow that felt “good enough.” Individually harmless, but together they quietly drag down your feedback loop.

And once that loop breaks, your tests stop shaping behavior. Failures show up too late, context is lost, and fixing issues becomes slower and more painful than it needs to be.

The most valuable moment for a failure isn’t ten minutes later in CI. It’s right after you push, while everything is still fresh.

Fast tests protect that moment. Slow tests erase it.

With that, happy testing.
