Tudor Brad

Posted on • Originally published at betterqa.co

Selenium vs Cypress: what we actually use and why

We have about 50 engineers across 24 countries working on client QA projects. On any given week, some of those projects run Cypress, some run Selenium, and a few run both. We did not pick sides. The client's stack, timeline, and constraints pick for us.

This is what we have learned from running both frameworks in production across dozens of projects. Not a feature matrix you can find on either tool's website. The actual pains and gains we deal with.

Where Cypress wins and why we reach for it

Cypress is the faster path to a working test suite on most modern web apps. That is the single biggest gain.

On a React or Vue SPA, a new tester can have a Cypress test running within an hour of cloning the repo. Install it, write a spec, run it. No driver downloads, no browser binaries to manage, no WebDriver protocol quirks. The test runner shows you what happened at each step with DOM snapshots. When a test fails, you can time-travel through the state to see exactly what went wrong.

For teams that write JavaScript and build SPAs, Cypress removes a pile of friction:

  • No driver management. Selenium needs ChromeDriver, GeckoDriver, etc., and they break whenever the browser auto-updates ahead of the pinned driver version. Cypress bundles its own browser management.
  • Automatic waiting. Cypress retries assertions until they pass or time out. In Selenium, you write explicit waits or sleep statements, and you still get flaky tests.
  • Network stubbing built in. Intercepting API calls, mocking responses, testing error states: all native. In Selenium, you need a proxy tool like BrowserMob or mitmproxy.
  • Readable test output. The Test Runner GUI is genuinely useful for debugging. Selenium's output is a stack trace and a prayer.
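The auto-waiting point is the big one in practice. Cypress retries a command's assertions until they pass or a timeout expires; on Selenium projects, teams end up hand-rolling the same loop. A minimal sketch of that retry idea in plain JavaScript (framework-free; `retryUntil` and its options are our own names, not any library's API):

```javascript
// Poll an assertion until it passes or the deadline expires. This is
// the idea behind Cypress's built-in retry-ability and Selenium's
// FluentWait, stripped down to a few lines.
async function retryUntil(assertion, { timeout = 4000, interval = 50 } = {}) {
  const deadline = Date.now() + timeout;
  for (;;) {
    try {
      return assertion(); // return whatever the assertion yields
    } catch (err) {
      if (Date.now() >= deadline) throw err; // out of time: surface the last failure
      await new Promise((resolve) => setTimeout(resolve, interval));
    }
  }
}
```

Every Selenium project we run carries some version of this in a utilities file; with Cypress, the framework does it for you on every `.should()`.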

We use Cypress on most greenfield SPA projects unless the client has a specific reason not to. It is the default recommendation when someone asks "what should we automate with?"

Where Cypress hurts

Here is the part that Cypress's marketing does not put on the homepage.

Single browser tab only. Cypress runs inside the browser. It cannot open a second tab. If your app opens a link in a new tab, sends you to an OAuth provider in another window, or does anything involving multiple browser contexts, you are stuck. We have had to rewrite application code to work around this on two separate client projects. That is not a testing framework problem, that is a testing framework creating an application problem.

Cross-origin is painful. Cypress historically blocked cross-origin navigation entirely. They added cy.origin() to handle it, but it is clunky. If your login flow redirects through an identity provider on a different domain, expect to spend time fighting Cypress rather than testing your app.

JavaScript only. Your test code must be JavaScript or TypeScript. If the team writes Python or Java and nobody knows JS, Cypress is not "easy to learn." It is easy to learn if you already know the language it requires. We have had QA engineers comfortable with Python spend weeks getting productive in Cypress because the language was the barrier, not the framework.

No mobile testing. Cypress tests web browsers. Period. If you need to test a native mobile app, or even a responsive site in an actual mobile browser, you need a different tool. We pair Cypress with Appium on projects that have both web and mobile, which means maintaining two frameworks anyway.

iframes are a headache. Cypress and iframes have a long, troubled history. The cy.iframe() command from community plugins works sometimes. Payment forms (Stripe, Braintree) that embed in iframes are consistently annoying to test with Cypress.

No parallel by default. Cypress's free tier runs tests sequentially. Parallel execution requires Cypress Cloud (paid) or a third-party orchestrator. On a project with 400+ tests, sequential runs took over 40 minutes. That kills CI feedback loops.
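For context, the orchestration trick itself is simple: split the spec list across machines and run each shard as its own CI job. A naive round-robin sketch (our own illustration, not Cypress Cloud's actual algorithm, which also balances shards by historical run times):

```javascript
// Round-robin spec files across N CI machines. Real orchestrators
// also weight by past run durations; this only balances by count.
function shardSpecs(specs, machineCount) {
  const shards = Array.from({ length: machineCount }, () => []);
  specs.forEach((spec, i) => shards[i % machineCount].push(spec));
  return shards;
}
```

Each shard then becomes a separate `cypress run --spec ...` invocation in a CI matrix job, which is roughly what the third-party orchestrators automate for you.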

Where Selenium wins and why it survives

Selenium is 20+ years old and looks it. The API is verbose. The documentation sprawls across multiple projects. Setting up a grid for parallel execution is an infrastructure project in itself. Nobody loves writing Selenium tests.

But Selenium handles things Cypress cannot:

  • Any browser, any language. Java, Python, C#, Ruby, JavaScript, Kotlin. Chrome, Firefox, Safari, Edge, even IE if you have been cursed. A QA team can use whatever language they already know.
  • Multiple tabs and windows. driver.switchTo().window() just works. OAuth flows, popup windows, payment redirects: all testable without workarounds.
  • Cross-origin is not special. Selenium controls the browser from outside. It does not care what domain you navigate to.
  • Mobile testing via Appium. Appium is built on the WebDriver protocol. Skills and patterns transfer directly from Selenium to Appium. Your page object models work in both.
  • Mature ecosystem. Selenium Grid, Docker images, cloud providers (BrowserStack, Sauce Labs, LambdaTest) all support Selenium natively. The infrastructure is battle-tested.
  • Non-browser automation. With Appium's desktop drivers, you can automate Windows and macOS desktop apps using the same WebDriver API. Cypress cannot touch anything outside a browser.
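The multi-window point has a standard shape: record the window handles before the click, click, then switch to whichever handle is new. The set-difference step looks like this (framework-free sketch; in real Selenium code the handle lists come from `driver.getAllWindowHandles()`):

```javascript
// Given the window handles before and after an action, find the handle
// the action opened. This is the bookkeeping behind switching to a
// popup or OAuth window with driver.switchTo().window(...).
function findNewWindowHandle(handlesBefore, handlesAfter) {
  const before = new Set(handlesBefore);
  const fresh = handlesAfter.filter((h) => !before.has(h));
  if (fresh.length !== 1) {
    throw new Error(`expected exactly one new window, found ${fresh.length}`);
  }
  return fresh[0];
}
```

In a real test you capture handles, trigger the popup, wait for the handle count to grow, then pass the result to `driver.switchTo().window(...)` — no workarounds, no application changes.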

We use Selenium on projects with complex auth flows, multi-window interactions, legacy browser requirements, or mixed web-and-mobile testing needs. It is also our choice when the QA team already has Java or Python expertise and there is no budget to retrain.

The pains we live with on Selenium

Selenium's problems are real and we deal with them weekly:

Flaky tests from timing issues. Selenium does not auto-wait. You write explicit waits, implicit waits, fluent waits. You still get StaleElementReferenceException at 2 AM in CI. Every Selenium project accumulates a utility class of retry helpers, and every team writes them slightly differently.

Driver version mismatches. Chrome 124 ships, ChromeDriver 124 is not ready yet, CI breaks. Selenium Manager (added in Selenium 4.6) helps, but we still see this on projects with locked-down CI environments that cannot auto-download drivers.

Verbose test code. A simple "click this button and check the text" test is 15 lines in Selenium and 3 lines in Cypress. Over hundreds of tests, that verbosity adds up. Code reviews take longer. New team members need more ramp-up time.
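The verbosity gap narrows when the boilerplate is pushed into a page object, which is how our Selenium suites stay reviewable. A minimal sketch in plain JavaScript (the `LoginPage` name and the driver interface are illustrative; a stub driver stands in for a real WebDriver session so the example is self-contained):

```javascript
// A page object hides locators and low-level driver calls behind
// intent-level methods, so the test body stays short.
class LoginPage {
  constructor(driver) {
    this.driver = driver;
  }
  async login(user, password) {
    await this.driver.type('#username', user);
    await this.driver.type('#password', password);
    await this.driver.click('#submit');
  }
}

// Stub driver that records calls, standing in for Selenium here.
function makeStubDriver(log) {
  return {
    async type(selector, text) { log.push(`type ${selector} ${text}`); },
    async click(selector) { log.push(`click ${selector}`); },
  };
}
```

With that in place the test itself is a few lines in either framework: construct the page object, call `login`, assert the result. The fifteen lines did not disappear, but they are written once instead of per test.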

Grid management overhead. Running Selenium Grid (even with Docker) is operational work. Someone has to maintain the images, handle node scaling, debug session allocation. Cloud providers solve this but cost money.

No built-in visual feedback. When a Selenium test fails, you get a stack trace. Maybe a screenshot if you configured the teardown to capture one. There is no interactive debugger, no time-travel, no DOM snapshot. You read logs and re-run.

What about Playwright?

We would be dishonest if we did not mention Playwright here. Microsoft's framework has taken over a significant chunk of new projects since 2023. It handles multi-tab, cross-origin, and multiple browsers natively. It auto-waits like Cypress. It supports JavaScript, TypeScript, Python, Java, and C#.

On new projects where the team has no existing framework investment, we now recommend Playwright over both Selenium and Cypress more often than not. But Playwright is not the point of this article, and the reality is that most of our active client projects still run Selenium or Cypress because switching frameworks mid-project rarely makes business sense.

What we built to deal with both

One problem we kept hitting: QA engineers who were strong at manual testing but struggled to write automation code in either framework. We built Flows, a Chrome extension that records browser interactions visually and exports them as executable tests.

Flows does not replace either framework. It gives manual testers a way to create automated tests without writing code, and it gives automation engineers a starting point they can refine. When a recorded flow captures a complex user journey, the engineer can export it and clean it up rather than writing every step from scratch.

We built it because we were tired of the same bottleneck on every project: too many manual test cases, too few automation engineers, and a backlog of "we should automate this" tickets that never got done.

How we decide on each project

Our actual decision process is not complicated:

Pick Cypress when:

  • The app is a JavaScript/TypeScript SPA
  • The team knows JS
  • There are no multi-tab or cross-origin flows
  • No mobile testing requirement
  • The client wants fast CI feedback on a small-to-medium test suite

Pick Selenium when:

  • The team knows Java, Python, or C# and does not want to learn JS
  • The app has multi-window flows, OAuth redirects, or iframe-heavy payment forms
  • Mobile testing is also needed (Appium integration)
  • The client requires Safari or legacy browser coverage
  • There is an existing Selenium suite that works

Consider Playwright when:

  • Starting fresh with no existing framework
  • Need multi-browser, multi-tab, and cross-origin support
  • Team can work in JS/TS, Python, or Java
  • The client is open to a newer tool

Use Flows when:

  • Manual testers need to contribute to automation
  • There is a large backlog of manual test cases to convert
  • The team wants visual test recording regardless of the target framework

Neither framework solves your real problem

The honest answer nobody wants to hear: the framework choice matters less than most teams think. We have seen terrible test suites in Cypress and excellent ones in Selenium. The difference was never the tool. It was whether the team had clear test strategies, maintained their tests, and ran them consistently.

A Cypress suite that nobody maintains after sprint 3 is worse than no automation at all. It gives false confidence. A Selenium suite with proper page objects, good waits, and regular maintenance catches real bugs in production.

Pick the tool that fits your team and your app. Invest the time you save on setup into writing tests that actually matter. If you are spending more time debating frameworks than writing tests, you have already lost.


We write about testing from the perspective of a team that does it for a living across dozens of client projects. More at betterqa.co/blog.
