Conquering Cypress Test Failures: A Comprehensive Guide to Common Errors & Automated Reporting
The build is red. Again.
As a QA engineer or SDET, this is a familiar sight. While failed tests are often seen as roadblocks, they are, in fact, invaluable feedback loops. They tell us where our application breaks, where our assumptions are wrong, or where our tests need improvement. But sifting through dozens, or even hundreds, of failed test logs after a large Cypress run can be a daunting, time-consuming, and often frustrating task.
“Was that a flaky element visibility issue, or did the API truly return a 500 error?” “Is this a critical login bug that blocks everything, or just a minor UI glitch?”
Without structured analysis, these questions lead to manual digging, delayed debugging, and slower releases.
This article will demystify the most common Cypress test failure reasons (errors and exceptions), providing clear explanations and actionable insights. More importantly, we’ll introduce a powerful Node.js script, `generate_failed_tests_report.js`, that automates the analysis of your Cypress Mochawesome reports, transforming chaotic red logs into organized, prioritized, and actionable CSV reports.
Let’s turn that frustrating red into a clear path forward!
The Unavoidable Truth: Understanding Common Cypress Test Failures
Every Cypress test failure, whether an `AssertionError`, `TypeError`, or `CypressError`, tells a story. Learning to read these stories efficiently is paramount for robust test automation.
We’ve categorized and prioritized these failures just as our script does, to give you a framework for quick triage.
🔴 Critical Priority Failures: Stop Everything!
These errors typically indicate fundamental issues that prevent significant portions of your test suite, or even the entire application, from functioning. Address these first.
1. **TypeError** / **TypeError (Runtime)**
- What it is: A JavaScript error that occurs when an operation is performed on a value that is not of the expected type (e.g., trying to call a method on `undefined` or `null`).
- Common Causes in Cypress:
  - Element Not Found (Indirectly): You try to chain a command like `.click()` or `.type()` onto a `cy.get()` that yielded no element. While Cypress's retry mechanism is robust, if the element genuinely doesn't appear, the subsequent chained command on a non-existent element might throw a `TypeError`.
  - Incorrect Page Object Model (POM) Usage: Calling a method on an imported page object that hasn’t been properly initialized or defined.
  - Custom Command or Utility Function Issues: Errors within your custom Cypress commands or helper functions, indicating a variable is not what it’s expected to be.
- How to Debug/Fix:
  - Verify element existence and visibility before interaction (Cypress often handles this, but an explicit `.should('exist')` or `.should('be.visible')` can clarify intention).
  - Check for correct imports and instantiation of your POMs or utility classes.
  - Inspect `err.estack` for the exact line number in your test code or custom command.
- Example Messages: `TypeError: cy.verifyAnyCellInTable is not a function`, `someVariable.someMethod is not a function`
2. **ReferenceError** / **ReferenceError (Runtime)**
- What it is: A JavaScript error that occurs when a variable or function is referenced that has not been declared or is out of scope.
- Common Causes in Cypress:
  - Undeclared Variables: Using a variable name that hasn’t been defined with `const`, `let`, or `var`.
  - Typographical Errors: Simple typos in variable or function names.
  - Missing Imports: Forgetting to `import` a required module or function from another file.
- How to Debug/Fix:
  - Double-check variable and function spellings.
  - Ensure all necessary modules are correctly imported at the top of your test file or support file.
  - Check the scope of your variables.
- Example Messages: `ReferenceError: trimText is not defined`, `myFunction is not defined`
3. **Before Each Hook Failure**
- What it is: An error occurring within a `before()` or `beforeEach()` Cypress hook. Since these hooks are crucial for setting up the test environment (e.g., logging in, visiting a page), their failure often prevents all subsequent tests in that suite from running.
- Common Causes in Cypress:
  - Login Failures: Incorrect credentials, login form changes, API authentication issues.
  - Initial Page Load Issues: `cy.visit()` failing due to an incorrect URL, the application not running, or network problems.
  - Data Setup Errors: Problems with `cy.fixture()` loading test data, or `cy.intercept()` not intercepting expected requests.
  - Critical Element Missing: An element that’s expected to be present on page load (e.g., a dashboard header) is not found.
- How to Debug/Fix:
  - Isolate the hook and debug it independently.
  - Verify login credentials and flow manually.
  - Ensure the application is running and accessible at the `baseUrl` or `cy.visit()` URL.
  - Check network requests (using Cypress’s network tab or `cy.intercept()`) during the hook. A defensive hook sketch follows this section.
- Example Message: `Because this error occurred during a 'before each' hook we are skipping the remaining tests in the current suite.`
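Because hook failures take down entire suites, it pays to make setup fail loudly and early. Here is a minimal `beforeEach()` sketch; the `/dashboard` route and the `data-testid` selector are hypothetical placeholders for your own app:

```js
// Minimal defensive beforeEach() sketch. '/dashboard' and the
// 'dashboard-header' selector are hypothetical — substitute your own.
beforeEach(() => {
  // If the app is unreachable, fail here with a clear cy.visit() error
  // instead of letting every test in the suite time out individually.
  cy.visit('/dashboard');

  // Assert that setup actually succeeded before any test runs, so a
  // broken login or render surfaces as a hook failure, not as a
  // downstream TypeError on a missing element.
  cy.get('[data-testid="dashboard-header"]', { timeout: 10000 })
    .should('be.visible');
});
```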
🟠 High Priority Failures: Application Under Test isn’t Behaving!
These are often direct indicators of bugs or regressions in your application’s functionality or UI. They require immediate attention as they represent user-facing issues.
4. **AssertionError**
- What it is: The most common type of failure, indicating that an `expect()` assertion (e.g., `.should()`) failed. This means the actual state of the application did not match the expected state. Cypress's built-in retry-ability means these assertions are attempted multiple times before failing, usually culminating in a "Timed out retrying after Xms" message.
- Common Causes in Cypress:
  - Element Not Found (**Expected to find element: X, but never found it.**): The DOM element specified by your selector (`tbody tr`, `[data-testid=table_button_edit]`, `button.export`, etc.) was not present on the page within the given timeout. This can be due to:
    - Element truly missing or removed.
    - Incorrect CSS selector.
    - Element not yet rendered by the application (race condition).
  - Content/Value Mismatch (**expected 'X' to be 'Y'**, **expected 'X' to match 'Y'**, **expected 500 to equal 200**): The text content, attribute value, or API response status is not what the test expected.
  - Visibility Issues (**expected 'X' to be 'visible'**): An element exists in the DOM but is not visually displayed (e.g., `display: none`, `opacity: 0`, off-screen, or covered by another element).
- How to Debug/Fix:
  - For Element Not Found:
    - Inspect the DOM for the correct selector.
    - Add `cy.wait()` for an API call or `cy.get().should('be.visible')` before interaction to allow elements to render (see the sketch after this section).
    - Confirm the application data or state that dictates element presence.
  - For Content Mismatch: Verify the expected content/value against the actual application behavior.
  - For Visibility: Check CSS properties, overlaying elements, or asynchronous rendering.
- Example Messages:
  - `AssertionError: Timed out retrying after 30000ms: Expected to find element: tbody tr, but never found it.`
  - `AssertionError: expected 'https://auth.microtecstage.com/Account/Login?returnUrl=...`
  - `AssertionError: expected '<label.form-label.paragraph_b18>' to be 'visible'`
5. **UI Visibility Error**
- What it is: A custom error (likely thrown by your own utility functions or assertions) indicating that a specific UI component, like a table, was expected to be visible but was not.
- Common Causes in Cypress: Similar to `AssertionError` visibility issues, but specific to components you've encapsulated with custom checks.
- How to Debug/Fix: Investigate the specific component’s rendering and styling.
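Many of these High-priority failures share one cure: wait for the data, then assert, then interact. A minimal sketch reusing the selectors from the example messages above; the `/api/rows` request and the `/table` page are assumptions, not part of the original script:

```js
// Sketch only: '/api/rows' and '/table' are hypothetical — adjust to
// the request and page your table actually depends on.
cy.intercept('GET', '/api/rows').as('rows');
cy.visit('/table');

// Wait for the data the rows depend on before asserting on the DOM.
cy.wait('@rows');

// Assert presence and visibility explicitly, then interact. Each
// .should() retries until its timeout, so a failure produces a
// precise AssertionError rather than a vague downstream error.
cy.get('tbody tr').should('have.length.greaterThan', 0);
cy.get('[data-testid=table_button_edit]')
  .first()
  .should('be.visible')
  .click();
```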
🟡 Medium Priority Failures: Interactivity & Timing Issues
These errors often highlight issues with how Cypress interacts with a dynamic application, or subtle timing problems. They can be indicative of flaky tests and may require adjustments to test logic or waiting strategies.
6. **CypressError**
- What it is: A general error thrown directly by Cypress, indicating an issue with a Cypress command or its internal state.
- Common Causes & Specific Details:
  - **Cypress Visit Load Failure** (`cy.visit()` failed trying to load): The browser couldn't load the specified URL. This might mean the app is down, the URL is wrong, or there is a severe network issue.
  - **Cypress Click Stale Element** (`cy.click()` failed because the page updated): An element that Cypress targeted for clicking was re-rendered or removed from the DOM between when Cypress found it and when it tried to click it. This is a common race condition in highly dynamic SPAs.
  - **Cypress Click Covered Element** (`cy.click()` failed because this element is being covered by another element): Cypress detects that another DOM element is obstructing the element it's trying to click, mimicking real user behavior (users can't click through overlays).
  - **Cypress Wait Request Timeout** (`cy.wait()` timed out waiting... No request ever occurred.): A `cy.wait()` alias was set for a specific network request, but the application never sent that request within the timeout. This often means the action that was supposed to trigger the request failed, or the app's network logic changed.
  - **Cypress Match Argument Invalid** (`match` requires its argument be a `RegExp`): Occurs when a command or assertion expecting a regular expression (e.g., `cy.contains(/text/i)`) receives an invalid argument.
  - **Cypress Scroll Multi-Element** (`cy.scrollIntoView()` can only be used to scroll to 1 element): Your selector for `cy.scrollIntoView()` matched multiple elements, but the command expects a single element to scroll to.
- How to Debug/Fix (a sketch follows this list):
  - For `cy.visit()`: Check application server status, network connectivity, and `baseUrl` configuration.
  - For `cy.click()` (Stale/Covered):
    - Add more specific waits before interaction (e.g., `cy.get('element').should('be.visible').click()`).
    - Ensure modals or loading spinners are dismissed before attempting clicks.
    - Rethink timing if DOM changes are happening rapidly.
  - For `cy.wait()` timeouts: Verify the network request is indeed sent by the application. Add `cy.intercept()` logs to confirm the request pattern.
  - For argument errors: Correct the syntax of your arguments.
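A minimal sketch combining the covered-element and `cy.wait()` fixes; the `.loading-spinner` selector and the `/api/export` route are hypothetical stand-ins:

```js
// Register the intercept *before* the action that should trigger it,
// and log matches to confirm the request pattern is correct.
// '/api/export' and '.loading-spinner' are hypothetical.
cy.intercept('POST', '/api/export', (req) => {
  console.log('export request fired:', req.url); // visible in the browser console
}).as('export');

// Guard against covered/stale elements: wait for overlays to clear
// and for the target to be visible before clicking.
cy.get('.loading-spinner').should('not.exist');
cy.get('button.export').should('be.visible').click();

// Fails with "No request ever occurred" if the click never fired it.
cy.wait('@export');
```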
⚫ Other/Unknown Priority Failures
- **Other/Unknown** / **Unclassified Error Detail**:
  - Explanation: This category catches any error message that doesn’t fit the predefined classification rules. It’s a catch-all for less common issues or errors you haven’t yet categorized.
  - How to Debug/Fix: These errors require manual inspection of the `err_message` and `err_estack` to understand their nature. If a new pattern emerges, you can add it to the `ERROR_CATEGORIZATION_RULES` in the script to categorize it automatically in future runs.
Beyond the Red: Introducing the Cypress Failed Tests Report Generator
Understanding errors is one thing; efficiently analyzing them across hundreds of tests is another. Manually sifting through Mochawesome JSON files is tedious and prone to missing critical insights. This is where the `generate_failed_tests_report.js` script becomes an invaluable asset for any Cypress automation suite.
This Node.js script automates the process of parsing your merged Mochawesome JSON reports and outputs a clean, structured CSV file.
The Objective of `generate_failed_tests_report.js`
The script’s core purpose is to provide an actionable, summarized view of failed tests. Instead of just knowing a test failed, it tells you:
- Which test file and suite failed.
- What type of error occurred (`FailingCategory`).
- How critical the error is (`Priority`).
- What specific detail about the error is most relevant (`FailingCategoryDetail`).
- The raw error message and stack trace for in-depth debugging.
This transforms scattered failure data into a focused report that empowers faster triage and more effective debugging cycles.
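For illustration, a single row of the generated CSV might look like the following. The values are invented, the stack-trace column is omitted for brevity, and the exact column names for the spec file and test title depend on how the script names its fields, so treat those two headers as placeholders:

```csv
specFile,testTitle,FailingCategory,Priority,FailingCategoryDetail,err_message
cypress/e2e/orders.cy.js,edits an order row,AssertionError,High,Element not found: tbody tr,"AssertionError: Timed out retrying after 30000ms: Expected to find element: tbody tr, but never found it."
```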
How It Works: A Deep Dive into the Script’s Anatomy
The `generate_failed_tests_report.js` script is built on a logical flow designed for efficiency and customization:
1. Reading the Merged Report (`INPUT_JSON_FILE`): The script starts by loading your `mergedReport.json` file. This file is typically generated by merging individual JSON reports created by Mochawesome after each Cypress spec run. `fs-extra` is used for robust file system operations.
2. Recursive Test Traversal: Cypress tests can be nested deeply within `describe` blocks. The script smartly navigates this hierarchy. It iterates through the top-level `results` (each representing a spec file) and then recursively dives into any nested `suites` arrays, ensuring every `test` object is inspected.
3. The Power of `ERROR_CATEGORIZATION_RULES`: This constant array is the heart of the script's intelligence. It's a collection of rules, each defined by:
  - `regex`: A regular expression to broadly match a type of error message (e.g., `^AssertionError:`).
  - `category`: The high-level classification (e.g., `AssertionError`).
  - `priority`: The assigned severity (e.g., `High`).
  - `details`: A nested array of more granular rules, each with its own `regex` and `detailMessage`. This is where the magic happens for `FailingCategoryDetail`.
  - The rules are ordered by precedence: more specific or critical errors are listed first (a condensed sketch follows).
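A condensed sketch of what such a rules array can look like; the patterns here are illustrative, not the script's full list:

```js
// Illustrative excerpt — the real script defines many more rules, and
// order matters: the first matching top-level regex wins.
const ERROR_CATEGORIZATION_RULES = [
  {
    regex: /^TypeError:/,
    category: 'TypeError',
    priority: 'Critical',
    details: [
      {
        // Capture the missing function name for the detail message.
        regex: /(\S+) is not a function/,
        detailMessage: 'Not a function: $1',
      },
    ],
  },
  {
    regex: /^AssertionError:/,
    category: 'AssertionError',
    priority: 'High',
    details: [
      {
        // Capture the selector that was never found.
        regex: /Expected to find element: (.+?), but never found it/,
        detailMessage: 'Element not found: $1',
      },
    ],
  },
];
```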
4. `getErrorCategoryAndPriority(errorMessage)` Function: This pivotal function is responsible for the categorization. When a failed test's error message is passed to it:
  - It first cleans the message (removes ANSI codes, trims whitespace).
  - It then iterates through `ERROR_CATEGORIZATION_RULES`. The first top-level `regex` that matches the error message determines the `category` and `priority`.
  - Once a category rule is matched, it then iterates through that rule’s `details` array. Again, the first `detail.regex` that matches the error message determines the specific `detailMessage`.
  - Dynamic Detail Extraction (Capture Groups): Crucially, if the `detail.regex` contains capture groups (parts in parentheses, like `([^…]+)`), the script dynamically substitutes these captured values into the `detailMessage` (e.g., `$1`, `$2`). This is how it can extract the exact selector that wasn't found or the specific route that timed out! A sketch of this function follows.
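Putting those steps together, a minimal sketch of the matching logic, assuming the rules shape shown earlier:

```js
// Sketch of the categorization flow: clean → match category → match detail.
function getErrorCategoryAndPriority(errorMessage) {
  // Strip ANSI color codes and trim whitespace.
  const clean = errorMessage.replace(/\u001b\[[0-9;]*m/g, '').trim();

  for (const rule of ERROR_CATEGORIZATION_RULES) {
    if (!rule.regex.test(clean)) continue; // first top-level match wins

    let detailMessage = 'Unclassified Error Detail';
    for (const detail of rule.details || []) {
      const match = clean.match(detail.regex);
      if (match) {
        // Substitute $1, $2, ... with the regex capture groups.
        detailMessage = detail.detailMessage.replace(
          /\$(\d+)/g,
          (_, n) => match[Number(n)] || ''
        );
        break; // first detail match wins, too
      }
    }
    return { category: rule.category, priority: rule.priority, detailMessage };
  }

  return {
    category: 'Other/Unknown',
    priority: 'Unknown',
    detailMessage: 'Unclassified Error Detail',
  };
}
```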
5. Data Structuring for CSV: For every failed test, the script consolidates all the extracted information (file paths, titles, categories, priorities, raw error data) into a clean JavaScript object. These objects populate an array called `reportData`.
6. CSV Conversion and Output: Finally, the `papaparse` library takes the `reportData` array and converts it into a well-formatted CSV string, complete with a header row. This string is then written to `failed_tests_report.csv`, ready for review in any spreadsheet software.
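This last step is only a few lines. A minimal sketch using the two libraries' documented APIs, where `reportData` is the array built in step 5:

```js
import fs from 'fs-extra';
import Papa from 'papaparse';

// Papa.unparse() turns an array of flat objects into a CSV string,
// emitting the object keys as the header row.
const csv = Papa.unparse(reportData);

// outputFile() creates the file (and any missing directories) for us.
await fs.outputFile('failed_tests_report.csv', csv);
console.log(`Wrote ${reportData.length} failed test(s) to failed_tests_report.csv`);
```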
Key Benefits of Using the Script:
- Actionable Insights: Move beyond just “fail/pass” to “what failed, how critical, and why specifically.”
- Time-Saving: Automate what would be hours of manual log analysis.
- Consistent Reporting: Generate uniform reports for team communication and stakeholder updates.
- Customizable: Easily extend `ERROR_CATEGORIZATION_RULES` to suit your project's specific error patterns and reporting needs.
Getting Started with Your Automated Report Generator
Ready to transform your Cypress test analysis?
1. Prerequisites:
  - Ensure Node.js (v14+) is installed.
  - Install the required packages: `npm install --save-dev fs-extra papaparse`
  - Important: Add `"type": "module"` to your `package.json` file to enable ES module support and prevent Node.js warnings:
```json
{
  "name": "your-cypress-project",
  "version": "1.0.0",
  "type": "module",
  "devDependencies": {
    "cypress": "^X.Y.Z",
    "fs-extra": "^A.B.C",
    "papaparse": "^D.E.F"
  }
}
```
2. Prepare Your Report:
  - Execute your Cypress tests to generate Mochawesome JSON reports.
  - Merge these individual JSON reports into a single file named `mergedReport.json` in the same directory as your script. Tools like `mochawesome-merge` are commonly used for this; see the example below.
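For example, with `mochawesome-merge` (adjust the glob to wherever your reporter writes its JSON output):

```sh
npx mochawesome-merge "cypress/reports/*.json" > mergedReport.json
```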
3. Run the Script:
  - Place the `generate_failed_tests_report.js` file in your project directory (e.g., in the root or a `scripts` folder).
  - Open your terminal, navigate to that directory, and run:

```sh
node generate_failed_tests_report.js
```
4. Review the Output:
  - A `failed_tests_report.csv` file will be generated in the same directory. Open it with your preferred spreadsheet software (Excel, Google Sheets, etc.) to explore your categorized and prioritized test failures.
Customizing Error Categorization
The true power of this script lies in its flexibility. You can continuously refine the `ERROR_CATEGORIZATION_RULES` to better suit the specific error patterns of your application and test suite.
- To add new categories or details:
  - Open `generate_failed_tests_report.js`.
  - Modify the `ERROR_CATEGORIZATION_RULES` array.
  - Add new objects for broad categories, or nested objects within `details` for specific error patterns (an example follows this list).
  - Remember to order your rules from most specific/critical to more general, as the first match wins.
  - Utilize regex capture groups `()` and `$1`, `$2` in your `detailMessage` for dynamic, highly informative output.
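As an example, here is a hypothetical rule for `cy.request()` failures. The pattern and wording are illustrative, not part of the original script; it would be placed above the generic `CypressError` rule so it matches first:

```js
// Hypothetical new entry for ERROR_CATEGORIZATION_RULES — insert it
// above the generic CypressError rule so it takes precedence.
{
  regex: /^CypressError: .*cy\.request\(\) failed/,
  category: 'API Request Failure',
  priority: 'High',
  details: [
    {
      // Capture the failing URL for FailingCategoryDetail.
      regex: /failed on:\s*(\S+)/,
      detailMessage: 'cy.request() failed for URL: $1',
    },
  ],
},
```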
By investing in this automated analysis, you’re not just fixing bugs faster; you’re building a smarter, more resilient test automation strategy. Embrace your red builds, understand their story, and accelerate your path to a greener, more stable product.
By Mohamed Said Ibrahim on June 14, 2025.