Mastering Cypress Retries: How to Efficiently Re-Run Only Failed Tests with Mochawesome & Dynamic Configuration
Cypress, Failed Tests, Test Retries, Mochawesome, CI/CD, Test Automation, Node.js, @cypress/grep, ENAMETOOLONG, E2E Testing, Quality Assurance, Test Optimization
Introduction: The Frustration of Flaky Tests
In the world of End-to-End (E2E) testing, nothing is more frustrating than a flaky test. One moment it passes, the next it fails, often due to transient environmental issues, network latency, or asynchronous timing. Rerunning your entire Cypress test suite every time a few tests fail is a significant time sink, especially in large projects and demanding CI/CD pipelines.
What if you could pinpoint exactly which tests failed, gather their full titles and original spec files, and then execute only those specific tests? Imagine the time savings, the faster feedback loops, and the sheer efficiency!
This article will guide you through building a robust, automated solution using Cypress, Mochawesome reports, Node.js scripting, and the powerful @cypress/grep plugin. We'll cover:
- Generating Rich Reports: Leveraging Mochawesome to get detailed test results.
- Extracting Failed Test Data: A Node.js script to parse Mochawesome reports and collect failed test titles and their corresponding spec files.
- Dynamic Cypress Execution: Modifying Cypress's configuration on-the-fly to precisely target only the collected failed tests, bypassing command-line length limitations (e.g., ENAMETOOLONG errors).
- Cache Clearing & Data Cleaning: Best practices for maintaining a clean test environment.
Let’s transform your Cypress test recovery strategy from a time-consuming chore into a streamlined, automated process.
Step 1: Setting Up Mochawesome for Comprehensive Reporting
Mochawesome is a beautiful, interactive, and highly customizable reporter for Mocha (and thus Cypress, which uses Mocha under the hood). It’s essential for getting the detailed data we need to identify failed tests.
1. Install Mochawesome and its Cypress Adapter:
npm install --save-dev mochawesome mochawesome-merge mochawesome-report-generator
# or
yarn add --dev mochawesome mochawesome-merge mochawesome-report-generator
2. Configure **cypress.config.ts** for Mochawesome:
Your cypress.config.ts (or .js) needs to tell Cypress to use Mochawesome and to merge the reports when you run tests across multiple spec files.
// cypress.config.ts
import { defineConfig } from 'cypress';
export default defineConfig({
e2e: {
setupNodeEvents(on, config) {
// Merge the individual Mochawesome JSON reports after the run
on('after:run', async () => {
const { merge } = require('mochawesome-merge');
const { create } = require('mochawesome-report-generator');
// Collect every per-spec JSON report written by the reporter
const mergedReport = await merge({
files: ['cypress/reports/mochawesome-json/*.json'],
});
await create(mergedReport, {
reportDir: 'cypress/reports/mochawesome', // Output directory for the HTML report
reportFilename: 'combined-report', // Name of the HTML report file
overwrite: true,
saveJson: true, // Keep the merged JSON file
});
});
return config;
},
},
reporter: 'mochawesome',
reporterOptions: {
reportDir: 'cypress/reports/mochawesome-json', // Where individual JSON reports go
overwrite: false, // Don't overwrite individual JSONs
html: false, // Don't generate HTML for individual JSONs
json: true, // Generate JSON for individual runs
},
});
Now, after each `cypress run`, you'll find individual JSON reports in `cypress/reports/mochawesome-json` and a combined HTML report with its corresponding JSON in `cypress/reports/mochawesome`.
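If you want to script against these reports, each JSON file exposes a `stats` block with counts you can inspect before deciding whether a re-run is even needed. A minimal sketch, where the `report` object is a hand-built stand-in for a real file from `cypress/reports/mochawesome-json`:

```javascript
// Sketch: decide whether a re-run is warranted from a Mochawesome stats block.
// `report` below is a hand-built stand-in for a real mochawesome JSON file.
function needsRerun(report) {
  return report.stats.failures > 0;
}

function summarize(report) {
  const { tests, passes, failures } = report.stats;
  return `${passes}/${tests} passed, ${failures} failed`;
}

const report = { stats: { tests: 5, passes: 3, failures: 2, duration: 4200 } };

console.log(summarize(report));  // "3/5 passed, 2 failed"
console.log(needsRerun(report)); // true
```

In a real pipeline you would `JSON.parse(fs.readFileSync(...))` the combined report instead of hard-coding the object.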
Step 2: Extracting Failed Test Data from Mochawesome Reports
This is where the magic begins. We'll write a Node.js script to read the combined Mochawesome JSON report, identify failed tests, and extract their full titles and spec file paths. This data will be saved to a `failed-tests.json` file.
Create **extract-failed-tests.js**:
// extract-failed-tests.js
const fs = require('fs');
const path = require('path');
const combinedReportPath = path.join(__dirname, 'cypress', 'reports', 'mochawesome', 'combined-report.json');
const outputFailedTestsPath = path.join(__dirname, 'cypress', 'temp', 'failed-tests.json');
const failedTestsData = {
failedTitles: [],
specsToRun: [],
};
try {
if (!fs.existsSync(combinedReportPath)) {
console.warn(`Warning: Mochawesome combined report not found at ${combinedReportPath}. No failed tests extracted.`);
process.exit(0);
}
const report = JSON.parse(fs.readFileSync(combinedReportPath, 'utf8'));
const specsSet = new Set(); // Use a Set to store unique spec paths
report.results.forEach(suiteResult => {
suiteResult.suites.forEach(suite => {
suite.tests.forEach(test => {
if (test.fail) {
let fullTitle = test.fullTitle;
// Collapse runs of whitespace into single spaces and trim the ends
fullTitle = fullTitle.replace(/\s+/g, ' ').trim();
// Store the full title
failedTestsData.failedTitles.push(fullTitle);
// Extract the spec path from the suite file property
if (suite.file) {
// Ensure the path is relative and uses forward slashes for Cypress
const relativeSpecPath = path.relative(process.cwd(), suite.file).replace(/\\/g, '/');
specsSet.add(relativeSpecPath);
}
}
});
});
});
failedTestsData.specsToRun = Array.from(specsSet);
// Ensure the output directory exists
fs.mkdirSync(path.dirname(outputFailedTestsPath), { recursive: true });
fs.writeFileSync(outputFailedTestsPath, JSON.stringify(failedTestsData, null, 2), 'utf8');
console.log(`Successfully extracted ${failedTestsData.failedTitles.length} failed tests.`);
console.log(`Failed tests manifest saved to: ${outputFailedTestsPath}`);
} catch (error) {
console.error(`Error extracting failed tests: ${error.message}`);
process.exit(1);
}
How it works:
- It reads the `combined-report.json` generated by mochawesome-merge.
- It iterates through all test results and identifies `test.fail` cases.
- For each failed test, it captures the `test.fullTitle` (which includes parent `describe` blocks) and the `suite.file` (the spec path).
- It stores unique spec paths in a `Set` to avoid duplicates and writes the final `failed-tests.json`.
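The extraction logic can be exercised in isolation against an in-memory report fragment. A minimal sketch (field names mirror the Mochawesome JSON shape the script reads; the sample data is invented):

```javascript
// Sketch: replicate the script's extraction logic on an in-memory fragment.
function extractFailed(report) {
  const failedTitles = [];
  const specs = new Set(); // Set de-duplicates spec paths
  report.results.forEach(result => {
    result.suites.forEach(suite => {
      suite.tests.forEach(test => {
        if (test.fail) {
          // Collapse whitespace runs, as extract-failed-tests.js does
          failedTitles.push(test.fullTitle.replace(/\s+/g, ' ').trim());
          if (suite.file) specs.add(suite.file.replace(/\\/g, '/'));
        }
      });
    });
  });
  return { failedTitles, specsToRun: Array.from(specs) };
}

const report = {
  results: [{
    suites: [{
      file: 'cypress/e2e/login.cy.ts',
      tests: [
        { fail: true, fullTitle: 'Login   shows an error  on bad password' },
        { fail: false, fullTitle: 'Login succeeds' },
        { fail: true, fullTitle: 'Login locks the account after 3 tries' },
      ],
    }],
  }],
};

const out = extractFailed(report);
console.log(out.failedTitles[0]); // "Login shows an error on bad password"
console.log(out.specsToRun);      // ["cypress/e2e/login.cy.ts"]
```

Note that both failing tests map to one spec file, which is exactly why the `Set` matters: we want each spec listed once no matter how many of its tests failed.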
Step 3: Dynamic Cypress Re-Run with @cypress/grep and Configuration Injection
This is the core of our solution: efficiently re-running only the identified failed tests while avoiding the ENAMETOOLONG error. We'll use @cypress/grep for intelligent test filtering and dynamically inject its parameters directly into cypress.config.ts.
1. Install and Configure **@cypress/grep**:
npm install -D @cypress/grep
# or
yarn add -D @cypress/grep
**cypress.config.ts** modifications (crucial for dynamic injection):
// cypress.config.ts
import { defineConfig } from 'cypress';
import registerCypressGrep from '@cypress/grep/src/plugin'; // Note: /src/plugin for TS
export default defineConfig({
e2e: {
setupNodeEvents(on, config) {
registerCypressGrep(config); // Register the plugin
// --- START_DYNAMIC_CYPRESS_CONFIG_INJECTION ---
// This section will be managed by run-failed-tests.js
// DO NOT MANUALLY EDIT BETWEEN THESE MARKERS!
// --- END_DYNAMIC_CYPRESS_CONFIG_INJECTION ---
// Override specPattern if dynamically provided by our script
if (config.env.CYPRESS_GREP_SPEC_PATTERN) {
config.specPattern = config.env.CYPRESS_GREP_SPEC_PATTERN;
console.log(`Cypress config: Using dynamic specPattern: ${config.specPattern}`);
}
// Important: keep the Mochawesome merge step from Step 1 here as well
on('after:run', async () => {
const { merge } = require('mochawesome-merge');
const { create } = require('mochawesome-report-generator');
const mergedReport = await merge({
files: ['cypress/reports/mochawesome-json/*.json'],
});
await create(mergedReport, {
reportDir: 'cypress/reports/mochawesome',
reportFilename: 'combined-report',
overwrite: true,
saveJson: true,
});
});
return config;
},
specPattern: 'cypress/e2e/**/*.cy.{js,jsx,ts,tsx}', // Default spec pattern
},
reporter: 'mochawesome',
reporterOptions: {
reportDir: 'cypress/reports/mochawesome-json',
overwrite: false,
html: false,
json: true,
},
});
**cypress/support/e2e.ts** (required for **@cypress/grep** to work in the browser):
// cypress/support/e2e.ts
import registerCypressGrep from '@cypress/grep';
registerCypressGrep(); // Register the grep feature in the browser
// You can also add other global commands or setups here
// import './commands';
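For intuition about what we'll inject later: @cypress/grep treats `;` in the grep string as a separator between alternative titles, and a test runs when its full title contains any of them. A simplified model of that matching (the real plugin also handles tags, inversion, and spec filtering):

```javascript
// Simplified model of @cypress/grep title matching: split on ';' and
// keep a test if its full title contains any grep part as a substring.
function titleMatches(grep, fullTitle) {
  return grep
    .split(';')
    .map(part => part.trim())
    .filter(Boolean)
    .some(part => fullTitle.includes(part));
}

const grep = 'Login shows an error;Cart updates the total';

console.log(titleMatches(grep, 'Login shows an error on bad password')); // true
console.log(titleMatches(grep, 'Checkout applies a coupon'));            // false
```

This is why joining the failed titles with `;` (as our re-run script does below) selects exactly the failed tests and nothing else, provided titles are reasonably unique.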
2. Create **run-failed-tests.js**:
This script reads `failed-tests.json`, injects the filtering data into `cypress.config.ts`, runs Cypress, and then restores `cypress.config.ts` to its original state.
// run-failed-tests.js
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
const failedTestsFilePath = path.join(__dirname, 'cypress', 'temp', 'failed-tests.json');
const cypressConfigPath = path.join(__dirname, 'cypress.config.ts'); // Or .js if applicable
let originalCypressConfigContent = ''; // Stores content to restore
function deleteContentBetweenMarkers(fileContent, startMarker, endMarker) {
const startIndex = fileContent.indexOf(startMarker);
const endIndex = fileContent.indexOf(endMarker);
if (startIndex === -1 || endIndex === -1 || startIndex >= endIndex) {
console.warn(`Markers not found or out of order for cleanup. Proceeding without specific content deletion.`);
return fileContent; // Return original content if markers are missing
}
return fileContent.substring(0, startIndex + startMarker.length) +
'\n' + // Keep the markers on separate lines
fileContent.substring(endIndex);
}
try {
// Read failed tests data
const failedTestsData = JSON.parse(fs.readFileSync(failedTestsFilePath, 'utf8'));
const failedTitles = failedTestsData.failedTitles;
const specsToRun = failedTestsData.specsToRun;
if (!failedTitles || failedTitles.length === 0) {
console.log('No failed test titles found in failed-tests.json. Exiting.');
process.exit(0);
}
// Prepare values for injection
const grepTitles = failedTitles.join(';');
const dynamicSpecPattern = specsToRun.map(p => p.replace(/\\/g, '/')); // Normalize path separators
// Read original config content
originalCypressConfigContent = fs.readFileSync(cypressConfigPath, 'utf8');
// Prepare content to inject between markers
let injectedContent = `
// Injected by run-failed-tests.js
config.env.grep = ${JSON.stringify(grepTitles)};
config.env.grepFilterSpecs = true; // Optimize grep filtering
`;
if (dynamicSpecPattern.length > 0) {
injectedContent += `
config.env.CYPRESS_GREP_SPEC_PATTERN = ${JSON.stringify(dynamicSpecPattern)};
`;
} else {
// If no specific specs, ensure this env var is not set to default to cypress.config.ts's specPattern
injectedContent += `
delete config.env.CYPRESS_GREP_SPEC_PATTERN;
`;
}
// Inject content into config
const startMarker = '// --- START_DYNAMIC_CYPRESS_CONFIG_INJECTION ---';
const endMarker = '// --- END_DYNAMIC_CYPRESS_CONFIG_INJECTION ---';
const startIndex = originalCypressConfigContent.indexOf(startMarker);
const endIndex = originalCypressConfigContent.indexOf(endMarker);
if (startIndex === -1 || endIndex === -1 || startIndex >= endIndex) {
console.error(`Error: Dynamic injection markers not found in '${cypressConfigPath}'.`);
console.error(`Please ensure your cypress.config.ts contains '${startMarker}' and '${endMarker}' within setupNodeEvents.`);
process.exit(1);
}
const beforeInjection = originalCypressConfigContent.substring(0, startIndex + startMarker.length);
const afterInjection = originalCypressConfigContent.substring(endIndex);
const modifiedConfigContent = beforeInjection + injectedContent + afterInjection;
fs.writeFileSync(cypressConfigPath, modifiedConfigContent, 'utf8');
console.log('Temporarily updated cypress.config.ts with test filters.');
// Run Cypress
console.log(`Executing Cypress re-run...`);
execSync(`npx cypress run`, { stdio: 'inherit' }); // No --spec or --env needed on CLI
console.log('\nCypress re-run of failed tests completed.');
} catch (error) {
console.error(`Error during failed test re-run: ${error.message}`);
process.exit(1);
} finally {
// Restore the untouched config content (CRITICAL for a clean state).
// Writing back the pre-injection content avoids relying on variables
// scoped inside the try block above.
if (originalCypressConfigContent) {
console.log('Restoring original cypress.config.ts...');
fs.writeFileSync(cypressConfigPath, originalCypressConfigContent, 'utf8');
console.log('Restored cypress.config.ts to clean state.');
}
}
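Because run-failed-tests.js rewrites a file you keep under version control, it's worth sanity-checking the marker logic in isolation. A minimal sketch of the inject-then-strip round trip (helper names here are illustrative, not part of the script above):

```javascript
// Sketch: verify that injecting between markers and stripping again
// restores the original content exactly. Helper names are illustrative.
const START = '// --- START_DYNAMIC_CYPRESS_CONFIG_INJECTION ---';
const END = '// --- END_DYNAMIC_CYPRESS_CONFIG_INJECTION ---';

function inject(content, payload) {
  const s = content.indexOf(START) + START.length;
  const e = content.indexOf(END);
  return content.slice(0, s) + '\n' + payload + '\n' + content.slice(e);
}

function strip(content) {
  const s = content.indexOf(START) + START.length;
  const e = content.indexOf(END);
  return content.slice(0, s) + '\n' + content.slice(e);
}

const original = `before\n${START}\n${END}\nafter`;
const injected = inject(original, 'config.env.grep = "Login fails";');

console.log(injected.includes('config.env.grep')); // true
console.log(strip(injected) === original);         // true
```

If the round trip ever fails to restore the original byte-for-byte, the markers have been edited by hand, which is exactly what the "DO NOT MANUALLY EDIT" comment guards against.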
Step 4: Cache Clearing and Data Cleaning (Best Practices)
Maintaining a clean environment is crucial for reliable test runs.
1. Cypress Cache Clearing:
Cypress maintains a cache of downloaded binaries. Clearing it can resolve issues, especially after updates or corruption.
npx cypress cache clear
2. Data Cleaning (Application Under Test):
This is highly application-specific, but generally, before re-running failed tests, you want your application’s state to be as clean and consistent as possible. This often involves:
- Database Seeding/Truncation: Resetting your database to a known state.
- API Calls: Using API calls to delete or reset data created by previous tests.
- UI Resets: Ensuring your application’s UI is back to a default state (e.g., logging out, clearing local storage).
Integrate these cleaning steps into your package.json
scripts or a separate Node.js script.
Step 5: Orchestrating the Workflow
Now, let’s put it all together in your package.json
scripts for an easy-to-use workflow.
// package.json
{
"name": "cypress-failed-tests-rerun",
"version": "1.0.0",
"description": "Cypress project with failed test re-run capability",
"main": "index.js",
"scripts": {
"test:full": "npm run clean:reports && npx cypress run",
"test:failed": "npm run clean:reports && npx cypress run && npm run extract:failed && npm run run:failed",
"extract:failed": "node extract-failed-tests.js",
"run:failed": "node run-failed-tests.js",
"clean:reports": "npx rimraf cypress/reports cypress/temp && echo 'Cleaned reports and temp directories'",
"cypress:open": "npx cypress open",
"cypress:cache:clear": "npx cypress cache clear"
},
"devDependencies": {
"@cypress/grep": "^3.0.0",
"cypress": "^13.0.0", // Use your Cypress version
"mochawesome": "^7.1.3",
"mochawesome-merge": "^4.3.0",
"mochawesome-report-generator": "^6.2.0",
"rimraf": "^5.0.0", // For cross-platform directory deletion
"typescript": "^5.0.0" // If using TypeScript
}
}
Workflow Explanation:
- `npm run test:full`: Runs your entire Cypress suite.
- `npm run test:failed` chains the following:
  - `clean:reports`: Clears previous reports.
  - `npx cypress run`: Runs the full suite (you could adjust this to run fewer tests initially if you have a known set of flaky ones).
  - `extract:failed`: Parses the Mochawesome report to find failed tests.
  - `run:failed`: Executes the Node.js script to dynamically configure Cypress and re-run only the identified failed tests.
Conclusion: A Smarter Approach to Cypress Test Execution
By integrating Mochawesome reporting, dynamic configuration injection with @cypress/grep, and a well-structured Node.js workflow, you've gained an incredibly powerful tool for managing test failures. No more wasted CI/CD minutes rerunning entire suites for a few flaky tests. This approach not only optimizes your testing pipeline but also contributes to a smoother, more efficient development experience.
Embrace this advanced strategy and spend less time waiting for tests and more time building amazing software!
By Mohamed Said Ibrahim on July 24, 2025.