
Let's Automate 🛡️ for AI and QA Leaders

Originally published at blog.gopenai.com

AI-Powered Cypress Test Automation: Automated Test Creation and Execution with Machine Learning

How to Build Intelligent End-to-End Testing with OpenAI GPT-4, LangChain, LangGraph, and Continuous Integration Pipeline



Transform natural language requirements into production-ready automated tests using OpenAI, LangChain, and test automation best practices


[Diagram: complete workflow]

The Problem That Started It All

As a QA engineer specializing in test automation, I’ve spent countless hours writing Cypress tests for web application testing. The manual test creation process was always the same: understand the requirement, inspect the DOM, find the right selectors, write the test code, handle edge cases, and repeat. A simple login test could take 30 minutes. Complex user flows? Hours.

One day, after spending three hours writing automated tests for a basic checkout flow, I thought: “What if I could use artificial intelligence and machine learning to automatically generate test scripts from plain English requirements?”

That question led to building an open-source AI-powered test automation framework that does exactly that — combining natural language processing, automated test generation, and continuous integration for intelligent software testing.

What I Built: An Intelligent Test Automation Framework

The framework accepts natural language requirements and automatically generates production-ready Cypress E2E tests. It combines GPT-4-class language models with DevOps best practices for continuous testing. Here's what it looks like in action:


[Diagram: full workflow]

Input:

python qa_automation.py \
  "Test user login with valid credentials" \
  "Test login fails with invalid password" \
  --run

Output:

// 01_test-user-login-with-valid-credentials_20241221_120000.cy.js
describe('User Login', () => {
  it('should login successfully with valid credentials', () => {
    cy.visit('https://the-internet.herokuapp.com/login');
    cy.get('#username').type('tomsmith');
    cy.get('#password').type('SuperSecretPassword!');
    cy.get('button[type="submit"]').click();
    cy.get('.flash.success').should('contain', 'You logged into a secure area!');
  });

  it('should show error with invalid credentials', () => {
    cy.visit('https://the-internet.herokuapp.com/login');
    cy.get('#username').type('invaliduser');
    cy.get('#password').type('wrongpassword');
    cy.get('button[type="submit"]').click();
    cy.get('.flash.error').should('contain', 'Your username is invalid!');
  });
});

The framework works both locally and in CI/CD pipelines, generating tests in seconds instead of hours.


[Diagram: local flow]

The Technical Architecture

Core Components

The system consists of four main pieces:

1. Python Orchestration Layer. I built the core in Python, using LangGraph to manage the workflow. LangGraph provides a graph-based state management system that is well suited to orchestrating complex AI workflows.

2. OpenAI Integration. The heart of the system uses GPT-4o-mini. I chose this model for its balance of speed, cost-effectiveness, and code generation quality.

3. Cypress Test Runner. The generated tests are standard Cypress JavaScript files that run without modification in any Cypress environment.

4. Optional Context Store. Using ChromaDB, the framework can index project documentation to provide additional context for more accurate test generation.
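
The snippets below pass around a shared QAState dictionary. Its exact definition lives in the repository; here is a minimal sketch of the shape the snippets assume:

from typing import List, Optional, TypedDict

class QAState(TypedDict, total=False):
    requirements: List[str]      # natural language requirements from the CLI
    run_cypress: bool            # execute generated specs when --run is passed
    out_dir: str                 # where generated .cy.js files go (--out flag)
    docs_dir: Optional[str]      # optional docs folder indexed into the vector store
    generated_files: List[str]   # paths of the specs generated so far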

How It Works Internally

Here’s the step-by-step process:

Step 1: Requirement Parsing

import argparse

def parse_cli_args(state: QAState) -> QAState:
    parser = argparse.ArgumentParser(
        description="Generate Cypress tests from natural language"
    )
    parser.add_argument("requirements", nargs="+")
    parser.add_argument("--run", action="store_true")
    args = parser.parse_args()
    state["requirements"] = args.requirements
    state["run_cypress"] = args.run  # read later by the RunCypress node
    return state

Step 2: AI Generation. I crafted a prompt template that guides the model to generate Cypress-compliant code:

CY_PROMPT_TEMPLATE = """You are a senior automation engineer.
Write a Cypress test for: {requirement}

Constraints:
- Use Cypress best practices
- Include describe and it blocks
- Use real selectors (id, class, name)
- Include positive and negative test paths
- Return ONLY runnable JavaScript code
"""

Step 3: Code Generation and Validation. The LLM returns raw JavaScript code, which I save with descriptive filenames:

from pathlib import Path

def generate_tests(state: QAState) -> QAState:
    out_dir = state.get("out_dir", "cypress/e2e/generated")  # --out flag, with a default
    state.setdefault("generated_files", [])
    for idx, req in enumerate(state["requirements"], start=1):
        code = generate_cypress_test(req)
        slug = slugify(req)[:60]
        filename = f"{idx:02d}_{slug}_{now_stamp()}.cy.js"
        filepath = Path(out_dir) / filename
        with open(filepath, "w") as f:
            f.write(f"// Requirement: {req}\n")
            f.write(code)
        state["generated_files"].append(str(filepath))  # consumed later by run_cypress
    return state
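
slugify and now_stamp are small helpers; a minimal sketch of what they need to produce to match filenames like the one shown earlier:

import re
from datetime import datetime

def slugify(text: str) -> str:
    """Lowercase the requirement and keep only filename-safe characters."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def now_stamp() -> str:
    """Timestamp suffix, e.g. 20241221_120000, so regenerated files never collide."""
    return datetime.now().strftime("%Y%m%d_%H%M%S")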

Step 4: Optional Execution. If the --run flag is provided, the framework executes Cypress immediately:

import subprocess

def run_cypress(state: QAState) -> QAState:
    if state.get("run_cypress"):
        specs = state.get("generated_files", [])
        # check=True makes a failing Cypress run raise, which fails the CI job
        subprocess.run(["npx", "cypress", "run", "--spec", ",".join(specs)], check=True)
    return state

The Workflow

LangGraph enabled me to build a clean, maintainable workflow. Here’s the graph structure:

from langgraph.graph import StateGraph, END

def create_workflow():
    graph = StateGraph(QAState)
    graph.add_node("ParseCLI", parse_cli_args)
    graph.add_node("BuildVectorStore", create_or_update_vector_store)
    graph.add_node("GenerateTests", generate_tests)
    graph.add_node("RunCypress", run_cypress)

    graph.set_entry_point("ParseCLI")
    graph.add_edge("ParseCLI", "BuildVectorStore")
    graph.add_edge("BuildVectorStore", "GenerateTests")
    graph.add_edge("GenerateTests", "RunCypress")
    graph.add_edge("RunCypress", END)

    return graph.compile()

This graph-based approach makes it easy to add new nodes (like validation, reporting, or test optimization) without refactoring the entire codebase.
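
For instance, a hypothetical validation step slots in by replacing the direct GenerateTests-to-RunCypress edge with two new ones. The ValidateTests node and validate_generated_tests function below are illustrative names, not part of the repository:

# Hypothetical extension: lint generated specs before executing them
graph.add_node("ValidateTests", validate_generated_tests)
graph.add_edge("GenerateTests", "ValidateTests")
graph.add_edge("ValidateTests", "RunCypress")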

CI/CD Integration

The framework shines in automated environments. I built a GitHub Actions workflow that:


[Diagram: CI/CD flow]

Accepts test requirements as workflow inputs

Sets up Node.js and Python environments

Generates tests using AI

Executes them with Cypress

Uploads videos, screenshots, and test files as artifacts

The workflow file looks like this:

name: AI-Powered Cypress Tests
on:
  push:
  pull_request:
  workflow_dispatch:
    inputs:
      requirements:
        description: 'Test requirements (one per line)'
        required: true
jobs:
  generate-and-run-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20.x'

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          npm install
          pip install -r requirements.txt

      - name: Generate and run tests
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          python qa_automation.py \
            "Test login functionality" \
            "Test checkout process" \
            --run --out cypress/e2e/generated
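
      # The artifact upload mentioned above; paths assume Cypress defaults
      # plus the generated-tests directory used by the command above
      - name: Upload test artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: cypress-artifacts
          path: |
            cypress/videos
            cypress/screenshots
            cypress/e2e/generated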

Challenges and Solutions

Challenge 1: Selector Discovery

Problem: How does the AI know what selectors exist on the page?

Solution: I refined the prompt to instruct the model to use common, semantic selectors. For better accuracy, I added an optional documentation context feature using ChromaDB:

from langchain_community.document_loaders import DirectoryLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

VECTOR_STORE_DIR = "vector_store"  # persisted index location

def create_or_update_vector_store(state: QAState) -> QAState:
    docs_dir = state.get("docs_dir")
    if docs_dir:
        loader = DirectoryLoader(docs_dir, glob="**/*.*")
        documents = loader.load()
        splitter = RecursiveCharacterTextSplitter(chunk_size=800)
        chunks = splitter.split_documents(documents)
        embeddings = OpenAIEmbeddings()
        db = Chroma.from_documents(chunks, embeddings,
                                   persist_directory=VECTOR_STORE_DIR)
    return state

This allows users to provide API documentation or page structure files for more accurate selector generation.
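
At generation time the indexed chunks can be pulled back in and prepended to the prompt; a minimal sketch, assuming the persisted store above (retrieve_context is an illustrative name):

def retrieve_context(requirement: str, k: int = 3) -> str:
    """Fetch the k documentation chunks most relevant to the requirement."""
    db = Chroma(persist_directory=VECTOR_STORE_DIR,
                embedding_function=OpenAIEmbeddings())
    hits = db.similarity_search(requirement, k=k)
    return "\n\n".join(doc.page_content for doc in hits)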

Challenge 2: Test Quality Consistency

Problem: LLM outputs can vary in quality.

Solution: I implemented strict prompt engineering:

Explicit instructions for Cypress best practices

Requirement to include both positive and negative test cases

Mandate for clear, descriptive assertions

Instruction to return only executable JavaScript (no explanations)
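
Even with the "return only executable JavaScript" instruction, models occasionally wrap output in markdown fences, so a small post-processing guard is worth having; a hedged sketch, not taken from the repository:

def strip_markdown_fences(code: str) -> str:
    """Remove ```javascript ... ``` wrappers the model sometimes adds anyway."""
    code = code.strip()
    if code.startswith("```"):
        code = code.split("\n", 1)[-1]   # drop the opening fence line
    if code.endswith("```"):
        code = code.rsplit("```", 1)[0]  # drop the closing fence
    return code.strip()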

Challenge 3: Handling Multiple Requirements

Problem: Processing requirements sequentially was slow.

Solution: While I kept sequential processing for simplicity and cost control, the architecture supports parallel processing. Each requirement is independent, making it trivial to parallelize in the future:

# Future enhancement potential
from concurrent.futures import ThreadPoolExecutor

def generate_tests_parallel(state: QAState) -> list[str]:
    with ThreadPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(generate_cypress_test, req)
                   for req in state["requirements"]]
        # .result() in submission order keeps results aligned with requirements
        return [f.result() for f in futures]

Real-World Usage Examples

Example 1: E-commerce Testing

python qa_automation.py \
  "Test product search returns relevant results" \
  "Test adding multiple items to cart" \
  "Test checkout with valid payment information" \
  "Test order confirmation email is sent" \
  --run

Example 2: User Authentication Flows

python qa_automation.py \
  "Test user registration with valid email" \
  "Test registration fails with existing email" \
  "Test login with correct credentials" \
  "Test password reset flow" \
  "Test account lockout after failed attempts" \
  --run

Example 3: Form Validation

python qa_automation.py \
  "Test contact form with all fields filled correctly" \
  "Test form shows errors for empty required fields" \
  "Test email validation rejects invalid formats" \
  "Test phone number accepts international formats" \
  --run

Measurable Impact

After using this framework for several projects:

Time savings: 95% reduction in test writing time (30 minutes → 90 seconds per test)

Test coverage: Ability to generate 50+ tests in the time it previously took to write 2–3

Maintenance: Regenerating tests for UI changes takes seconds instead of hours

Onboarding: New team members can contribute tests on day one without Cypress expertise

Getting Started

The framework is open source and available on GitHub. Here’s how to set it up:

Installation:

git clone https://github.com/aiqualitylab/cypress-natural-language-tests
cd cypress-natural-language-tests
npm install
pip install -r requirements.txt

Configuration:

# Create .env file
echo "OPENAI_API_KEY=your_key_here" > .env

# Create cypress.config.js
cat > cypress.config.js << 'EOF'
const { defineConfig } = require('cypress')
module.exports = defineConfig({
  e2e: {
    baseUrl: 'https://your-app.com',
    supportFile: false,
    video: true,
    screenshotOnRunFailure: true,
  },
})
EOF

Usage:

# Generate and run tests
python qa_automation.py \
  "Your test requirement here" \
  --run

# Generate only (no execution)
python qa_automation.py \
  "Your test requirement here"

Lessons Learned

On Prompt Engineering

The quality of generated tests is directly proportional to prompt quality. I spent significant time iterating on the prompt template, testing with various requirement phrasings.

On LLM Selection

GPT-4o-mini proved to be the sweet spot for this use case. GPT-3.5 was too inconsistent, while full GPT-4 was unnecessarily expensive for test generation.

On Workflow Design

LangGraph’s state-based approach simplified complex orchestration. The ability to visualize the workflow graph helped identify bottlenecks and optimization opportunities.

On Integration

Making the framework work seamlessly in both local and CI/CD environments required thoughtful design. The key was keeping the core logic environment-agnostic and using configuration for environment-specific behavior.
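
In practice that meant reading environment variables with local-friendly defaults rather than branching on the environment; a minimal sketch (the variable names are illustrative):

import os

# One code path everywhere; only the environment supplies different values
OUT_DIR = os.getenv("QA_OUT_DIR", "cypress/e2e/generated")
MODEL_NAME = os.getenv("QA_MODEL", "gpt-4o-mini")
IN_CI = os.getenv("CI") == "true"  # GitHub Actions sets CI=true automatically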

Conclusion: The Future of Intelligent Test Automation

Building this AI-powered test automation framework transformed how I approach software testing and quality assurance. What once took hours now takes seconds. What required deep Cypress expertise now requires only clearly written requirements.

This framework isn't just about speed: it's about democratizing test automation. Anyone who can describe what should be tested can now generate automated tests, regardless of their programming background.

The code is open source, the CI/CD workflow is extensible, and the approach applies well beyond Cypress, from end-to-end testing to integration testing. I'm excited to see how the DevOps and testing community builds on this foundation.

Try It Yourself

GitHub Repository: https://github.com/aiqualitylab/cypress-natural-language-tests

Documentation: See the README for detailed setup and usage instructions

Issues/Contributions: Pull requests and feature suggestions welcome!

Connect With Me

I’m passionate about AI-powered quality engineering and love discussing test automation innovations. Find me on:

GitHub: @aiqualitylab

Medium: Follow for more articles on AI and testing

What would you build with AI-generated tests? Share your ideas in the comments below!

Appendix: Complete Code Example

Here’s a simplified version of the core generation function:

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # reads OPENAI_API_KEY from .env

def generate_cypress_test(requirement: str) -> str:
    """Generate Cypress test code from a natural language requirement."""
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    prompt = f"""You are a senior automation engineer.
Write a Cypress test in JavaScript for: {requirement}
Requirements:
- Use Cypress best practices
- Include describe and it blocks
- Use real page selectors
- Include positive and negative paths
- Return ONLY runnable JavaScript code
Code:"""

    result = llm.invoke(prompt)
    return result.content.strip()

# Example usage
if __name__ == "__main__":
    test_code = generate_cypress_test("Test user login with valid credentials")
    print(test_code)

This example demonstrates the core concept. The full framework adds error handling, state management, file organization, and CI/CD integration.

Thank you for reading! If you found this helpful, please give it a clap 👏 and share with others who might benefit from AI-powered test automation.

