Soumik Dhar

Test Automation Meets AI — Smarter QA with Playwright + LLMs

Why Bring AI Into Test Automation?

UI test automation is great… until it isn’t.

You’ve seen it:

  • brittle selectors
  • flaky tests
  • endless test case writing
  • log hunting after failures

Large Language Models (LLMs) can help you write, maintain, and debug tests faster.

This post shows how to blend Playwright + Cucumber with LLMs to get a smarter QA workflow.

Core Idea

Instead of writing every test from scratch and debugging every failure by hand, use an AI assistant (Gemini, GPT, Claude, etc.) to:

  • Generate test cases from requirements or PR descriptions.
  • Suggest robust selectors.
  • Analyze failed logs and propose fixes.
  • Refactor flaky steps automatically.

Think of it as pair testing with AI.

Workflow Overview

  1. Feed User Stories / Requirements → LLM suggests test scenarios.
  2. Convert to Playwright + Cucumber Steps with AI help.
  3. Run Tests — AI assists with debugging flaky steps.
  4. Refine & Commit.
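
If you are wiring Playwright and Cucumber together from scratch, the glue is a small Cucumber profile that points at your feature files and step/support code. A minimal sketch, assuming the folder layout used later in this post:

// cucumber.js: hypothetical profile, adjust paths to match your repo
module.exports = {
  default: {
    paths: ['tests/features/**/*.feature'],         // Gherkin scenarios
    require: ['pages/**/*.js', 'support/**/*.js'],  // step definitions, world, hooks
    format: ['progress', 'html:reports/cucumber.html'],
  },
};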

1. Generate Test Cases Automatically

Open Google AI Studio or ChatGPT, give it your project context, and prompt it with something like:

Generate Playwright login test cases for:
- Email/password login
- OTP fallback
- SSO (Google + GitHub)
- Invalid email format

Output might look like:

Feature: Login

  Scenario: Successful login with email and password
    Given I am on the login page
    When I enter a valid email and password
    Then I should land on the dashboard

  Scenario: OTP login fallback
    Given I enter a valid email but wrong password
    Then I should be prompted for OTP

Paste this into your tests/features/login.feature.
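
If you would rather script this step than paste prompts into a chat window, the same request can go through the Gemini API. A rough sketch using the @google/generative-ai Node SDK; the model name and the GEMINI_API_KEY variable are assumptions, not part of the original setup:

// generate-scenarios.js: hypothetical helper that asks Gemini for Gherkin scenarios
const { GoogleGenerativeAI } = require('@google/generative-ai');

async function generateScenarios(requirements) {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
  const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });
  const prompt = `Generate Playwright + Cucumber Gherkin scenarios for:\n${requirements}`;
  const result = await model.generateContent(prompt);
  return result.response.text(); // review, then paste into tests/features/*.feature
}

generateScenarios('Email/password login, OTP fallback, SSO (Google + GitHub), invalid email format')
  .then(console.log)
  .catch(console.error);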

2. Implement Step Definitions With AI Help

Your pages/login/steps.js might start like:

const { Given, When, Then } = require('@cucumber/cucumber');
const { expect } = require('@playwright/test');
const loc = require('../../locators/login.json');

Given('I am on the login page', async function () {
  await this.page.goto(process.env.BASE_URL + '/login');
});

When('I enter a valid email and password', async function () {
  await this.page.fill(loc.email, process.env.TEST_EMAIL);
  await this.page.fill(loc.password, process.env.TEST_PASSWORD);
  await this.page.click(loc.submit);
});

Then('I should land on the dashboard', async function () {
  await expect(this.page.locator(loc.dashboard)).toBeVisible({ timeout: 30000 });
});
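
Note that these steps rely on this.page, so Cucumber needs a custom World that launches Playwright before each scenario. A minimal sketch of that glue code; the support/world.js filename is an assumption:

// support/world.js: hypothetical world + hooks that expose a Playwright page to steps
const { setWorldConstructor, Before, After } = require('@cucumber/cucumber');
const { chromium } = require('playwright');

class PlaywrightWorld {
  async open() {
    // one browser, context, and page per scenario keeps tests isolated
    this.browser = await chromium.launch({ headless: true });
    this.context = await this.browser.newContext();
    this.page = await this.context.newPage();
  }

  async close() {
    await this.context?.close();
    await this.browser?.close();
  }
}

setWorldConstructor(PlaywrightWorld);

Before(async function () {
  await this.open();
});

After(async function () {
  await this.close();
});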

If a locator breaks, ask your LLM:

Suggest a more stable Playwright selector for the login submit button
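
A typical suggestion is to swap raw CSS for Playwright's role- or test-id-based locators. For example (the 'Log in' label and the login-submit test id are assumptions about your markup):

// Prefer user-facing attributes over DOM structure
await this.page.getByRole('button', { name: 'Log in' }).click();

// Or, if your app ships data-testid attributes
await this.page.getByTestId('login-submit').click();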

3. Debug Failures with AI

Instead of digging through logs yourself, paste the failure output into your AI tool:

Error: strict mode violation: locator("button[type=submit]") resolved to 2 elements

Prompt:

Suggest a unique Playwright selector based on this HTML snippet …
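
The usual fix is to scope the locator to something unique. For example (the form id and button label here are assumptions about the page under test):

// Scope to the login form so only one submit button can match
await this.page
  .locator('form#login-form')
  .getByRole('button', { name: 'Log in' })
  .click();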

4. Keep Your Tests Clean

Ask the model to refactor:

Refactor these Playwright steps to use page objects and reduce duplication.

It can rewrite step definitions into reusable page objects.
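
The output usually lands somewhere like this: a small LoginPage class plus thinner steps (file names and method names below are illustrative, not taken from the original code):

// pages/login/page.js: hypothetical page object produced by the refactor
const loc = require('../../locators/login.json');

class LoginPage {
  constructor(page) {
    this.page = page;
  }

  async goto() {
    await this.page.goto(process.env.BASE_URL + '/login');
  }

  async login(email, password) {
    await this.page.fill(loc.email, email);
    await this.page.fill(loc.password, password);
    await this.page.click(loc.submit);
  }
}

module.exports = { LoginPage };

// pages/login/steps.js then shrinks to thin wrappers (reusing its existing imports):
const { LoginPage } = require('./page');

When('I enter a valid email and password', async function () {
  await new LoginPage(this.page).login(process.env.TEST_EMAIL, process.env.TEST_PASSWORD);
});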

Tools & Integrations

  • VS Code / Windsurf AI plugins — inline suggestions & refactoring.
  • Google AI Studio — better prompt engineering.
  • LangChain — automate repetitive log analysis (a quick sketch follows this list).
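
As a concrete example of the last point, here is a rough sketch that pipes a failing test's output to a model for triage; the @langchain/openai package, the model name, and the prompt wording are all assumptions:

// analyze-failure.js: hypothetical triage helper built on LangChain's OpenAI chat model
const { ChatOpenAI } = require('@langchain/openai');

async function triage(failureLog) {
  const model = new ChatOpenAI({ model: 'gpt-4o-mini', temperature: 0 });
  const response = await model.invoke(
    `This Playwright + Cucumber test failed. Explain the likely cause and suggest a fix:\n\n${failureLog}`
  );
  return response.content;
}

triage('Error: strict mode violation: locator("button[type=submit]") resolved to 2 elements')
  .then(console.log)
  .catch(console.error);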

Key Wins

  • Faster test case creation
  • Robust selectors → less flakiness
  • Easier debugging
  • Cleaner, maintainable test code

Test Away!

TL;DR

Bring AI into your QA loop:

  • Generate tests from requirements
  • Debug failures faster
  • Refactor for maintainability

Your Playwright + Cucumber stack just got smarter.

Try it:

Comment below if you’re experimenting with AI-powered testing!
