QA Is Always the Breaking Point. So We Built Something to Fix It.
There's a pattern I've seen in almost every software team I've worked with over the past decade.
PM scopes the project. Runs over. That's fine.
Devs build it. Runs over. Also fine.
Then it lands on QA - and suddenly every hour counts.
The engineers who had nothing to do with the delays are now the reason the deadline slips. The people absorbing the entire weight of everyone else's overruns are the ones getting pressured to cut corners.
I've watched good QA engineers burn out over this. Not because the work was too hard. Because the structure was unfair.
The Technical Side of the Problem
Beyond the politics, there's a real technical burden that doesn't get talked about enough.
Writing automated test scripts is genuinely tedious work.
You need to identify selectors. Write assertions. Handle async timing. Cover edge cases. And then - the moment a developer renames a class or restructures a component - half your test suite breaks and you're back to square one.
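The async-timing point alone eats hours. Hand-written suites accumulate glue code like this polling helper - a minimal sketch of the kind of utility QA engineers end up maintaining themselves (the `waitUntil` name and options are hypothetical, not from any framework):

```typescript
// Minimal polling helper: retry an async check until it passes or times out.
// Hypothetical utility - the glue code hand-rolled test suites accumulate.
async function waitUntil(
  check: () => Promise<boolean>,
  { timeoutMs = 5000, intervalMs = 100 } = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}

// Example: wait for a flag that flips after a short delay.
let ready = false;
setTimeout(() => { ready = true; }, 200);
waitUntil(async () => ready).then(() => console.log('condition met'));
```

Modern frameworks like Playwright auto-wait for most interactions, but the moment a test needs to wait on something outside the DOM (a webhook, a background job), you're back to writing this by hand.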
Here's what a basic Playwright login test looks like when written by hand:
```typescript
import { test, expect } from '@playwright/test';

test('successful login with valid credentials', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill('jane@acme.com');
  await page.getByLabel('Password').fill('s3cureP@ss');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page).toHaveURL(/dashboard/);
  await expect(page.getByText('Welcome back, Jane')).toBeVisible();
});

test('shows error for invalid password', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill('jane@acme.com');
  await page.getByLabel('Password').fill('wrongpass');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByText('Invalid credentials')).toBeVisible();
  await expect(page).toHaveURL(/login/);
});
```
That's a simple login flow. Two test cases. Experienced engineer, maybe 15-20 minutes to write properly, including thinking through the edge cases.
Now multiply that across an entire application. Every feature. Every sprint. Every time the UI changes.
That's where the hours go.
What We Built
This is the problem we set out to solve with Lama - an AI QA agent that navigates your app in a real browser and generates test code from a plain English description.
Instead of writing the above manually, you tell Lama:
"Test the login page. Verify successful login redirects to the dashboard. Verify that an invalid password shows an error and stays on the login page."
Lama opens a real browser, navigates to your app, interacts with it, reads the DOM, and generates the test code above - ready to commit and run in your CI.
No proprietary format. No wrapper library. Pure Playwright, Cypress, or Selenium - whichever you already use.
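For context on "run in your CI": because the output is ordinary spec files, they run under a stock config. A minimal `playwright.config.ts` sketch - the `testDir` and `baseURL` values are illustrative placeholders, not anything Lama requires:

```typescript
import { defineConfig } from '@playwright/test';

// Minimal config sketch; paths and URLs are placeholders for your project.
export default defineConfig({
  testDir: './tests',                   // wherever the generated specs are committed
  use: {
    baseURL: 'http://localhost:3000',   // lets tests call page.goto('/login') with relative paths
  },
  retries: process.env.CI ? 2 : 0,      // retry flaky runs in CI only
});
```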
The Part I'm Most Proud Of
The code generation is useful. But the knowledge system is the real unlock.
Every time Lama interacts with your app, it builds a map - pages, flows, components, behaviors. That knowledge persists across sessions. On team plans, it's shared across the whole team.
So when a new sprint lands and everyone's staring at QA again, your engineer isn't starting from scratch. Lama already knows the app. It just needs to know what changed.
QA knowledge that used to live in one engineer's head - or worse, nowhere - now compounds over time.
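To make the idea concrete, here's one way such a map could be shaped - a hypothetical sketch, not Lama's actual schema:

```typescript
// Hypothetical shape of a persisted app map; not Lama's actual schema.
interface AppKnowledge {
  pages: Record<string, { url: string; components: string[] }>;
  flows: { name: string; steps: string[] }[];
}

const example: AppKnowledge = {
  pages: {
    login: { url: '/login', components: ['Email', 'Password', 'Sign in'] },
    dashboard: { url: '/dashboard', components: ['Welcome banner'] },
  },
  flows: [
    {
      name: 'successful login',
      steps: ['open /login', 'fill credentials', 'submit', 'assert /dashboard'],
    },
  ],
};

console.log(`${Object.keys(example.pages).length} pages, ${example.flows.length} flow mapped`);
```

The point is less the exact shape than the property: a structure like this survives between sessions, so the next test run starts from what's already known rather than re-discovering the app.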
Try It
We just launched as a research preview.
The app is free to use - you top up AI credits as you go, no subscription required. The free tier is enough to get a real feel for what it can do.
If you're a QA engineer tired of being last in the pipeline and first to be blamed - or a dev who just wants to quickly verify your own implementation actually works - give it a shot.
🔗 lamaqa.com
Honest feedback welcome.