I've been building software for over a decade.
Agency work, startups, enterprises, freelance projects. Different industries, different stacks, different team cultures.
But some things are always the same.
The developer staying late because a bug slipped through. The PM explaining to a client why something is broken in production. The QA engineer who knew something was off but didn't have enough time to verify it properly.
Nobody wanted that outcome. Everyone was doing their best. The process just wasn't set up to prevent it.
It's Not a People Problem
Most software quality problems aren't about people not caring. They're structural.
The incentives push toward speed. The tooling makes quality expensive. So quality loses — not because teams are negligent, but because the math doesn't work in quality's favor.
Think about how a typical sprint plays out:
- PM scopes the work. Takes longer than planned. Deadline stays fixed.
- Dev builds it. Takes longer than planned. Deadline stays fixed.
- QA gets what's left. Which is usually not enough.
Nobody decided this was acceptable. It just kept happening until it felt normal.
And at the end of that chain, the QA engineer is left with a choice: do a partial job under pressure, or hold the line and become "the reason we missed the deadline."
That's not a QA problem. That's a system problem.
Can the System Be Fixed?
Honestly? I'm not sure.
The incentives, the deadline culture, the way organizations prioritize shipping over stability — those are deeply embedded. No tool is going to fix a company culture that treats QA as optional.
But here's what I do think can be fixed:
What happens when that huge scope lands on QA's table with half the time it needs.
That's the specific, concrete, solvable problem. Not the politics. Not the culture. Just — when the pressure is on and the time is short, how do we make sure the critical flows still get tested?
The Technical Reality of "No Time to Test"
When QA time gets squeezed, what actually gets cut?
Usually it's automation. Manual smoke testing survives because it feels faster in the moment. Automated test scripts get deferred because writing them takes time.
And that deferral compounds. The longer you go without automation, the more ground you have to make up. Until eventually the backlog is so large it never gets addressed.
Here's what a basic Playwright test for a checkout flow looks like:
```typescript
import { test, expect } from '@playwright/test';

test('complete checkout with valid card', async ({ page }) => {
  await page.goto('/cart');
  await page.getByRole('button', { name: 'Proceed to checkout' }).click();
  await page.getByLabel('Card number').fill('4242424242424242');
  await page.getByLabel('Expiry').fill('12/26');
  await page.getByLabel('CVC').fill('123');
  await page.getByRole('button', { name: 'Pay now' }).click();
  await expect(page.getByText('Order confirmed')).toBeVisible();
});
```
That's one flow. One happy path. Under pressure, this is the test that either gets written in 20 minutes or gets pushed to "next sprint."
Most of the time, it gets pushed.
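And the happy path is only half the story. The declined-card test is what usually gets deferred first, even though it's barely more work. A sketch of what it might look like, under the same assumptions (the selectors, routes, and error copy are hypothetical; 4000000000000002 is Stripe's documented test number for a declined card):

```typescript
import { test, expect } from '@playwright/test';

// Sketch of the unhappy path that typically gets cut under pressure.
// Selectors and the error message are assumptions about a hypothetical app.
test('checkout shows an error for a declined card', async ({ page }) => {
  await page.goto('/cart');
  await page.getByRole('button', { name: 'Proceed to checkout' }).click();
  // 4000000000000002 is Stripe's standard "card declined" test card.
  await page.getByLabel('Card number').fill('4000000000000002');
  await page.getByLabel('Expiry').fill('12/26');
  await page.getByLabel('CVC').fill('123');
  await page.getByRole('button', { name: 'Pay now' }).click();
  // The user should see an error, and no order should be confirmed.
  await expect(page.getByText('Your card was declined')).toBeVisible();
  await expect(page.getByText('Order confirmed')).not.toBeVisible();
});
```

Same shape, same twenty minutes. It still doesn't get written.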
What We Built
This is the exact problem we set out to solve with Lama.
Lama is an AI QA agent that navigates your app in a real browser and generates native Playwright, Cypress, or Selenium test code from a plain English description. You describe the flow. It writes the test.
No proprietary format. No lock-in. Real code that lives in your repo, runs in your CI, and works whether or not you keep using Lama.
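To make "lives in your repo, runs in your CI" concrete: a generated Playwright test slots into an ordinary `playwright.config.ts` like any hand-written one. This is a minimal sketch, not Lama output; the `testDir` and `baseURL` values are placeholders for your own app:

```typescript
import { defineConfig } from '@playwright/test';

// Minimal config sketch. testDir and baseURL are assumed placeholders.
export default defineConfig({
  testDir: './tests',
  use: {
    baseURL: 'http://localhost:3000', // assumed local dev server
  },
  // Retry flaky tests on CI only; fail fast when running locally.
  retries: process.env.CI ? 2 : 0,
  reporter: process.env.CI ? 'github' : 'list',
});
```

Because the output is plain Playwright, deleting the tool leaves the tests behind, still runnable.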
The goal isn't to fix broken team culture. The goal is to make sure that when the pressure is on and the scope is huge — quality doesn't have to be the thing that pays the price.
We just launched as a research preview. It's free to use, with credits that top up as you go.
If you've felt this pressure firsthand — I'd genuinely love to hear how your team handles it.