
Enis Kovac

We'll Test It Properly Once Things Slow Down

"We'll Test It Properly Once Things Slow Down"
Enter fullscreen mode Exit fullscreen mode

They won't. Here's what actually happens.


I've heard this line more times than I can count.

In startups burning through runway. In enterprises with dedicated QA teams. In agencies with 50 developers and a process document nobody reads.

"We'll test it properly once things slow down."

Things never slow down.

So testing gets squeezed. Then skipped. Then something reaches production that shouldn't have, and suddenly the whole team has time. Time for the incident call. Time for the hotfix. Time for the post-mortem. Time for the client apology email that someone has to carefully word so it doesn't sound like what it actually is.

A bug that a 10-minute test would have caught.


Why this keeps happening

It's not laziness. It's not incompetence.

It's that writing proper automated tests takes time — time that feels like it's coming out of shipping time, even though skipping it costs far more later.

A typical Playwright test for a login flow:

```typescript
import { test, expect } from '@playwright/test';

test('successful login', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('password123');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page).toHaveURL(/dashboard/);
});
```

That's one test. One flow. One happy path.
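For comparison, here's a sketch of just one unhappy-path variant of the same flow. The error-message locator and the app's behavior on failure are assumptions for illustration, not something any particular app guarantees:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical negative-path test. Assumes the app stays on /login
// after a failed attempt and surfaces an error with role="alert" --
// both are assumptions about the app under test.
test('login rejects invalid credentials', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('wrong-password');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Still on the login page, with an error shown to the user.
  await expect(page).toHaveURL(/login/);
  await expect(page.getByRole('alert')).toBeVisible();
});
```

And that's still only the second test. Locked accounts, password resets, expired sessions, rate limiting: each needs its own.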

Now multiply that across every feature, every sprint, every time the UI changes and half your selectors break. You can see why teams push it to "later."

The tooling hasn't kept up with the pace teams are expected to ship at. So testing becomes the thing you do when you have time, which is never.


The only fix that actually works

The teams that consistently ship with confidence aren't the ones with more time.

They're the ones who made testing cheap enough to do every sprint without thinking about it.

That's the problem we built Lama to solve — an AI QA agent that navigates your app in a real browser and generates native Playwright, Cypress, or Selenium test code from a plain English description.

You describe the flow. It writes the test. You commit it, run it in CI, move on.

No proprietary format. No lock-in. Just test code that lives in your repo like any other code your team writes.

We just launched the public beta. It's free to use, with credits you top up as you go.


"We'll test it later" only works as a strategy until it doesn't.

What's the worst bug you've ever shipped because testing got pushed? Drop it in the comments — genuinely curious.
