DEV Community

Matthias Bruns

Posted on • Originally published at appetizers.io

A Test Automation Strategy That Actually Works

Most Test Strategies Fail at the Seams

Teams ship with 90% unit test coverage and still get burned in production. The reason is always the same: the tests verify individual components in isolation, but nobody tests how those components interact.

You've seen it. A service changes its response format. The consumer's unit tests don't catch it because they mock the dependency. The integration test suite — if it exists — runs nightly and nobody looks at it until something breaks on a Friday afternoon.

A real test automation strategy isn't about maximizing coverage numbers. It's about maximizing confidence per CI minute spent.

The Pyramid, the Trophy, and What Actually Matters

Mike Cohn introduced the test automation pyramid over a decade ago: lots of unit tests at the base, fewer integration tests in the middle, a thin layer of E2E tests on top. The reasoning was economic — unit tests were fast and cheap, E2E tests were slow and flaky.

Kent C. Dodds later proposed the testing trophy, arguing that integration tests give you the highest return on investment. His summary, borrowing Guillermo Rauch's line: "Write tests. Not too many. Mostly integration."

Both models are useful mental shortcuts, but neither is a strategy. A strategy answers: what do we test, at which layer, and what do we skip?

Here's the framework we use:

  1. Unit tests for pure logic — calculations, transformations, validation rules. If it has no dependencies, unit-test it.
  2. Integration tests for service boundaries — API contracts, database queries, message handlers. This is where most bugs live.
  3. E2E tests for critical user paths only — checkout, login, the three flows that generate revenue.
  4. Static analysis as the foundation — TypeScript strict mode, linters, formatters. Catches entire categories of bugs before tests even run.
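As a sketch of the first layer: a pure validation rule needs nothing beyond the function itself — no mocks, no setup, no database. (ValidateQuantity and its limits are hypothetical.)

```go
package main

import (
	"errors"
	"fmt"
)

// ValidateQuantity is a hypothetical pure validation rule:
// no I/O, no dependencies — ideal unit-test territory.
func ValidateQuantity(qty int) error {
	if qty <= 0 {
		return errors.New("quantity must be positive")
	}
	if qty > 100 {
		return errors.New("quantity exceeds order limit of 100")
	}
	return nil
}

func main() {
	fmt.Println(ValidateQuantity(5))    // <nil>
	fmt.Println(ValidateQuantity(0))    // quantity must be positive
	fmt.Println(ValidateQuantity(1000)) // quantity exceeds order limit of 100
}
```

Because the function is pure, every edge case is a one-line assertion — this is where exhaustive coverage is actually cheap.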

Integration Tests: Where the ROI Lives

The biggest gap in most test suites is the integration layer. Teams either skip it entirely or write integration tests that are really just unit tests with extra steps.

A proper integration test for a backend service:

  • Spins up the actual database (use Testcontainers — it runs in CI just fine)
  • Calls the real HTTP endpoint or gRPC method
  • Asserts on the actual response, including status codes, headers, and body shape
  • Cleans up after itself

In Go, this looks like a TestMain that starts a Postgres container, runs migrations, and tears down after the suite. In Node.js/TypeScript, it's a beforeAll that boots the server against a real database.
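A minimal sketch of that Go TestMain, using testcontainers-go's generic container API. It needs Docker available in CI; the image tag, credentials, database name, and the commented-out runMigrations hook are all placeholders for your own setup.

```go
package repo_test

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"os"
	"testing"

	_ "github.com/jackc/pgx/v5/stdlib"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

var testDB *sql.DB

func TestMain(m *testing.M) {
	ctx := context.Background()

	// Start a throwaway Postgres container for the whole suite.
	pg, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "postgres:16-alpine",
			ExposedPorts: []string{"5432/tcp"},
			Env: map[string]string{
				"POSTGRES_PASSWORD": "test",
				"POSTGRES_DB":       "app_test",
			},
			WaitingFor: wait.ForListeningPort("5432/tcp"),
		},
		Started: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	host, _ := pg.Host(ctx)
	port, _ := pg.MappedPort(ctx, "5432")
	dsn := fmt.Sprintf("postgres://postgres:test@%s:%s/app_test?sslmode=disable",
		host, port.Port())

	testDB, err = sql.Open("pgx", dsn)
	if err != nil {
		log.Fatal(err)
	}
	// runMigrations(testDB) — plug in your migration tool here.

	code := m.Run()
	_ = pg.Terminate(ctx)
	os.Exit(code)
}
```

One container per suite, not per test: startup costs a few seconds, so amortize it and isolate tests with transactions or per-test cleanup instead.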

The key rule: mock at the boundary of your system, not inside it. Mock the third-party payment API. Don't mock your own repository layer.
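A sketch of that rule in Go: the third-party gateway sits behind an interface and gets a fake in tests, while the service's own logic runs for real. All names here are hypothetical.

```go
package main

import (
	"errors"
	"fmt"
)

// PaymentGateway is the boundary to a third-party API —
// the one thing worth mocking.
type PaymentGateway interface {
	Charge(amountCents int) error
}

// CheckoutService holds our own logic; it stays real in tests.
type CheckoutService struct {
	gateway PaymentGateway
}

func (s *CheckoutService) Checkout(amountCents int) error {
	if amountCents <= 0 {
		return errors.New("invalid amount")
	}
	return s.gateway.Charge(amountCents)
}

// fakeGateway stands in for the external provider and records calls.
type fakeGateway struct{ charged []int }

func (f *fakeGateway) Charge(amountCents int) error {
	f.charged = append(f.charged, amountCents)
	return nil
}

func main() {
	fake := &fakeGateway{}
	svc := &CheckoutService{gateway: fake}
	fmt.Println(svc.Checkout(4999), fake.charged) // <nil> [4999]
}
```

The validation and orchestration in Checkout are exercised for real; only the network boundary is faked. Mocking your own repository layer instead would leave exactly that logic untested.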

Testing Database Queries

If you use an ORM or query builder, your unit tests don't verify the actual SQL. The query might be syntactically wrong, or the ORM might generate something unexpected.

Test against a real database:

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestCreateUser(t *testing.T) {
	db := setupTestDB(t) // helper that boots Postgres via Testcontainers
	repo := NewUserRepository(db)
	ctx := context.Background()

	user, err := repo.Create(ctx, CreateUserInput{
		Email: "test@example.com",
		Name:  "Test User",
	})

	assert.NoError(t, err)
	assert.NotEmpty(t, user.ID)
	assert.Equal(t, "test@example.com", user.Email)
}

This catches schema mismatches, constraint violations, and migration issues that no amount of mocking will find.

E2E Tests: Less Is More

E2E tests are expensive to write, slow to run, and prone to flakiness. But for critical paths, they're irreplaceable.

The modern tooling has dramatically improved. Playwright handles cross-browser testing, auto-waits for elements, and provides tracing for debugging failures. It's the clear choice for web E2E in 2026.

Rules for E2E tests that don't become a maintenance burden:

  1. Test user journeys, not features. "User signs up, creates a project, invites a teammate" — not "the signup button changes color on hover."
  2. Use realistic but controlled data. Seed the database before each test. Never depend on shared state between tests.
  3. Set a hard limit. We cap E2E suites at 20 minutes. If they take longer, you're testing too much at this layer.
  4. Quarantine flaky tests immediately. A flaky E2E test that everyone ignores is worse than no test. Mark it, fix it, or delete it.

CI Integration: Speed Is a Feature

A test suite that takes 45 minutes to run won't be run on every push. Developers will batch changes, skip the suite, and merge on faith.

Target CI times:

  • Lint + type check: Under 2 minutes
  • Unit tests: Under 3 minutes
  • Integration tests: Under 10 minutes (parallelize by test package)
  • E2E tests: Under 15 minutes (run only on PRs targeting main, not on every commit)

Parallelize aggressively. Most CI providers give you multiple workers. Split your integration suite by package or module and run them concurrently.

Cache smart: database container images, npm/Go module caches, built test fixtures. Every second saved compounds across hundreds of runs.
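As one concrete shape this can take (assuming GitHub Actions; the shard script is a hypothetical stand-in for however you split packages), a matrix fans the integration suite out across four workers, and setup-go's built-in cache covers Go modules and build artifacts:

```yaml
# Sketch of a parallelized integration job — names and shard count
# are illustrative, not prescriptive.
jobs:
  integration:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.23"
          cache: true  # restores Go module and build caches
      - name: Run integration shard ${{ matrix.shard }}
        run: ./scripts/run-integration-shard.sh ${{ matrix.shard }}
```

Four workers turn a 10-minute serial suite into roughly a 3-minute wall-clock job, at the cost of a few seconds of per-worker setup.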

The Strategy on One Page

| Layer | What to test | Tooling | Run when |
| --- | --- | --- | --- |
| Static | Types, lint rules, formatting | TypeScript strict, ESLint, go vet | Every commit |
| Unit | Pure functions, business logic | Jest, go test, Vitest | Every commit |
| Integration | API boundaries, DB queries, message handlers | Testcontainers, supertest, httptest | Every commit |
| Contract | Service-to-service agreements | Pact | On PR + provider deploy |
| E2E | Critical user journeys (3–5 max) | Playwright | PR to main |

This isn't theoretical. It's the setup we run on production services. Our non-E2E pipeline stays under 15 minutes, and the full PR pipeline remains fast through parallelization. The confidence level is high enough to deploy on green.

Start Where It Hurts

Don't try to build the perfect test suite in a sprint. Start by asking: where did our last three production bugs come from?

If they were integration issues — invest in integration tests. If they were regressions in user flows — add E2E for those specific flows. If they were type errors — turn on strict mode.

The best test automation strategy is the one that prevents the bugs you're actually shipping.
