
Things to Remember When You Test Your React App

Many teams I have worked with had the same problem:
They had tests, but nobody trusted them.

The coverage report was green. The CI pipeline was slow, grinding away before it would let a PR through. And yes, bugs still slipped into production - while developers quietly skipped running tests because they knew half of them were flaky or meaningless.

That’s the real danger: once a team stops trusting its tests, the whole suite becomes dead weight - just extra code to maintain that gives you zero confidence.

And everyone learned the same lesson: a test suite that isn’t trusted is even worse than having no tests at all.

So how do we avoid that? What makes tests trustworthy instead of just existing?

Here are some lessons I’ve learned the hard way.

Coverage Doesn’t Equal Quality

I’ve been on projects where hitting “90% coverage” was treated like a milestone. The dashboard turned green, the manager smiled, and… we still shipped broken features.

Why? Because coverage only measures lines executed, not whether a test proves the app works.

I’ll give you an example.

it("renders the login form", () => {
  const { container } = render(<LoginForm onSubmit={() => {}} />);
  expect(container.querySelector("input[name='email']")).toBeVisible();
});

That test executes almost every line in the component, so coverage shoots up. But if someone removed the submit logic tomorrow, it would still pass. The suite looks great on paper - and yet the feature is broken.

What you really need are intention-driven tests.

it("submits user login", () => {
  const handleSubmit = vi.fn();
  render(<LoginForm onSubmit={handleSubmit} />);

  fireEvent.change(screen.getByPlaceholderText(/email/i), { target: { value: "user@example.com" } });
  fireEvent.change(screen.getByPlaceholderText(/password/i), { target: { value: "password" } });
  fireEvent.click(screen.getByRole("button", { name: /submit/i }));

  expect(handleSubmit).toHaveBeenCalledWith({ email: "user@example.com", password: "password" });
});

This test doesn’t just run code. It protects the user’s intent: if I type credentials and hit submit, does the app behave correctly?

Coverage is a useful metric, but it’s not the goal. Trust comes from testing meaning, not lines.

How You Select Elements Matters

Another hidden trust killer (and a performance drain) is how you select elements.

Early tests often relied on brittle selectors:

expect(container.querySelector(".btn-primary")).toBeVisible();

The problem? CSS class changes → test fails, even though the UI still works.

Or overusing data-testid attributes everywhere:

expect(screen.getByTestId("submit-btn")).toBeVisible();

Does it work? Yes. But I often see teams dragging huge constants files into their bundles just to hold all those test-id values.

That’s why React Testing Library recommends semantic queries:

  • getByRole → “there’s a button”
  • getByLabelText → “the input is labeled Email”
  • getByText → “the user can read this text”

It forces you to test the UI the way a user interacts with it.
And again - that’s what builds trust.
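For example, against the same LoginForm from earlier - a minimal sketch, assuming the inputs have real associated labels (not just the placeholders used above) and a hypothetical “Forgot your password?” link for the getByText case:

it("finds elements the way a user would", () => {
  render(<LoginForm onSubmit={() => {}} />);

  // Each query targets semantics, not implementation details
  expect(screen.getByRole("button", { name: /submit/i })).toBeVisible();
  expect(screen.getByLabelText(/email/i)).toBeVisible();
  expect(screen.getByText(/forgot your password\?/i)).toBeVisible();
});

And if getByRole can’t find the button, that usually points to a real accessibility problem - which is exactly the kind of failure you want surfaced.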

What (and When) to Mock

This is where many teams lose trust in their tests. Mocking is powerful, but it also comes with great responsibility.

Mock too little → tests hit the network or the database → slow, flaky, unreliable.
Mock too much → you’re not testing reality anymore, just your own fakes.

The rules I follow:

  • External systems and the network → always mock (HTTP calls, databases, the filesystem, etc.)
  • Pure utilities → don’t mock (use them directly; they directly shape your output and expectations)
  • Internal modules / components → be careful here. Keep the balance, or you create a gap between what your tests exercise and how your app actually runs.

Once we had an issue on a project where the whole authService module was mocked:

vi.mock("../authService", () => {
    login: vi.fn().mockResolvedValue({ token: "fake" })
})

The test suite was always passing, but one day we hit a critical production issue with authorization. None of the tests caught it - because they never touched the actual authService. They were testing their own mock, so when the service’s output changed, everything stayed green.

The solution? Don’t mock the service itself. Mock the external source the service depends on. For example, intercepting fetch calls with MSW. You still avoid real network calls, but you keep your real service logic intact.
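Here’s a minimal sketch of that setup with MSW’s v2 API. The /api/login endpoint and the named login export are assumptions based on the mock above; test globals (it, expect, beforeAll) come from Vitest as in the other examples:

import { http, HttpResponse } from "msw";
import { setupServer } from "msw/node";
import { login } from "../authService";

// Intercept at the network boundary instead of mocking the module
const server = setupServer(
  http.post("/api/login", () => HttpResponse.json({ token: "fake" }))
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

it("logs in through the real authService", async () => {
  // The real service logic runs; only the fetch underneath is intercepted
  const result = await login("user@example.com", "password");
  expect(result.token).toBe("fake");
});

Now if authService changes its output shape, the test actually breaks - which is exactly what you want.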

That’s the key: mock at system boundaries, not inside your app’s core.

Unit vs Integration Tests (Do We Still Believe in the Pyramid?)

Remember the “testing pyramid”? Tons of unit tests at the bottom, fewer integration tests in the middle, and a few e2e tests on top.

In React projects, that pyramid often collapsed. We ended up with thousands of tiny unit tests for hooks and buttons… but they didn’t really catch bugs. They miss the whole picture of what’s going on; each one lives in its own small, encapsulated world.

Most failures I’ve seen happen between units:

  • Data not passed correctly.
  • Validation breaking.
  • API responses mishandled.
  • Promises unhandled.

That’s why many developers now lean toward the Testing Trophy:

  • Unit tests for pure logic
  • Integration tests for flows (user fills form → API is called → success message shows; see the sketch after this list)
  • e2e tests for critical paths
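To make that middle layer concrete, here’s roughly what such a flow test looks like. It’s a sketch: SignupForm, the “account created” text, and the endpoint behind it are hypothetical, with the API intercepted at the boundary as in the MSW example above:

it("shows a success message after signup", async () => {
  render(<SignupForm />);

  fireEvent.change(screen.getByLabelText(/email/i), {
    target: { value: "user@example.com" },
  });
  fireEvent.click(screen.getByRole("button", { name: /sign up/i }));

  // One assertion spans the whole flow: form → request → UI feedback
  expect(await screen.findByText(/account created/i)).toBeVisible();
});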

I won’t say the pyramid is dead - but in practice, an integration-first mindset has given me way more trust than an ocean of isolated unit tests.

Snapshot Tests: A False Safety Net

Snapshots felt like magic at first. One line of code, instant coverage:

expect(container).toMatchSnapshot();

In reality:

  • Snapshots broke constantly on minor markup changes.
  • Diffs were huge and unreadable.
  • Developers started clicking “update snapshot” without looking.

At that point, the tests weren’t protecting anything. They were just busywork.

The truth is simple: snapshots don’t prove behavior. They don’t tell you if the app actually works for the user.

You’ll always get more value from writing explicit assertions:

expect(screen.getByRole("heading", { name: /welcome/i })).toBeVisible();

Determinism vs Flakiness

I’ve watched test suites collapse under flakiness. Once developers stop trusting the suite, they stop running it. And from there, it dies.

The culprits are always the same:

  • Real timers (setTimeout, setInterval)
  • Random values (UUIDs, Math.random)
  • Async race conditions
  • Overuse of waitFor with arbitrary timeouts

The cure is determinism (a combined sketch follows this list):

  • Freeze time (vi.setSystemTime).
  • Mock randomness (vi.mock("uuid")).
  • Use fake timers.
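A minimal Vitest sketch combining all three. startSessionTimer is a hypothetical helper built on setTimeout, just to give the timers something to drive:

// Mock randomness: every v4() call returns a known value
vi.mock("uuid", () => ({
  v4: () => "00000000-0000-0000-0000-000000000000",
}));

beforeEach(() => {
  // Freeze both timer scheduling and the system clock
  vi.useFakeTimers();
  vi.setSystemTime(new Date("2024-01-01T00:00:00Z"));
});

afterEach(() => {
  vi.useRealTimers();
});

it("expires the session after 30 minutes", () => {
  const onExpire = vi.fn();
  startSessionTimer(onExpire); // hypothetical: schedules expiry via setTimeout

  vi.advanceTimersByTime(30 * 60 * 1000);
  expect(onExpire).toHaveBeenCalled();
});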

It’s not about making tests “less real.” It’s about making them reliable. If a test sometimes passes and sometimes fails, it’s worse than no test at all.

Flaky Tests: Detect and Eliminate

Once flaky tests creep in, trust goes out the window. That’s why finding and fixing them fast is critical.

Tools help:

  • Jest → --detectOpenHandles to find async handles keeping the process alive
  • Vitest → the slowTestThreshold option to surface slow tests (config sketch below)
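On Vitest, that threshold lives in the config (300 ms is the default; pick what fits your suite):

// vitest.config.js
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Tests slower than this (in ms) get flagged in the reporter
    slowTestThreshold: 300,
  },
});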

But prevention is even better:

  • Avoid leaking state between tests.
  • Don’t rely on arbitrary timeouts.
  • Always wait for explicit conditions (like findByRole - see the example below).
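For example, instead of polling with an arbitrary timeout:

// Brittle: passes or fails depending on machine speed
await waitFor(() => expect(screen.getByText(/saved/i)).toBeVisible(), {
  timeout: 3000,
});

// Deterministic: resolves as soon as the element appears
expect(await screen.findByRole("alert")).toHaveTextContent(/saved/i);

(The alert role here is an assumption - use whatever role or text your success message actually renders with.)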

When I see a flaky test in CI, I don’t ignore it. I either quarantine it or fix it immediately. Because the longer a flaky test lives in main, the faster your entire suite loses credibility.

Final Thoughts

The history of frontend testing is a history of false confidence:

  • Coverage numbers that didn’t mean safety.
  • Snapshots that didn’t mean stability.
  • Mocks that didn’t mean reality.

The lesson? Tests aren’t about numbers. They’re about trust.

A good test answers two questions:

  • If it fails, do I know something important is broken?
  • If it passes, do I trust the feature works?

If the answer is “no,” the test is noise.

That’s what I remind myself every time I write tests. Because at the end of the day, I don’t want more tests. I want a suite my team actually believes in.
