Like most teams using GitHub Actions, we’d gotten used to the ritual: push code, wait for CI, see a red build, re-run it, hope it passes this time. “It’s probably flaky” became the default response to any test failure — including real ones.
We decided to actually measure the damage. Over 30 days on a single repo:
- 842 CI runs → 117 failures (13.9% failure rate)
- 31.5 developer hours spent investigating and re-running
- $426 in CI compute burned on re-runs that shouldn’t have been needed
- 1 regression shipped to production because a real failure was dismissed as “just flaky”
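If you want a rough version of the same estimate for your own repo, the arithmetic is simple. A minimal sketch — the default per-failure costs below are illustrative assumptions, not measurements; plug in your own numbers:

```python
def ci_waste(total_runs: int, failures: int,
             avg_investigation_min: float = 16.0,
             cost_per_failed_run: float = 3.60) -> dict:
    """Back-of-the-envelope CI waste estimate from a repo's run history.

    avg_investigation_min: assumed developer time spent per red build.
    cost_per_failed_run: assumed compute cost of the re-run(s) it triggers.
    """
    return {
        "failure_rate": failures / total_runs,
        "dev_hours": failures * avg_investigation_min / 60,
        "rerun_cost": failures * cost_per_failed_run,
    }

print(ci_waste(842, 117))
```

Even with crude per-failure assumptions, the total tends to be eye-opening once you multiply by a month of runs.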
The worst part? Nobody could tell us which tests were flaky. We had a vague sense — “that login test is weird” — but no actual inventory. And without an inventory, you can’t fix what you can’t see.
So we built Retestees — a tool that connects to your GitHub Actions in 2 minutes and gives you a CI Waste Report showing:
- Which tests fail repeatedly without code changes (true flaky tests)
- How much developer time and CI cost each one wastes
- Which workflows are the least stable, ranked by failure rate
- A clear priority list so you fix the most expensive flaky tests first
No code changes. No config files. No test framework plugins. You connect your repo, and we analyze your existing CI history.
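For the curious: the core signal behind "fails without code changes" can be sketched in a few lines. If a workflow both passed and failed on the same commit SHA, it ran twice against identical code — that contradiction is the strongest flakiness evidence you can get from history alone. A simplified sketch (not Retestees' actual pipeline), using the tuple shape you'd get from GitHub's `GET /repos/{owner}/{repo}/actions/runs` endpoint:

```python
from collections import defaultdict

def find_flaky(runs):
    """Flag workflows that both passed and failed on the same commit.

    runs: iterable of (workflow_name, head_sha, conclusion) tuples,
    e.g. extracted from the GitHub Actions workflow-runs API.
    """
    outcomes = defaultdict(set)  # (workflow, sha) -> set of conclusions
    for workflow, sha, conclusion in runs:
        outcomes[(workflow, sha)].add(conclusion)
    # Mixed success + failure on one SHA => flaky candidate.
    return sorted({wf for (wf, sha), seen in outcomes.items()
                   if {"success", "failure"} <= seen})

runs = [
    ("ci", "abc123", "success"),
    ("ci", "abc123", "failure"),   # same SHA, different outcome
    ("deploy", "def456", "success"),
]
print(find_flaky(runs))  # → ['ci']
```

A real implementation also has to handle per-test granularity, cancelled runs, and retries of genuinely broken commits, but the same-SHA contradiction is the foundation.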
We’re looking for teams to try it. If you’re on GitHub Actions and tired of re-running builds, we’ll generate a CI Waste Report for your repo for free — no credit card, no commitment. You’ll see exactly where your CI time and money are going.
(We also have a $29/mo beta plan if you want ongoing monitoring and alerts. Details on the site.)
Would love feedback from anyone dealing with this. What’s your worst flaky test story?