
Ahmad Sharabati

Stop flying blind on flaky tests — pytest-cloudreport gives you HTML reports and cross-run history

If you've ever stared at a CI failure that passes locally, or watched the same test flap for the third time this week with no idea how often it's actually broken — this post is for you.

The problem

pytest's built-in output is great for a single run. But it tells you nothing across runs:

  • Which tests are consistently slow?
  • Is this failure new or has it been flapping for two weeks?
  • Did the last deploy make things better or worse?

You end up either ignoring flakiness until it becomes critical, or building custom tooling you don't have time to maintain.

What pytest-cloudreport does

It's a pytest plugin with two modes:

1. Local HTML report — zero config, zero account needed.

```bash
pip install pytest-cloudreport
pytest --cloudreport-local
```

After your test run you get a self-contained cloudreport.html with a full breakdown: pass/fail/skip counts, duration per test, error output, environment info. Open it in a browser, share it with a teammate, attach it to a ticket.

2. Cloud dashboard — cross-run history, flaky-test detection, team access.

Set an API key and every run uploads automatically:

```bash
export PYTEST_CLOUD_API_KEY=your_key_here
pytest
```

No --cloudreport flag needed — the presence of the key enables it. The upload runs in a background thread with a 5-second timeout, so it never slows down or stalls your CI pipeline.

Flaky test detection

This was the original reason I built it. The dashboard tracks pass/fail across runs and surfaces tests with inconsistent results — not just "it failed today" but "this test has failed 4 out of the last 20 runs."

It also integrates cleanly with pytest-rerunfailures: intermediate retries are ignored and only the final outcome is recorded, so a test that passes on the third attempt counts as one flaky event, not two failures.
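The "4 out of the last 20 runs" idea boils down to a windowed failure rate over final per-run outcomes. Here's a hedged sketch of that heuristic (my own illustration of the concept, not the dashboard's actual scoring code), assuming rerun attempts have already been collapsed to one final result per run:

```python
from collections import deque

def flaky_score(outcomes, window=20):
    """Fraction of the last `window` runs in which the test failed.
    `outcomes` is an ordered history of final per-run results,
    e.g. "passed" / "failed" (rerun attempts already collapsed)."""
    recent = deque(outcomes, maxlen=window)
    if not recent:
        return 0.0
    return sum(o == "failed" for o in recent) / len(recent)

def is_flaky(outcomes, window=20):
    """A test is flaky if it both passed and failed within the window;
    a test that always fails is broken, not flaky."""
    recent = set(deque(outcomes, maxlen=window))
    return "passed" in recent and "failed" in recent
```

So a history of 16 passes and 4 failures scores 0.2 and is flagged flaky, while a test that fails every run is reported as plain broken.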

CI setup (GitHub Actions example)

```yaml
- name: Run tests
  env:
    PYTEST_CLOUD_API_KEY: ${{ secrets.PYTEST_CLOUD_API_KEY }}
  run: pytest --cloudreport-local
```

That's it. You get both the local HTML artifact and the cloud upload in one command.

To attach the report as a CI artifact:

```yaml
- uses: actions/upload-artifact@v4
  if: always()
  with:
    name: test-report
    path: cloudreport.html
```

pytest-xdist support

If you run tests in parallel with pytest-xdist, the plugin automatically detects worker processes and only uploads from the controller. No configuration needed, no duplicate uploads.
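The standard way to tell a controller from a worker under pytest-xdist is that worker processes carry a `workerinput` attribute on their config while the controller does not. A sketch of that check (the function names here are my own, not pytest-cloudreport's):

```python
def is_xdist_worker(config):
    """pytest-xdist sets `workerinput` on each worker's config;
    the controller (and a plain non-distributed run) has no such
    attribute, so this distinguishes the two."""
    return hasattr(config, "workerinput")

def should_upload(config):
    # Upload exactly once: from the controller, or from an
    # ordinary single-process pytest run.
    return not is_xdist_worker(config)
```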

Escape hatch

For teams where the cloud upload is opt-in only:

```bash
CLOUDREPORT_DISABLE=1 pytest
```

This hard-disables the plugin without uninstalling it — useful if you want it enabled by default in CI but off for local dev.
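A kill switch like this is typically just an environment check at plugin startup. A minimal sketch, assuming `"1"` is the documented disabling value (other truthy spellings are not confirmed by the post):

```python
import os

def cloudreport_enabled(env=None):
    """Return False when the CLOUDREPORT_DISABLE=1 kill switch is set.
    Any other value, or the variable being unset, leaves the plugin on."""
    env = os.environ if env is None else env
    return env.get("CLOUDREPORT_DISABLE") != "1"
```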

Environment labelling

Tag runs by environment so you can compare CI vs staging vs production results:

```ini
# pytest.ini
[pytest]
cloudreport_environment = staging
```

Or via env var: PYTEST_CLOUD_ENV=staging.
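With two sources for the same setting, precedence matters. The post doesn't say which one wins, so this sketch assumes the common convention that the environment variable overrides the ini value:

```python
import os

def resolve_environment(ini_value=None, env=None):
    """Pick the run's environment label. Precedence here is an
    assumption: PYTEST_CLOUD_ENV, if set, overrides pytest.ini;
    returns None when neither source provides a label."""
    env = os.environ if env is None else env
    return env.get("PYTEST_CLOUD_ENV") or ini_value
```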

Getting started

```bash
pip install pytest-cloudreport

# Local report only
pytest --cloudreport-local

# With cross-run history (local SQLite)
pytest --cloudreport-local --accumulate

# With cloud dashboard
export PYTEST_CLOUD_API_KEY=your_key
pytest
```

The free tier covers 1 project and 5,000 tests per day — enough for most projects, no credit card required.

Source: github.com/ahmad212o/pytest-cloudreport
PyPI: pytest-cloudreport
Dashboard: cloudreport.dev

Feedback welcome — especially on the flaky-test heuristic and any edge cases you hit with unusual pytest setups.
