In my previous job, I was tasked with building test automation from scratch for a SaaS startup. We were a 10-person team working out of a shared office. Over the next four years the company doubled in size each year and was eventually acquired by a Fortune 100 company. The test automation story went from zero to a full test automation platform: a test results database integrated with JIRA, auto-scaling test runners, and an interface where any developer could run their tests on demand, on any branch. This was more than a job. It was a passion project, and I'm grateful for that experience.
I am proud of what I achieved in those four years, initially solo and later with a small test automation team that was eventually folded into the dev team as demand for tests increased. We were running 30,000 tests per month, and yet the automation infrastructure was still just getting started.
Two features in particular taught me what developers actually want from test tooling.
The first was the screenshot comparison tool. The MVP was written in two days, and yet it became one of our most valuable features. It was a single endpoint for uploading a screenshot, triggered in tests by a single function call. Each screenshot was given a code in the test. The first upload with that code became the reference image. Subsequent uploads were compared to the reference and given a score, and a new image was generated showing the diff in two different shades. Later I built an entire UI around it, including pan-and-zoom so you could compare the two images, approve or reject them, or leave comments against that image code.
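The scoring step can be sketched in a few lines. This is a hypothetical reconstruction, not the production code: it treats each image as a flat list of grayscale pixel values and flags any pixel pair whose difference exceeds a threshold, which is enough both to score the comparison and to drive a two-shade diff image.

```python
def diff_score(ref, candidate, threshold=16):
    """Compare two equally sized grayscale images (flat lists of 0-255 values).

    Returns (score, diff): score is the fraction of matching pixels,
    and diff marks the changed pixels so a two-shade diff image can be drawn.
    """
    assert len(ref) == len(candidate), "images must be the same size"
    # A pixel "changed" if it moved by more than the threshold
    diff = [abs(a - b) > threshold for a, b in zip(ref, candidate)]
    changed = sum(diff)
    score = 1 - changed / len(ref)
    return score, diff
```

The threshold absorbs minor anti-aliasing noise; a real implementation would tune it per project.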
The second was the JIRA integration. Knowing when a test failed for the same reason that had already been captured in a ticket was invaluable. And if there wasn't a ticket, you could click a button and it would open the "create ticket" flow in JIRA with the description already pre-populated with the error message, some useful context, and a link to the original test result. Subsequent failures of the same type would search for the error message and then show that ticket against the test result. You could also mark it as a "known failure", so you would know not to waste time diagnosing it again. If the same test then failed for any other reason, you'd know it was a new failure mode.
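The matching logic behind "known failures" is conceptually simple. Here is a hedged sketch with hypothetical names (`match_known_failure` and the signature-to-ticket mapping are my own, not the original system's): a failure's error message is checked against signatures of previously ticketed failures.

```python
def match_known_failure(error_message, known_failures):
    """Return the ticket linked to a previously seen error, if any.

    known_failures maps an error signature (a stable substring of the
    message) to a ticket key, e.g. {"TimeoutError: page.goto": "QA-123"}.
    """
    for signature, ticket in known_failures.items():
        if signature in error_message:
            return ticket
    return None  # no match: this is a new failure mode, offer "create ticket"
```

If the same test later fails with a message that matches no known signature, the `None` branch tells you it's a new failure mode worth diagnosing.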
What both of these had in common was a simple idea: test results are data, and they deserve better than a CI log.
## Automated Future
Today, I'm taking what I learned from those four years and building my own platform. It's called Automated Future, and the core idea is straightforward: CI is for builds. AF is for tests.
Most projects start out with CI running your tests. That's fine for a short while but it doesn't scale. Your results end up buried in logs that expire, terminal output that's already gone, or artifacts scattered across half a dozen services. There's no single place to see your test health over time. No trends. No search. No history.
Automated Future gives your test results a dedicated home.
## What it does today
AF is a test results platform with a dashboard, a CLI, and a public API, accessible via API Keys. Invite your team and set up projects for each of your apps. It works with any test framework that produces JUnit XML output, which covers Jest, Vitest, pytest, JUnit, RSpec, Go test, NUnit, xUnit, and more. It integrates into any CI pipeline — GitHub Actions, GitLab CI, Jenkins, whatever you use.
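If you haven't seen it, JUnit XML is the de facto interchange format that nearly every test framework can emit (for example via Jest's `jest-junit` reporter or pytest's `--junitxml` flag). A minimal report looks something like this (names and values are illustrative):

```xml
<testsuite name="checkout" tests="2" failures="1" time="3.21">
  <testcase classname="checkout.cart" name="adds an item" time="0.41"/>
  <testcase classname="checkout.cart" name="applies a coupon" time="2.80">
    <failure message="expected total to be 9.99">stack trace here</failure>
  </testcase>
</testsuite>
```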
The fastest way to get results into AF is with the CLI. Install it, then wrap your existing test command:
```shell
af run -- npm test
```
That single command creates a test run, executes your tests, auto-discovers the JUnit XML files, parses and uploads the results, and exits with your test command's exit code so your pipeline still fails when it should. You don't change how your tests work. You just prefix the command.
Once results are flowing in, the dashboard gives you a real view of your test health:
- Pass/fail charts and trend graphs over time, so you can see at a glance whether things are getting better or worse
- A searchable, sortable history of every test run and result across all your projects
- A custom query language for filtering your data — JIRA users will find the syntax familiar (e.g. `status = Failed AND duration > 5000`, `started_at > -7d`)
- Artifact storage with in-browser previews — upload screenshots, videos, logs, or JSON to any test result and view them directly in the dashboard with syntax highlighting, image zoom, and video playback
## Where it's going
The features I described earlier — the screenshot comparison, the JIRA integration, the known failure tracking — those are all on the roadmap. But rather than building them in an opinionated way, I want to build them with input from developers who are actually using the platform. The problems I solved at my previous company aren't unique. Every team doing test automation hits the same pain points. I'd rather build solutions shaped by real feedback than assumptions.
## Try it out
Automated Future is live and free to get started with. You can be up and running in about five minutes:
- Sign up at dashboard.automatedfuture.co
- Create a project and grab your API key
- Install the CLI in your pipeline
- Wrap your test command with `af run`
- Open the dashboard and see your results
The documentation has step-by-step guides for every major CI system and test framework.
If you're interested in helping to shape where this goes, I'd love to hear from you. Try it out, tell me what's missing, tell me what matters to you. Let's build a better home for test results together.
