“Regression testing without tests” sounds like marketing nonsense. If someone told me that a year ago, I’d probably roll my eyes and move on. So let’s unpack what this actually means, using a real workflow.
Rentgen introduces what I call regression testing out of the box. You don’t need a predefined test suite. You don’t need an OpenAPI file. You don’t need CI configured. You don’t even need existing assertions. All you need is one working request.
The idea is conceptually similar to how Diffy at Twitter compared responses across environments. But here you don’t need traffic mirroring or complex infrastructure. You import a real cURL request — ideally something you already know works in production — and run it in Rentgen.
If the request is successful, you map the fields. That means you tell Rentgen what each field represents: an ID, a timestamp, a numeric value, an enum, something dynamic, something stable. This step is critical because it gives context to the engine. After that, you press “Generate and Run”.
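To make the mapping step concrete, here is a minimal sketch of what classifying fields buys a comparison engine. The field names, categories, and helper function are all hypothetical illustrations — not Rentgen’s actual data model — but the principle is the same: once you know which fields are dynamic, you can exclude them from equality checks.

```python
# Hypothetical field map: each response field gets a semantic category.
# (Illustrative only -- not Rentgen's real schema.)
FIELD_MAP = {
    "id":         "id",         # unique per record: compare shape, not value
    "created_at": "timestamp",  # dynamic: expected to differ between runs
    "status":     "enum",       # stable: must come from a known set
    "amount":     "number",     # stable: must match numerically
}

# Categories whose values are allowed to change between runs/environments.
DYNAMIC = {"id", "timestamp"}

def comparable_fields(response: dict) -> dict:
    """Keep only the fields whose values should be identical across runs."""
    return {
        k: v for k, v in response.items()
        if FIELD_MAP.get(k) not in DYNAMIC
    }

prod = {"id": "a1", "created_at": "2024-01-01T10:00:00Z",
        "status": "paid", "amount": 99.5}
test = {"id": "b2", "created_at": "2024-06-01T12:30:00Z",
        "status": "paid", "amount": 99.5}

# Different IDs and timestamps, but the stable fields agree.
assert comparable_fields(prod) == comparable_fields(test)
```

Without that context, a naive diff would flag every ID and timestamp as a failure; with it, only meaningful drift surfaces.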
At this point, Rentgen automatically generates dozens — sometimes hundreds — of structured tests from that single request. It mutates inputs, checks structural consistency, validates error handling, and evaluates response behavior. You didn’t write a single test case manually. There’s no collection to maintain. No assertion scripts.
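The fan-out from one request to many generated checks can be sketched roughly like this. The mutation strategies below are my own simplified assumptions, not Rentgen’s actual engine — a real implementation would be type-aware and far richer — but they show how a single known-good request multiplies into structured cases.

```python
# Sketch: mutate one parameter at a time to generate test cases
# from a single known-good request. Strategies are illustrative.
import itertools

base_params = {"user_id": "42", "limit": "10", "currency": "USD"}

MUTATIONS = {
    "missing":    lambda v: None,       # drop the parameter entirely
    "empty":      lambda v: "",         # send an empty value
    "huge":       lambda v: v * 200,    # oversized input
    "wrong_type": lambda v: "[1,2,3]",  # structurally wrong value
}

def generate_cases(params: dict):
    """Yield (description, mutated_params) pairs, one mutation per case."""
    for field, (name, mutate) in itertools.product(params, MUTATIONS.items()):
        mutated = dict(params)
        value = mutate(params[field])
        if value is None:
            del mutated[field]
        else:
            mutated[field] = value
        yield f"{field}/{name}", mutated

cases = list(generate_cases(base_params))
print(len(cases))  # 3 fields x 4 mutations = 12 generated cases
```

Scale the number of fields and strategies up and you quickly get the “dozens — sometimes hundreds” of tests from one request.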
When the run finishes, you press “Select for Compare”.
Now you switch the environment. For example, from PROD to TEST.
You send the exact same request again, map the fields again (because different environments may return slightly different values), and press “Generate and Run” once more.
After the second run is complete, you click “Compare with Selected”.
This is where things get interesting.
If everything behaves consistently, you get a clean green result.
But if something changed — and it shouldn’t have — Rentgen highlights it clearly. Not as raw JSON noise. Not as a giant diff dump. It shows what changed, where it changed, and why it might be a bug.
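The compare step boils down to a diff that ignores known-dynamic fields and reports everything else in plain terms. Here is a minimal sketch of that idea — function names and output format are my own, purely illustrative, and say nothing about how Rentgen implements it internally.

```python
# Sketch: diff two environment responses, reporting only fields
# that were mapped as stable. (Illustrative, not Rentgen's API.)

def diff_stable_fields(baseline: dict, candidate: dict, dynamic: set) -> list:
    """Return human-readable drift findings, skipping known-dynamic fields."""
    findings = []
    for key in sorted(set(baseline) | set(candidate)):
        if key in dynamic:
            continue  # ids, timestamps, etc. are allowed to differ
        if key not in candidate:
            findings.append(f"{key}: present in baseline, missing in candidate")
        elif key not in baseline:
            findings.append(f"{key}: new field appeared in candidate")
        elif baseline[key] != candidate[key]:
            findings.append(f"{key}: {baseline[key]!r} -> {candidate[key]!r}")
    return findings

prod = {"id": "a1", "ts": "2024-01-01", "status": "active", "retries": 3}
test = {"id": "b9", "ts": "2024-06-15", "status": "actve",  "retries": 3}

for finding in diff_stable_fields(prod, test, dynamic={"id", "ts"}):
    print(finding)  # -> status: 'active' -> 'actve'
```

The payoff is exactly the behavior described above: the IDs and timestamps that legitimately differ between PROD and TEST produce no noise, while the one real discrepancy is surfaced with its location and old and new values.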
Traditional automated regression requires predefined expectations. You have to decide up front what is correct. But in real life, especially early in a project or during migration, you often don’t have that luxury. You just want to know: did something drift between environments? Did the new deployment introduce structural inconsistency? Did error handling degrade?
That’s what I mean by Automation Before Automation.
This does not replace your test suite. It doesn’t compete with Postman, Playwright, or your CI pipeline. It sits one layer earlier. It answers a diagnostic question before you invest time in writing and maintaining formal tests.
You can still inspect full drift if you want. There’s a toggle to see all differences, including raw structural changes. Nothing is hidden. But by default, Rentgen filters noise and surfaces potential issues that matter.
The important part is this: you can detect API bugs even when you have zero tests written.
No framework setup. No CI integration. No collection maintenance.
Just one real request.
That’s regression out of the box.
Find out more: https://rentgen.io


Top comments (1)
"Regression testing without tests" sounds like it shouldn't work, but the approach is actually clever — use a known-good response as your baseline and diff against it. It's the same principle as golden file testing, just applied to APIs.
This is especially relevant in an AI-assisted coding world. If AI is generating or modifying API endpoints, you need a verification layer that doesn't depend on someone having written tests first. Most real codebases have gaps in test coverage. A tool that can catch regressions from the existing behavior — even without explicit assertions — fills a real gap.
The "one working request" starting point is smart because it lowers the bar to actually using it. The biggest reason teams don't test APIs is the setup cost. If you can skip that and still catch regressions, the cost-benefit math changes completely.