Ilya Ploskovitov

My AI Bot Argues With My Swagger Schema (And Why That's Good)

In my last posts, I talked a lot about UI tests. But the real meat (and the real pain) of automation often lies with the API.

API tests need to be fast, stable, and cover 100% of your endpoints. "Simple," you say. "Just take the Swagger schema and run requests against it."

Oh, if only it were that simple.

When I started adding API test automation to Debuggo, I realized the whole process is a series of traps. Here is how I'm solving them.

Step 1: Parsing the Schema (The Deceptively Easy Start)

It all starts simply. I implemented a feature:

1. You upload a Swagger schema (only Swagger for now).
2. Debuggo parses it and automatically creates dozens of test cases:
   - [Positive] For every endpoint.
   - [Negative] For every required field.
   - [Negative] For every data type (field validation).

This already saves hours of manual work "dreaming up" negative scenarios. After this, you can pick any generated test case (e.g., [Negative] Create User with invalid email) and ask Debuggo: "Generate the steps for this."
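
Debuggo's internals aren't public here, but a minimal sketch of this kind of generation loop might look like the following. It assumes an OpenAPI 3 style schema already loaded as a dict; the `swagger.json` filename and the case structure are my illustration, not Debuggo's actual format:

```python
import json

def generate_test_cases(schema: dict) -> list[dict]:
    """Derive positive and negative test case stubs from an OpenAPI-style schema."""
    cases = []
    for path, methods in schema.get("paths", {}).items():
        for method, op in methods.items():
            if method not in {"get", "post", "put", "patch", "delete"}:
                continue  # skip path-level keys like "parameters"

            # One positive case per endpoint.
            cases.append({"kind": "positive",
                          "name": f"[Positive] {method.upper()} {path}"})

            # Negative cases come from the JSON request body schema, if any.
            body = (op.get("requestBody", {})
                      .get("content", {})
                      .get("application/json", {})
                      .get("schema", {}))
            for field in body.get("required", []):
                cases.append({"kind": "negative",
                              "name": f"[Negative] {method.upper()} {path} without required '{field}'"})
            for field, spec in body.get("properties", {}).items():
                cases.append({"kind": "negative",
                              "name": f"[Negative] {method.upper()} {path} with wrong type for '{field}' "
                                      f"(expected {spec.get('type', 'unknown')})"})
    return cases

with open("swagger.json") as f:
    for case in generate_test_cases(json.load(f)):
        print(case["name"])
```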

Step 2: Creating Steps (The First Challenge: "Smart Placeholders")

And here the first real problem begins: how does an AI know what a "bad email" is?

The Bad Solution: Hardcoding the knowledge that bad-email@test.com is a bad email into the AI. This is brittle and stupid.

The Debuggo Solution: Smart Placeholders.

When Debuggo generates steps for a negative test, it doesn't insert a value. It inserts a placeholder.

For example, for a POST /users with an invalid email, it will generate a step with this body:

{"name": "test-user", "email": "%invalid_email_format%"}
Enter fullscreen mode Exit fullscreen mode

Then, at the moment of execution, Debuggo itself (not the AI) expands this placeholder into real, generated data that is 100% invalid. The same goes for dropdowns, selects, etc. — the AI doesn't guess the selector, it inserts a placeholder, and Debuggo handles it.
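
To make that division of labor concrete, here is a minimal sketch of such an expansion step. The placeholder names match the post, but the generator registry and the regex are my illustration, not Debuggo's actual code:

```python
import random
import re
import string

def random_string(n: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=n))

# Each placeholder maps to a generator that produces deliberately invalid
# (or rule-conforming) data at execution time -- the AI never sees these values.
GENERATORS = {
    "invalid_email_format": lambda: random_string() + "-at-nowhere",  # no '@' at all
    "string_without_spaces": lambda: random_string(12),
}

def expand_placeholders(body: str) -> str:
    """Replace every %name% token in a request body with freshly generated data."""
    return re.sub(r"%(\w+)%", lambda m: GENERATORS[m.group(1)](), body)

print(expand_placeholders('{"name": "test-user", "email": "%invalid_email_format%"}'))
```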

Step 3: The First Run (The Second Challenge: "The Schema Lies")

So, we have our steps with placeholders. We run the test. And it fails.

The Scenario: The schema says POST /users returns 200 OK. The application actually returned 201 Created.

A traditional auto-test: Will just fail, giving you a "flaky" test.

The Debuggo Solution: A Dialogue with the User.

Debuggo sees the conflict: "Expected 200 from the schema, but got 201 from the app."

It doesn't just fail. It pauses the test and asks you:

"Hey, the schema and the real response don't match. Do you want to accept 201 as the correct response for this test?"

You, the user, confirm. Debuggo fixes the test case. You just fixed a brittle test without writing a single line of code.
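
A minimal sketch of this pause-and-ask flow, using Python's `requests` library and a plain `input()` prompt as a stand-in for Debuggo's UI dialogue:

```python
import requests  # assumed HTTP client; any would do

def run_step(method: str, url: str, body: dict, expected_status: int) -> int:
    """Execute one API step; on a schema/reality mismatch, ask instead of failing."""
    actual = requests.request(method, url, json=body).status_code
    if actual == expected_status:
        return expected_status

    answer = input(
        f"Schema says {expected_status}, app returned {actual}. "
        f"Accept {actual} as correct for this test? [y/N] "
    )
    if answer.strip().lower() == "y":
        return actual  # persist the new expectation on the test case
    raise AssertionError(f"Expected {expected_status}, got {actual}")
```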

Step 4: Adaptation (The Third Challenge: "Secret" Business Rules)

This is the coolest feature I've implemented.

The Scenario: The app returns a 400 Bad Request with the response body: {"error": "name cannot contain spaces"}.

A traditional auto-test: Will fail, and you have to manually analyze the logs to find the hidden rule.

The Debuggo Solution: Adaptation on the Fly.

Debuggo doesn't just see the 400 error. It reads the response body and sees the rule: "name cannot contain spaces."

It automatically changes the placeholder for this field. It creates a new one — %string_without_spaces% — and re-runs the test by itself with the new, correct value.

The AI is learning the real business rules of your app, even if they aren't documented in Swagger.
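
A minimal sketch of that adaptation loop follows; the error-fragment table, the field extraction, and the `%any_string%` placeholder are my simplification for illustration, not Debuggo's actual rules engine:

```python
# Hypothetical rule table: an error-message fragment maps to the placeholder
# that satisfies the business rule it reveals.
ERROR_TO_PLACEHOLDER = {
    "cannot contain spaces": "%string_without_spaces%",
}

def adapt_and_retry(step: dict, response: dict) -> dict | None:
    """If a 400 response reveals an undocumented rule, swap the placeholder and re-run."""
    if response["status"] != 400:
        return None
    message = response["body"].get("error", "")
    for fragment, placeholder in ERROR_TO_PLACEHOLDER.items():
        if fragment in message:
            field = message.split()[0]          # e.g. "name cannot contain spaces" -> "name"
            step["body"][field] = placeholder   # replace the old placeholder
            return step                         # caller re-executes with the new value
    return None

# Example: the scenario from the post.
step = {"body": {"name": "%any_string%", "email": "user@test.com"}}
response = {"status": 400, "body": {"error": "name cannot contain spaces"}}
print(adapt_and_retry(step, response))  # name is now %string_without_spaces%
```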

What's the takeaway?

I'm not just building a "Swagger parser." I'm building an assistant that:
* Generates hundreds of positive/negative test cases.
* Uses "Smart Placeholders" instead of hardcoded values.
* Identifies conflicts between the schema and reality and helps you fix them.
* Learns from the application's errors to make tests smarter.

This is a hellishly complex thing to implement, and I'm sure it's still raw.

That's why I need your help. If you have a "dirty," "old," or "incomplete" Swagger schema—you are my perfect beta tester.

Top comments (1)

Grant Wakes

Love this angle of making the bot argue with the schema instead of blindly trusting it—turning “flaky tests” into a feedback loop is such a smart move. The smart placeholders + on-the-fly adaptation from error messages feels like how API testing should have worked all along.