
Khabib

YATL — API testing in a new way

Today I want to share a fresh look at API testing automation. Writing integration tests for modern APIs has become unnecessarily cumbersome: even with simple tools, we drown in boilerplate such as session setup, exception handling, and serialization/deserialization. My idea is to turn tests into pure data, making them declarative and understandable to any member of the team.

If you have ever done API testing (especially in a microservice architecture), you have probably run into this familiar list of problems:

  1. You need to write code — even for the simplest GET request, you have to import libraries, configure the client, and catch timeouts.

  2. High entry threshold — to write a test, you need to know Python (Java, Go, JS) at a level sufficient for debugging asynchronous calls and working with JSON schemas.

  3. Complex dependencies — when one request uses data from another's response (for example, POST /login → token, then GET /users/me), the code turns into a tangle of callbacks or async/await.

  4. It’s hard to maintain — after a couple of months, the test scripts become “noodles” that even the authors are afraid to touch.

And most importantly, not everyone in the team is a developer. QA engineers, analysts, technical staff, and sometimes product managers want to test the API, but they don’t want (and shouldn’t) learn all the intricacies of imperative programming.

YATL (Yet Another Testing Language)

YATL is a DSL (domain-specific language) for API testing, written in Python, that uses YAML as its test-description format. It is not just another framework for developers but a tool that democratizes API testing, making it accessible to the whole team.

If you know HTTP and YAML, you know YATL.

Instead of writing imperative code, you describe tests declaratively in YAML files. Here is what the simplest test for a GET request looks like:

name: ping
base_url: google.com

steps:
- name: access_test
  request:
    method: GET
  expect:
    status: 200

- name: failed_test
  request:
    method: GET
    url: /not_found
  expect:
    status: 404

There is no hidden "magic": only the request (method, URL) and the expected response (status, partial body check). This is much closer to a specification than to a script. Each file with the .test.yaml extension is a test scenario consisting of several steps.
And here’s what this test would look like in Java:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.io.IOException;

public class HttpTest {

    public static void main(String[] args) {
        try {
            test();
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void test() throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://google.com"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        assert response.statusCode() == 200 : "Expected 200, got " + response.statusCode();

        HttpRequest notFoundRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://google.com/not_found"))
                .GET()
                .build();

        HttpResponse<String> notFoundResponse = client.send(notFoundRequest, HttpResponse.BodyHandlers.ofString());
        assert notFoundResponse.statusCode() == 404 : "Expected 404, got " + notFoundResponse.statusCode();

        System.out.println("All tests passed.");
    }
}

The first variant is clearly friendlier and requires far less preparation.

Key features

Now let's walk through the project's key features.

Support for all data formats. Out of the box, YATL understands most of the content types you'll encounter in real life:

  1. JSON — automatically sets the Content-Type: application/json
  2. XML — for SOAP and other XML APIs
  3. Form-data — application/x-www-form-urlencoded
  4. Multipart files — uploading files
  5. Plain text — text/plain

Example with JSON:

request:
  method: POST
  url: /users
  body:
    json:
      name: "John Doe"
      email: "doe@example.com"
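The other body types presumably follow the same shape as the json key. A hypothetical sketch for a form-urlencoded body, extrapolated from the JSON example; the exact key names may change before the release:

```yaml
# Hypothetical syntax, mirrored from the json example above;
# the real key names may differ in the released version.
request:
  method: POST
  url: /login
  body:
    form:
      username: "alice"
      password: "secret"
```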

Data extraction and templating

You can extract any fragment of a response and use it in subsequent requests through powerful Jinja2 templates. Dot notation is used to access nested JSON fields (for example, user.info.name), which keeps the syntax clean and concise. Imagine a chain:

steps:
  - name: user_creation
    request:
      method: POST
      url: /users
      body:
        json:
          name: "Alice"
    expect:
      status: 200
    extract:
      user_id: "response.id"  # Extracting the ID from the response

  - name: getting_user
    request:
      method: GET
      url: /users/{{ user_id }}   # Using the extracted ID
    expect:
      status: 200
      json:
        user.info.name: "Alice" # name verification, using dot notation
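The mechanics behind this are simple to picture: walk the dot-separated path through the parsed JSON, then substitute the result into the next request. A minimal Python sketch for illustration only (not YATL's actual code; YATL uses full Jinja2 templates, this uses plain string replacement):

```python
from functools import reduce

def extract(response, path):
    # "response.id"-style paths: walk nested dicts key by key.
    keys = path.split(".")
    if keys[0] == "response":   # a leading "response" refers to the body itself
        keys = keys[1:]
    return reduce(lambda d, k: d[k], keys, response)

def render(template, variables):
    # Minimal "{{ var }}" substitution standing in for Jinja2.
    for name, value in variables.items():
        template = template.replace("{{ " + name + " }}", str(value))
    return template

body = {"id": 42, "user": {"info": {"name": "Alice"}}}
variables = {"user_id": extract(body, "response.id")}

print(render("/users/{{ user_id }}", variables))  # /users/42
print(extract(body, "user.info.name"))            # Alice
```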

Parallel execution

YATL runs tests in 10 threads by default (the value can be configured via --workers). This dramatically speeds up large test suites, for example regression runs.
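The scheduling itself can be pictured with the standard library alone. A rough sketch (not YATL's implementation) of spreading test files across 10 worker threads, with a stubbed-out runner in place of real YAML parsing and HTTP calls:

```python
from concurrent.futures import ThreadPoolExecutor

def run_test_file(path):
    # Stub: a real runner would parse the YAML file and execute its steps.
    return (path, "passed")

files = [f"tests/case_{i}.test.yaml" for i in range(20)]

# Run test files concurrently; max_workers mirrors a --workers-style option.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_test_file, files))

print(len(results))  # 20
```

`pool.map` keeps results in submission order, so a reporter can still print outcomes in the same order as the input files.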

Skipping tests and steps

In real-world development, you often need to temporarily disable a test without deleting it. YATL supports the skip mechanism:

name: test in development
skip: true  # test will be skipped

You can also skip a separate step:

steps:
  - name: active_step
    request: ...

  - name: skipped_step
    skip: true  # Only this step is skipped
    request: ...

Data validation

Beyond the status and exact body checks, expect supports a validate block with comparison rules such as greater-than, minimum length, regular expressions, and type/emptiness checks:

name: Extended Validation Example
steps:
  - name: user_data_test
    request:
      method: GET
      url: /api/users/123
    expect:
      status: 200 
      validate:
        - compare: { path: "user.age", gt: 18 }         
        - compare: { path: "user.name", min_length: 3 }     
        - compare: { path: "user.email", regex: ".+@.+\\..+" } 
        - compare: { path: "items", type: "array", not_empty: true } 
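For intuition, here is how such compare rules might be evaluated against a parsed JSON body. A simplified Python sketch, not YATL's internals:

```python
import re
from functools import reduce

def get_path(data, path):
    # "user.age" -> data["user"]["age"]: dot notation over nested dicts.
    return reduce(lambda d, key: d[key], path.split("."), data)

def check(data, rule):
    value = get_path(data, rule["path"])
    if "gt" in rule and not value > rule["gt"]:
        return False
    if "min_length" in rule and len(value) < rule["min_length"]:
        return False
    if "regex" in rule and not re.search(rule["regex"], value):
        return False
    if rule.get("type") == "array" and not isinstance(value, list):
        return False
    if rule.get("not_empty") and len(value) == 0:
        return False
    return True

response = {"user": {"age": 25, "name": "Bob", "email": "bob@example.com"},
            "items": [1, 2]}
rules = [
    {"path": "user.age", "gt": 18},
    {"path": "user.name", "min_length": 3},
    {"path": "user.email", "regex": r".+@.+\..+"},
    {"path": "items", "type": "array", "not_empty": True},
]
print(all(check(response, r) for r in rules))  # True
```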

CI/CD integration

Since the runner is a single console command, wiring it into GitHub Actions takes only a few lines:

name: API Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.14'
      - run: pip install yatl
      - run: yatl tests/ --workers 5

You can't use the library yet (it hasn't been published to PyPI), but you can already star the project on GitHub and bookmark it today: https://github.com/Khabib73/YATL

This is the best way to say "thank you" and helps us ship the release sooner. We plan to publish the first stable version in 3–4 weeks.

YATL offers a universal language for describing tests that everyone on the team can understand. An analyst can add a check for a new endpoint without pulling in a developer, a QA engineer can reuse variables and schemas, and a developer can quickly debug a failing test just by looking at the YAML.
