Jordan Sterchele
Stop Writing API Tests by Hand — Let Keploy Generate Them from Real Traffic

How Keploy uses eBPF to capture your API calls and auto-generate tests and mocks — without changing a line of code.


Writing API tests is the thing every developer knows they should do and nobody wants to do. The setup is tedious, the mocks go stale, and the tests break every time a schema changes. By the time you’ve written tests for one endpoint, three more have shipped without coverage.

Keploy takes a different approach: instead of writing tests, you run your app and let real traffic become your test suite. Here’s how it works and how to get it running in under ten minutes.


What Keploy Actually Does

Keploy sits at the network layer using eBPF — it intercepts API calls, database queries, and service-to-service traffic at the kernel level. No SDK to install. No code to modify. No language-specific instrumentation.

When you run your app under keploy record, every real API call becomes a test case. Every database query, external API call, and service dependency gets captured as a mock. When you run keploy test, Keploy replays those calls against your app in isolation — no live dependencies needed.

Real traffic → Keploy captures → Test cases + mocks generated
keploy test → Replays in isolation → Pass/fail report

This means your tests always reflect what your API actually does in production, not what you thought it did when you wrote the test six months ago.


Installation

# Install Keploy
curl --silent -O -L https://keploy.io/install.sh && source install.sh

# Verify
keploy --version

Keploy requires a Linux environment (eBPF runs at the kernel layer). On Mac or Windows, use Docker:

# Run Keploy in Docker (Mac/Windows)
docker run --rm -it \
  --privileged \
  -v $(pwd):/app \
  -w /app \
  ghcr.io/keploy/keploy:latest \
  keploy --version

Your First Recording — Node.js Example

Let’s say you have a Node.js Express API. Here’s the full flow from zero to generated tests.

Your app:

// app.js
const express = require('express');
const db = require('./db'); // your data layer (Postgres, Mongo, etc.)
const app = express();
app.use(express.json());

app.get('/users/:id', async (req, res) => {
  // In a real service this hits a database over the network
  const user = await db.users.findById(req.params.id);
  res.json(user);
});

app.post('/users', async (req, res) => {
  const user = await db.users.create(req.body);
  res.status(201).json(user);
});

app.listen(3000);
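The handlers above assume a `db` module. If you just want the endpoints responding locally, a minimal in-memory stand-in works. Note this is purely illustrative: Keploy intercepts network traffic, so an in-memory store produces no database mocks; in a real setup `db` would wrap an actual driver.

```javascript
// db.js: hypothetical in-memory data layer for local experiments
let nextId = 1;
const store = new Map();

const db = {
  users: {
    // Return a stored user or null, mirroring a typical ORM findById
    async findById(id) {
      return store.get(String(id)) || null;
    },
    // Assign an auto-incremented string id and persist the record
    async create(fields) {
      const user = { id: String(nextId++), ...fields };
      store.set(user.id, user);
      return user;
    },
  },
};

module.exports = db;
```

Swap this for your real database client before recording, since Keploy's mocks come from the wire traffic that client generates.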

Step 1 — Start recording:

keploy record -c "node app.js"

Your app starts normally. Keploy is now capturing all traffic in the background.

Step 2 — Make real API calls:

# Get a user
curl http://localhost:3000/users/123

# Create a user
curl -X POST http://localhost:3000/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Jordan", "email": "jordan@example.com"}'

Step 3 — Stop recording:

Press Ctrl+C. Keploy saves the captured traffic to a keploy/ directory in your project:

keploy/
  test-1.yaml    # GET /users/123 — request + expected response
  test-2.yaml    # POST /users — request + expected response
  mocks/
    mock-1.yaml  # Database call mock for test-1
    mock-2.yaml  # Database call mock for test-2

Each test file contains the full request, the expected response, and all mocked dependencies. No test code written.


Running the Tests

keploy test -c "node app.js"

Keploy starts your app, replays each recorded request, compares the actual response to the captured response, and reports pass/fail.

🐰 Keploy Test Run
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ test-1 (GET /users/123) — PASSED
✅ test-2 (POST /users) — PASSED

Tests: 2 passed, 0 failed
Coverage: 87% (statement), 91% (API schema)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The database is never called during test replay — Keploy uses the captured mocks instead. Your tests run without any live dependencies.


Handling Non-Deterministic Fields

The most common issue with replay-based testing is fields that change between runs: timestamps, UUIDs, auto-incremented IDs. Keploy handles these with noise filters.

# keploy/test-1.yaml
version: api.keploy.io/v1beta1
kind: Http
name: test-1
spec:
  request:
    method: GET
    url: /users/123
  response:
    status_code: 200
    body: |
      {"id": "123", "name": "Jordan", "createdAt": "2026-05-01T10:00:00Z"}
  assertions:
    noise:
      - body.createdAt    # Ignore this field during comparison
      - header.Date       # Ignore response date header

Add noise entries for any fields that legitimately change between runs. Everything else must match exactly.


CI/CD Integration — GitHub Actions

# .github/workflows/keploy-tests.yml
name: Keploy API Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install

      - name: Install Keploy
        run: curl --silent -O -L https://keploy.io/install.sh && source install.sh

      - name: Run Keploy tests
        run: keploy test -c "node app.js" --delay 5
        # --delay gives your app time to start before replaying traffic

Commit the keploy/ directory to your repo. Every PR now runs against the captured test suite. Any response that changes from what was recorded fails the check.


What Gets Mocked Automatically

Keploy captures and mocks all of these without configuration:

  • PostgreSQL, MySQL, MongoDB — database queries and responses
  • Redis — cache reads and writes
  • Kafka, RabbitMQ — message queue operations
  • External HTTP APIs — any outbound HTTP call your app makes
  • gRPC — service-to-service calls

If your app calls a payment API, a weather service, or an internal microservice during the recording, those calls are captured as mocks. Your tests run completely offline.
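As a concrete (hypothetical) example, a proxy route like this makes an outbound HTTP call: during keploy record the upstream response is captured, and during keploy test it is served from the mock, so the made-up `api.weather.example` host is never contacted in replay. The injectable `fetchImpl` parameter is only a testing convenience, not something Keploy requires.

```javascript
// weather.js: a route handler whose outbound call Keploy would record as a mock
function makeWeatherHandler(fetchImpl = fetch) {
  return async (req, res) => {
    // This request crosses the network layer, so Keploy captures the
    // response while recording and replays it during tests
    const upstream = await fetchImpl(
      `https://api.weather.example/v1/current?city=${encodeURIComponent(req.params.city)}`
    );
    const data = await upstream.json();
    res.json({ city: req.params.city, tempC: data.tempC });
  };
}

module.exports = { makeWeatherHandler };
```

Wire it up with `app.get('/weather/:city', makeWeatherHandler());` and the route behaves like any other recorded endpoint.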


Common Issues and Fixes

Issue: eBPF not working on your kernel

Keploy requires kernel 5.15+ for full eBPF support. Check your version:

uname -r
# Should be 5.15 or higher

If you’re on an older kernel or running on Mac/Windows, use the Docker approach above.

Issue: Tests pass locally but fail in CI

Usually caused by non-deterministic fields not added to noise. Run your tests locally twice — any field that changes between runs should be in the noise list.
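One way to find them, sketched here as a throwaway helper (not part of Keploy): capture the same response body twice and list the JSON paths that differ. Each path it returns is a candidate for the noise list.

```javascript
// noise-finder.js: list the JSON paths that differ between two response bodies
function changedPaths(a, b, prefix = '') {
  if (a === b) return [];
  const bothObjects =
    a && b && typeof a === 'object' && typeof b === 'object';
  // A leaf (or type mismatch) that differs is reported by its path
  if (!bothObjects) return [prefix || '(root)'];
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  const diffs = [];
  for (const key of keys) {
    const path = prefix ? `${prefix}.${key}` : key;
    diffs.push(...changedPaths(a[key], b[key], path));
  }
  return diffs;
}

module.exports = { changedPaths };
```

For example, two bodies from GET /users/123 that differ only in their timestamp yield `['createdAt']`, which maps to `body.createdAt` in the test YAML's noise list.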

Issue: App starts but Keploy captures nothing

Make sure your app is actually receiving traffic during the record session. Keploy only captures calls that hit the network layer — it won’t generate tests for code paths that aren’t exercised.

Issue: Database mocks not working

Keploy needs to intercept the database connection before your app connects. Make sure keploy record starts before your app tries to connect to the database.


The Production Readiness Checklist

Before relying on Keploy tests in CI:

  • [ ] Recorded test cases cover all critical API endpoints
  • [ ] Non-deterministic fields (timestamps, UUIDs) added to noise configuration
  • [ ] keploy/ directory committed to git alongside application code
  • [ ] GitHub Actions workflow configured to run keploy test on every PR
  • [ ] --delay flag set to give your app time to start before replay
  • [ ] Kernel version confirmed to be 5.15+ (or Docker fallback configured)

If you’re integrating Keploy and hitting issues — eBPF kernel compatibility, CI configuration, noise filtering for complex response shapes — drop a comment. I’ll answer.


Disclosure: This post was produced by AXIOM, an agentic developer advocacy workflow powered by Anthropic’s Claude, operated by Jordan Sterchele. Human-reviewed before publication.
