imnot is an open source stateful API mock server. This is the story of why I built it.
## The ticket that changes your afternoon
A support ticket arrives: "For this specific transaction, the integration fails with a null pointer exception."
The data that triggered the bug is in production. The exact combination of field values exists only in that one real record.
The right move is to reproduce it in your staging environment. But rebuilding that exact record manually in the external system's demo UI — matching every field value — can take hours. Sometimes it's practically impossible because the external system's demo environment doesn't support all the same configurations as production.
What you actually want: take the exact production payload from the support ticket, upload it to a mock that returns it verbatim, point your staging system there, and reproduce the failure in minutes. No manual reconstruction. No touching production.
That's one of the core use cases that motivated imnot. I work as a Lead Integration Solutions Engineer at a revenue management system (RMS) provider in the hospitality industry, where integrating with external partners — property management systems, booking platforms, channel managers — is daily work.
## The NiFi workaround and its ceiling
I've been using Apache NiFi for integration workflows for about six years. NiFi is a data flow orchestration tool — not designed for mocking, but flexible enough that you can build almost anything with it.
Over time I built a collection of NiFi flows that simulated external system behavior. The pattern: upload a payload via HTTP, configure your application to point at the NiFi URL, and the flow responds exactly like the real external system would — including the full async sequences that some systems require. We used it to test integration changes without needing a live external environment, and to reproduce production bugs from support tickets without touching real data.
The reason we used NiFi for this — rather than Postman mock servers or Mockoon — wasn't because NiFi is better at mocking. It was simply already there. We were using it for integration workflows, so when the need for mock endpoints arose, it was the natural tool to reach for.
But it had a hard ceiling.
Every new mock required specialist knowledge of NiFi. Building one took meaningful time, and when speed was the priority, quality suffered. The team has grown and more people now work with NiFi, but the underlying problem remains: the mock configuration lives inside NiFi flows, so it isn't version-controlled alongside the integration code it tests, and it isn't accessible to anyone outside that specialist circle.
When AI coding tools became widely available across our team, something clicked. People who weren't developers were suddenly building things — generating configs, automating tasks that previously required specialist knowledge. I thought: what if anyone could describe an external API and have a working mock in minutes, without knowing NiFi, without depending on a specialist?
That was the seed of imnot.
## What makes it different: stateful flows in YAML
Most mock servers handle the stateless case well — define a response for a given endpoint, return it every time. That covers a lot of ground, but it doesn't cover the patterns that appear constantly in B2B integrations.
Consider a common async flow: your system POSTs a request to an external API, receives a 202 Accepted with a location reference, polls that location until the external system reports completion, then fetches the result. Three steps, each dependent on the previous one. The identifier generated in step one appears in the path of steps two and three. Call them out of order, and the real API rejects you.
WireMock and Mockoon are excellent tools, but modeling this sequence declaratively — without writing code — isn't what they're built for. imnot is built specifically for this:
```yaml
- name: data-sync
  pattern: async
  endpoints:
    - step: 1
      method: POST
      path: /external/jobs
      response:
        status: 202
        generates_id: true
        id_header: Location
        id_header_value: /external/jobs/{id}
    - step: 2
      method: HEAD
      path: /external/jobs/{id}
      response:
        status: 201
        headers:
          Status: COMPLETED
    - step: 3
      method: GET
      path: /external/jobs/{id}
      response:
        status: 200
        returns_payload: true
```
`imnot start` reads that YAML and registers the endpoints dynamically. Point your staging system there. The mock handles the sequence, the state, and the ID propagation automatically.
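On the consumer side, driving that sequence is an ordinary submit/poll/fetch loop. Here's a minimal Python sketch of the client logic; the `fake_*` stubs stand in for real HTTP calls against the mock, and none of these helper names come from imnot itself:

```python
import itertools

def run_async_flow(post, head, get):
    """Drive the submit/poll/fetch pattern from the YAML above."""
    # Step 1: POST the job; the 202 response carries the job URL in Location.
    status, headers = post("/external/jobs")
    assert status == 202
    job_url = headers["Location"]

    # Step 2: HEAD the job URL until the Status header reports completion.
    for _ in range(10):  # cap polling attempts so a broken mock can't hang us
        _, headers = head(job_url)
        if headers.get("Status") == "COMPLETED":
            break

    # Step 3: GET the finished result from the same URL.
    return get(job_url)

# Stub transport functions standing in for real HTTP calls.
_polls = itertools.count()
fake_post = lambda path: (202, {"Location": "/external/jobs/abc123"})
fake_head = lambda path: (201, {"Status": "COMPLETED" if next(_polls) >= 2 else "PENDING"})
fake_get = lambda path: (200, {"result": "ok"})

print(run_async_flow(fake_post, fake_head, fake_get))  # (200, {'result': 'ok'})
```

Against a running mock, the stubs would be replaced with real HTTP calls (e.g. via `requests`) pointed at the imnot URL; the shape of the loop stays the same.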
For the support ticket scenario: upload the exact production payload via a single API call, point your staging system at imnot, and the integration processes it exactly as it would in production — in a safe, controlled environment.
## AI-ready by design
The YAML schema is intentionally simple enough that Claude, ChatGPT, or Copilot can generate a valid partner definition from a plain description or an OpenAPI spec. The README ships with ready-to-use prompts for both cases.
On my team, people who've never written YAML are already using imnot: describe what the external API does, paste the output into `imnot generate`, and have a working mock running. No NiFi knowledge required.
This felt like the right design decision — and it also felt honest, because imnot itself was built with Claude Code as the primary coding tool. Using AI to build a tool designed to work well with AI seemed appropriately coherent.
## Running in production — local and cloud
imnot runs anywhere Docker runs. For local development, three commands are all you need:
```shell
pipx install imnot
imnot init    # scaffolds partners/ with working examples
imnot start
```
Once the server is running, `imnot routes` lists all registered endpoints without restarting.
For teams who want a shared instance, it deploys as a container on any cloud platform. In our case it runs in the same EKS cluster as our NiFi deployment, with its own Helm chart. Every member of the integrations team can upload payloads, reproduce bugs, and run tests against it — no local setup required, no NiFi knowledge needed.
The only infrastructure requirements: a persistent volume at `/app/data` for the SQLite session store, an `IMNOT_ADMIN_KEY` environment variable to protect the admin endpoints, and `--host 0.0.0.0` so the container port is reachable from outside.
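Those three requirements translate into a compose file along these lines — a sketch, not the project's published configuration: the image reference and port are placeholders (Uvicorn's default port is 8000):

```yaml
services:
  imnot:
    image: imnot:latest                    # placeholder image reference
    command: ["imnot", "start", "--host", "0.0.0.0"]
    ports:
      - "8000:8000"                        # assumed default Uvicorn port
    environment:
      IMNOT_ADMIN_KEY: ${IMNOT_ADMIN_KEY}  # protects the admin endpoints
    volumes:
      - imnot-data:/app/data               # persistent SQLite session store
volumes:
  imnot-data:
```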
```shell
docker compose up
```
## Built with
- FastAPI — HTTP server and dynamic route registration
- SQLite — session and payload persistence, zero infrastructure
- PyYAML — partner definition parsing
- Click — CLI (`imnot init`, `imnot start`, `imnot routes`, `imnot generate`)
- Uvicorn — ASGI server
## Try it
```shell
pipx install imnot
imnot init
imnot start
```
The repo includes two example partner definitions — StayLink and BookingCo — demonstrating the main patterns. `partners/README.md` has the full YAML schema reference.
Once your partners are defined, `imnot export postman` generates a Postman collection v2.1 covering all consumer and admin endpoints — useful for manual testing and sharing with QA without having to document endpoints by hand.
If you work on integrations and recognize any of this — the missing staging environments, the production payload debugging, the specialist everyone depends on to build the mocks — imnot was built for that situation.