A local requestbin in 200 lines of PHP — why self-hosting beats webhook.site for production debugging
webhook.site and requestbin.com are great for hello-world webhook demos. They are not great for debugging outgoing webhook integrations on a service that handles customer data. webhook-inspector is a ~200-line PHP Slim 4 app that gives you the same "paste a URL, watch requests land" workflow, running locally, with no rate limit, no TTL, and no third party ever seeing your traffic.
📦 GitHub: https://github.com/sen-ltd/webhook-inspector
I hit the same problem three times in a month and finally got annoyed enough to build a tool for it. Each time: a service somewhere — GitHub Actions runner, a Shopify app, an internal billing cron — was supposed to be POSTing a webhook to another one of our services, and one of three things was happening.
- Nothing was arriving at all.
- Something was arriving but the signature verification was failing.
- The payload shape was subtly wrong in a way that only the receiving code knew about.
In every case the fastest way to move forward is to take the receiver out of the picture and point the sender at a request inspector — a URL that captures every request sent to it and lets you look at the exact bytes that came in. Paste that URL into the webhook producer's settings, push the button that fires the event, and read what it actually sent.
The hosted services that solve this (webhook.site, requestbin.com, beeceptor) are fine for exactly one category of use: you don't care who sees the traffic. For anything touching real customers, real API keys, real signing secrets, you don't want that data routed through somebody else's logging pipeline. And even when privacy isn't a concern, the free tier tends to rate-limit, expire URLs after 24–48 h, or silently drop requests past some per-bin cap.
So I built a local one. This article is the field report. The design is deliberately tiny — one Slim 4 app, one in-memory repository, ~200 lines of actual code — so the whole thing fits in your head in one sitting.
The problem webhook.site actually solves (and its honest tradeoffs)
It's worth being specific about what a request inspector actually does, because once you name the mechanism the implementation writes itself.
A request inspector is three things at once:
- An HTTP sink. Accept any method, any path under a known prefix, any headers, any body, and return 200. Never reject.
- A structured store. Record each incoming request as {method, path, headers, body, query, client_ip, received_at, bytes} so you can inspect it later.
- A listing UI. Show the stored requests, newest first, with a way to clear them.
That's genuinely it. Everything else — account systems, retention policies, email forwarding, live WebSocket streams, Slack integrations — is value-add on top of those three things.
The hosted versions have to solve problems that a self-hosted version does not:
- Multi-tenancy. They need to prevent you from seeing somebody else's bin. I don't: localhost has exactly one tenant.
- DDoS. They need to rate-limit and cap payload sizes aggressively. I don't: if I overload my own laptop, it's my own problem.
- Durability. They need to keep your bin alive across their own deploys. I don't: if I restart the container, I want it to lose everything. I'm debugging a specific replay right now.
- A domain name. They need a stable, shareable URL. I don't: my webhook producer is either also running locally (and can talk to localhost:8000) or is hitting an ngrok tunnel that happens to terminate at my local inspector.
Every one of those "don't need" items is a design simplification I can take advantage of.
The Slim 4 wildcard-method route
The core of the app is a single route that accepts any common method on /bin/{slug} and stores it. Slim 4 doesn't have a "match any method" sugar, but it has map(), which takes a method list — and that's actually better, because you get an explicit route map in your logs instead of a mystery catch-all.
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
$capture = function (
    ServerRequestInterface $request,
    ResponseInterface $response,
    array $args,
) use ($json, $maxBodyBytes, $validSlug) {
    $slug = (string) $args['slug'];
    if (!$validSlug($slug)) {
        return $json($response, ['error' => 'bad_slug'], 400);
    }
    $body = (string) $request->getBody();
    if (strlen($body) > $maxBodyBytes) {
        return $json($response, ['error' => 'too_large', 'limit_bytes' => $maxBodyBytes], 413);
    }
    $snap = RequestCapture::snapshot($request, $slug);
    $id = webhook_inspector_repo()->append($slug, $snap);
    return $json($response, ['received' => true, 'slug' => $slug, 'id' => $id]);
};

$app->map(
    ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'],
    '/bin/{slug}',
    $capture,
);
Two things worth calling out.
First, the slug regex ([A-Za-z0-9_-]{1,64}) is applied in the handler, not in a Slim route constraint. I tried the route-constraint form first and it felt heavier for what it buys. Keeping the check in-handler means I also return a structured JSON error ({"error":"bad_slug"}) instead of a 404 with Slim's default HTML body, which is much friendlier when you're debugging with curl and jq.
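For concreteness, here is a minimal sketch of what those two helpers might look like. The names ($validSlug, $json) match the route snippet above, but the bodies are my assumptions, not the repo's exact code:

```php
<?php

use Psr\Http\Message\ResponseInterface;

// Hypothetical sketch: slug validator matching the regex described above.
$validSlug = function (string $slug): bool {
    return preg_match('/^[A-Za-z0-9_-]{1,64}$/', $slug) === 1;
};

// Hypothetical sketch: write a JSON payload into a PSR-7 response.
$json = function (ResponseInterface $response, array $payload, int $status = 200): ResponseInterface {
    $response->getBody()->write(json_encode($payload, JSON_UNESCAPED_SLASHES));
    return $response
        ->withHeader('Content-Type', 'application/json')
        ->withStatus($status);
};
```

Keeping both as closures (rather than classes) matches the "fits in one file" character of the app.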
Second, I deliberately list GET POST PUT PATCH DELETE instead of doing array_fill with every HTTP verb. This route exists to catch webhooks, and webhooks use those five. Supporting TRACE or CONNECT would add complexity without solving any real user's problem. YAGNI.
The circular buffer
Once a request arrives, I need to store it somewhere. The natural data structure is a per-slug FIFO queue with a hard cap: keep the most recent N requests, drop the oldest when a new one arrives, never let any one bin grow without bound. This is a textbook circular buffer, and in PHP it's satisfying to write:
public function append(string $slug, array $captured): string
{
    $id = $this->nextId();
    $captured['id'] = $id;
    if (!isset($this->bins[$slug])) {
        $this->bins[$slug] = [];
    }
    $this->bins[$slug][] = $captured;
    // Ring buffer: drop the oldest once we exceed the cap.
    while (count($this->bins[$slug]) > $this->maxPerBin) {
        array_shift($this->bins[$slug]);
    }
    return $id;
}
Yes, array_shift in a loop is O(n) per overflow. For maxPerBin = 100 and webhook rates measured in single-digits-per-second, that is irrelevant. I resisted the temptation to reach for SplQueue because the foreach-friendly plain array is worth more to me during debugging than the constant factor. The whole point of this tool is that I'm going to read through every captured request by eye.
Per-slug isolation is just a hash map keyed by slug. Clearing a bin is unset($this->bins[$slug]). Listing is array_reverse() plus an optional array_slice() for ?limit=. That's the entire repository.
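Put together, the listing and clearing half of the repository is small enough to sketch inline. The method names here are mine, and append() is simplified (no IDs, no cap), so treat this as an illustration of the shape rather than the repo's exact code:

```php
<?php

// Sketch of the per-slug store described above (names are assumptions).
final class BinRepositorySketch
{
    /** @var array<string, list<array>> */
    private array $bins = [];

    // Simplified append: no ID generation, no ring-buffer cap.
    public function append(string $slug, array $captured): void
    {
        $this->bins[$slug][] = $captured;
    }

    /** Newest first, with an optional cap for ?limit=. */
    public function list(string $slug, ?int $limit = null): array
    {
        $requests = array_reverse($this->bins[$slug] ?? []);
        return $limit !== null ? array_slice($requests, 0, $limit) : $requests;
    }

    public function clear(string $slug): void
    {
        unset($this->bins[$slug]);
    }
}
```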
One catch: PHP state doesn't survive the request
There is a subtle gotcha that bit me during my first smoke test. PHP's execution model is shared-nothing: every request re-runs the script from a clean slate, so even under the built-in dev server (php -S) a static-variable "singleton" inside a function doesn't persist across requests (and with PHP_CLI_SERVER_WORKERS set, requests may land in separate worker processes entirely). I POSTed, got a 200, listed the bin, got an empty array. Very funny.
The fix is boring but important: back the repository with a JSON file under flock so different request-handling processes share state. On Docker container start I rm -f that file, so every docker run still behaves like a pure in-memory buffer from the outside:
CMD ["sh", "-c", "rm -f /tmp/webhook-inspector-state.json \
    && exec php -S 0.0.0.0:8000 -t public public/index.php"]
In tests I pass null for the persist path, so BinRepository stays pure in-memory and the whole PHPUnit suite runs in 50 ms with zero filesystem touching.
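The shape of that file-backed fallback is roughly this. It's a sketch under the assumption of one JSON snapshot of all bins per write; the real repo's wire format and function names may differ:

```php
<?php

// Sketch: persist the whole bin map as one JSON snapshot under flock,
// so concurrent request handlers never see a half-written file.
function saveState(string $path, array $bins): void
{
    $fh = fopen($path, 'c');        // create if missing, don't truncate yet
    flock($fh, LOCK_EX);            // exclusive lock for the write
    ftruncate($fh, 0);
    fwrite($fh, json_encode($bins));
    fflush($fh);
    flock($fh, LOCK_UN);
    fclose($fh);
}

function loadState(string $path): array
{
    if (!is_file($path)) {
        return [];                  // fresh start: behaves like pure in-memory
    }
    $fh = fopen($path, 'r');
    flock($fh, LOCK_SH);            // shared lock: readers don't block each other
    $raw = stream_get_contents($fh);
    flock($fh, LOCK_UN);
    fclose($fh);
    return json_decode($raw, true) ?: [];
}
```

Reading the whole file on every request is wasteful in general, but for a single-developer debugging tool with bins capped at 100 entries it's well inside the noise floor.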
Binary bodies
Webhook payloads are usually JSON, which is nice. But "usually" is not "always" — a surprising fraction of real webhooks are application/x-www-form-urlencoded, multipart/form-data with a binary attachment, or occasionally a gzipped protobuf. I want the inspector to display text bodies as text (so I can actually read them) but safely round-trip binary bodies through JSON too.
The rule is: if the bytes are valid UTF-8 and don't contain embedded NULs or other stray control characters, store as text. Otherwise, base64.
public static function encodeBody(string $body): array
{
    if ($body === '') {
        return ['body' => '', 'encoding' => 'utf-8'];
    }
    if (self::isUtf8Text($body)) {
        return ['body' => $body, 'encoding' => 'utf-8'];
    }
    return ['body' => base64_encode($body), 'encoding' => 'base64'];
}

public static function isUtf8Text(string $s): bool
{
    if (!mb_check_encoding($s, 'UTF-8')) {
        return false;
    }
    // Allow tab/LF/CR + printable ASCII + any non-ASCII codepoint.
    // Anything else (raw \x00–\x08, \x0B–\x0C, \x0E–\x1F) marks it as binary.
    $stripped = preg_replace('/[\x09\x0A\x0D\x20-\x7E]|[^\x00-\x7F]/u', '', $s);
    return $stripped === '';
}
The second function had an embarrassing first draft. I wrote [\x{00A0}-\x{FFFF}] as the "non-ASCII" range and it worked for Japanese text (こんにちは is U+3053…) but the testEncodeBodyKeepsNonAsciiUtf8 test failed on the waving-hand emoji 👋 — because it's U+1F44B, well above U+FFFF. The test caught it before I shipped, which is the entire reason you write the test first with a representative sample.
The final rule — "anything non-ASCII is allowed, anything ASCII-non-printable is binary" — is simpler than my first attempt and strictly more correct. That happens more often than you'd think.
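The consumer side of the rule is a single branch. decodeBody is my name for the inverse, not necessarily what the repo calls it:

```php
<?php

// Inverse of the encodeBody rule above: base64-decode when flagged,
// pass text through untouched. (decodeBody is a hypothetical name.)
function decodeBody(array $stored): string
{
    return $stored['encoding'] === 'base64'
        ? base64_decode($stored['body'])
        : $stored['body'];
}
```

Because the encoding flag travels with the body, a consumer never has to guess whether a stored string is literal text or base64, which is exactly what makes the round-trip safe.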
Tradeoffs I'm living with
Calling these tradeoffs is more honest than calling them "limitations":
- No persistence. Restart = clean slate. This is a feature for a debugging tool; it would be a bug in a production logger. If you need durability, this is not the tool.
- No authentication. Bins are namespaced by slug, but any client that knows the slug can read the bin. Fine for localhost. If you expose the port publicly, you must put an authenticating reverse proxy in front of it.
- No WebSocket live stream. The dashboard polls /bin/:slug/requests every 2 seconds. A WebSocket would be prettier but would roughly triple the server code. 2 seconds is indistinguishable from "live" when you're reading payloads by eye.
- No per-bin retention policy beyond the ring buffer. MAX_REQUESTS_PER_BIN caps memory usage. If you need time-based expiry, this is not the tool.
- Memory-bound. With defaults (100 per bin, 5 MB max body), worst-case memory per active bin is ~500 MB. Tune MAX_BODY_MB down if you have lots of bins.
- No multi-user isolation. There is exactly one tenant: you.
Every one of these is something I could add without pulling in a single new dependency. The reason none of them are in v1 is that they would make the tool less usable for the actual job I built it for.
Try it in 30 seconds
git clone https://github.com/sen-ltd/webhook-inspector
cd webhook-inspector
docker build -t webhook-inspector .
docker run --rm -p 8000:8000 webhook-inspector
In another terminal:
# Fire a webhook at it
curl -X POST http://localhost:8000/bin/test \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: push" \
  -d '{"event":"push"}'
# {"received":true,"slug":"test","id":"r_..."}
# Read what was captured
curl -s http://localhost:8000/bin/test/requests | jq
# Clear it
curl -X DELETE http://localhost:8000/bin/test/requests
Or point your browser at http://localhost:8000/ for the live-polling dashboard. Pick any slug, click "Open", and watch requests land as your webhook producer fires them.
Tests
The repo ships with 35 PHPUnit tests covering the three modules — the repository (ring buffer bounds, per-slug isolation, find-by-id, stats), the capture (UTF-8 vs. binary, X-Forwarded-For handling, header flattening), and every HTTP route end-to-end. The suite runs in 50 ms because every test uses the pure in-memory mode of BinRepository — no temp files, no Docker, no network. That matters: I wanted this whole thing to feel light enough that tests don't talk me out of rerunning them on every save.
The multi-stage Alpine Dockerfile is the same pattern I use for every other small PHP service: builder stage runs Composer, strips tests/ and docs/ out of vendor/, then the runtime stage is plain alpine:3.19 with PHP copied across. Final image: 52 MB, runs as a non-root user.
What I'd add for v2
A retention-by-time option (?maxAgeMinutes=60) would cover the one use case the ring buffer doesn't: "show me everything from the last hour, regardless of volume". A signed-request verifier (webhook-inspector verify --secret=... --scheme=stripe) would close the loop between "see the exact bytes" and "prove the signature is right". Both are small additions that would keep the tool's character intact. Neither is in v1, because v1 is for the job I actually have today.
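As a taste of what that verifier would check, for the Stripe scheme: the signed string is "{timestamp}.{raw_body}", HMAC-SHA256 with the endpoint secret, compared against the v1= values in the Stripe-Signature header. A hedged sketch of the core check (not v2 code, and it omits the timestamp-tolerance check a real verifier also needs):

```php
<?php

// Sketch: verify a Stripe-style "t=...,v1=..." signature header against
// the raw request body. Function name is mine, not a planned API.
function verifyStripeStyleSignature(string $body, string $header, string $secret): bool
{
    $timestamp = null;
    $candidates = [];
    foreach (explode(',', $header) as $pair) {
        if (!str_contains($pair, '=')) {
            continue;
        }
        [$key, $value] = explode('=', trim($pair), 2);
        if ($key === 't') {
            $timestamp = $value;
        } elseif ($key === 'v1') {
            $candidates[] = $value;   // Stripe may send several v1 entries
        }
    }
    if ($timestamp === null || $candidates === []) {
        return false;
    }
    $expected = hash_hmac('sha256', $timestamp . '.' . $body, $secret);
    foreach ($candidates as $candidate) {
        if (hash_equals($expected, $candidate)) {  // constant-time compare
            return true;
        }
    }
    return false;
}
```

The reason this pairs so well with an inspector is that the inspector already stores the exact raw bytes, and signature bugs are almost always "you signed a re-serialized body, not the raw one".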
The whole idea is that you can read the source of a tool like this in one sitting. I'd rather have 200 lines of code that exactly solve my current problem than 2000 lines that also solve three problems I might have later.