A while back at work, I was debugging an issue across a few services.
I had the request_id, but the logs were scattered across different files.
Not every environment I work in has full tracing or centralized logging set up (especially local and staging), so I often rely on raw logs.
The usual workflow looked something like this:
- open multiple terminals
- run `grep` on each service
- copy the same ID again and again
- try to mentally piece together what happened
It worked… but it was slow and honestly pretty frustrating.
I kept thinking — there has to be a simpler way to do this.
Logs are usually:
- spread across multiple services or files
- in different formats (plain text, JSON, sometimes pretty logs)
- hard to follow in order
Even when you do find the right logs, understanding the full request flow takes time. During live debugging, it gets worse — you’re trying to figure things out quickly while jumping between files and commands.
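To make the format mismatch concrete, here's a minimal Python sketch (not reqlog's actual code) of normalizing a plain-text line and a single-line JSON line into one record. The JSON field names (`ts`, `service`, `msg`) are assumptions for illustration; real services each pick their own.

```python
import json
import re

def parse_line(line):
    """Normalize one log line into (timestamp, service, message), or None."""
    line = line.strip()
    if line.startswith("{"):  # single-line JSON log
        try:
            obj = json.loads(line)
            return obj["ts"], obj["service"], obj.get("msg", "")
        except (json.JSONDecodeError, KeyError):
            return None
    # plain text, assumed shape: "<timestamp> <service> <message>"
    m = re.match(r"(\S+)\s+(\S+)\s+(.*)", line)
    return m.groups() if m else None

print(parse_line('2026-03-20T14:10:01Z order-service fetching order'))
# ('2026-03-20T14:10:01Z', 'order-service', 'fetching order')
print(parse_line('{"ts": "2026-03-20T14:10:02Z", "service": "inventory", "msg": "checking stock"}'))
# ('2026-03-20T14:10:02Z', 'inventory', 'checking stock')
```

Once every line is the same shape, sorting and filtering stop caring which service produced it.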
What I Wanted
A simpler workflow that lets me:
- search by a key (like `request_id`, `trace_id`, etc.)
- scan multiple log files at once
- see everything in a clean, chronological flow
Something lighter than full observability tools, but more structured than chaining grep commands.
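That wish list is small enough to sketch in a few lines of Python. This is a simplification, not how reqlog is implemented: it assumes `.log` files whose lines begin with an ISO-8601 timestamp, which conveniently sorts correctly as plain strings.

```python
import pathlib

def build_timeline(log_dir, needle):
    """Collect lines containing `needle` from every *.log file under
    log_dir, then sort them into one chronological view."""
    hits = []
    for path in pathlib.Path(log_dir).rglob("*.log"):
        for line in path.read_text().splitlines():
            if needle in line:
                hits.append((line, path.stem))  # tag with file name as the "service"
    hits.sort()  # the timestamp prefix puts cross-service lines in order
    return [f"{line}  ({service})" for line, service in hits]
```

Real logs break the happy-path assumptions quickly (rotated files, mixed formats, missing timestamps), which is where a dedicated tool earns its keep.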
Introducing reqlog
So I built a small CLI tool called reqlog.
Basic usage:
```
reqlog --dir ./logs --key request_id abc123
```
It searches across log files and prints a timeline like this:
```
2026-03-20T14:10:00Z [api-gateway] | request_id=abc123 start request
2026-03-20T14:10:01Z [order-service] | request_id=abc123 fetching order
2026-03-20T14:10:02Z [inventory] | request_id=abc123 checking stock
```
Instead of jumping between files, you can just follow the request from start to finish.
Why Not Just Use grep?
grep is great — I still use it all the time.
But when you’re debugging across multiple services, it starts to fall apart:
- recursive search can touch all files, but you still have to mentally stitch together logs from different services
- no automatic sense of timeline across services
- structured logs (like JSON) are awkward to read and filter
- lots of copy/paste to track the same request
reqlog sits in between: more structured than `grep`, but much simpler than full observability tooling.
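The timeline piece is the part grep doesn't give you. If each service's file is already in timestamp order, the cross-service stitching is essentially a k-way merge; here's a hedged sketch (the service names and lines are made up, and real logs are rarely this tidy):

```python
import heapq

# Hypothetical per-service logs, each already sorted by timestamp.
logs = {
    "api-gateway": [
        "2026-03-20T14:10:00Z request_id=abc123 start request",
        "2026-03-20T14:10:03Z request_id=abc123 respond 200",
    ],
    "order-service": ["2026-03-20T14:10:01Z request_id=abc123 fetching order"],
    "inventory": ["2026-03-20T14:10:02Z request_id=abc123 checking stock"],
}

# ISO-8601 timestamps compare correctly as strings, so merging on the
# raw lines interleaves all services chronologically in a single pass.
streams = [[(line, name) for line in lines] for name, lines in logs.items()]
timeline = []
for line, name in heapq.merge(*streams):
    ts, rest = line.split(" ", 1)
    timeline.append(f"{ts} [{name}] {rest}")

print("\n".join(timeline))
```

Collect-and-sort works too; a streaming merge just keeps memory flat when the files are huge.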
Features (v1)
Nothing fancy — just what I needed:
- works with plain text and JSON logs (single-line)
- key-based search (`--key request_id abc123`)
- scans logs across a directory by default (recursive)
- optionally filter specific services (`--service`)
- time filtering (`--since`)
- live tailing (`--follow`)
Performance
I tested it on multi-million line logs (~9M+ lines across services).
Not as fast as grep (which is highly optimized and only does string matching), but still quick enough for real debugging workflows.
Most searches finish in a couple of seconds.
The extra work (JSON parsing, sorting into a timeline, etc.) adds some overhead — but that’s also what makes the output useful.
Tradeoffs
For v1, I kept things simple:
- only supports single-line logs
- no handling for multi-line / pretty logs
- processes logs sequentially
There’s plenty of room to improve in future versions.
What’s Next
Some things I’m considering:
- Docker / Kubernetes log support
- performance improvements (parallel scanning)
- support for more structured log formats
The main goal is to make debugging across services easier, even without tools like Jaeger or centralized logging systems.
Feedback
If you debug logs across multiple services, I’d love to hear:
- would something like this be useful for you?
- what would you want it to do (multi-line logs, k8s/docker support, etc.)?
If this looks interesting, check it out here: reqlog on GitHub