Hello, I'm Maneshwar. I'm currently building **FreeDevTools** online, **one place for all dev tools, cheat codes, and TLDRs**: a free, open-source hub where developers can quickly find and use tools without the hassle of searching all over the internet.
So I was working on a tool I built to generate interactive API docs by consuming a Git repository. Every generation gets tagged with a `job_id`, and everything goes well—until it doesn't.

Sometimes, the server would crash. Not your classic `panic("oops")`. I'm talking about real crashes: slice index out of range, nil pointer dereference—stuff that doesn't announce itself politely.
## What I Needed
Something like:
- Catch panics
- Log the job ID that was last printed to stdout
- Capture the stack trace
- Save everything to a file for post-mortem
## ⚡️ The Hack
I ended up redirecting `os.Stdout` into a buffer, searching for the last printed job ID using a regex, and then dumping all of that to a log file.

Here's how I did it:
### ✨ `errorcodes/panic_logger.go`
```go
package errorcodes

import (
	"bufio"
	"bytes"
	"fmt"
	"os"
	"sync"

	"github.com/rs/zerolog/log"
)

var (
	outputBuffer bytes.Buffer
	bufferLock   sync.Mutex
)

func init() {
	pipeReader, pipeWriter, err := os.Pipe()
	if err != nil {
		log.Fatal().Err(err).Msg("Failed to create pipe")
	}

	// Every write to os.Stdout now lands in the pipe.
	os.Stdout = pipeWriter

	// Drain the pipe in the background: keep a copy in memory and
	// echo each line to stderr so terminal output stays visible.
	go func() {
		scanner := bufio.NewScanner(pipeReader)
		for scanner.Scan() {
			line := scanner.Text()
			bufferLock.Lock()
			outputBuffer.WriteString(line + "\n")
			bufferLock.Unlock()
			fmt.Fprintln(os.Stderr, line) // still print to terminal
		}
	}()
}
```
This intercepts every `fmt.Println()` you do.
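For example, a handler that logs with plain `fmt.Println` now feeds both the terminal and the in-memory buffer (the line below is just an illustration):

```go
// Any plain stdout write in the codebase now flows through the pipe:
fmt.Println(`UI LOG: {"job_id": "12557", "status": "processing"}`)
// -> the background goroutine appends the line to outputBuffer
// -> and echoes it to os.Stderr, so terminal output looks unchanged
```

Note that this only catches writers that go through the `os.Stdout` variable (like `fmt.Println`); anything writing straight to file descriptor 1 bypasses the swap.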
## 🔍 Extract Last Job ID
```go
// FindLastJobID scans the captured stdout buffer from newest to oldest
// and returns the most recently printed job ID (empty string if none).
func FindLastJobID() string {
	bufferLock.Lock()
	defer bufferLock.Unlock()

	lines := strings.Split(outputBuffer.String(), "\n")
	jobRegex := regexp.MustCompile(`["|'](?:job_id|jobId)["|']\s*:\s*["|'](\d+)["|']`)

	for i := len(lines) - 1; i >= 0; i-- {
		match := jobRegex.FindStringSubmatch(lines[i])
		if len(match) > 1 {
			return match[1]
		}
	}
	return ""
}
```
This regex will match both `job_id` and `jobId` in logs like:

```
INFO: {"job_id": "12345", "status": "preparing"}
```
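A quick way to sanity-check it (a throwaway snippet, assuming the buffer already holds the line above):

```go
// With the captured buffer containing: INFO: {"job_id": "12345", "status": "preparing"}
if id := FindLastJobID(); id != "" {
	fmt.Fprintln(os.Stderr, "last seen job:", id) // prints: last seen job: 12345
}
```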
## 💣 Logging the Panic
```go
// LogPanic writes the recovered panic value, the last-seen job ID, and a
// stack trace to logFile, and mirrors it to the structured logger.
func LogPanic(logFile *os.File, r interface{}) {
	// Give the stdout-draining goroutine a moment to flush recent lines
	// into the buffer before we search it.
	time.Sleep(100 * time.Millisecond)

	jobID := FindLastJobID()
	timestamp := time.Now().Format("2006/01/02 15:04:05")

	// Also capture the full log line that mentioned this job ID, for context.
	var lastJobLine string
	if jobID != "" {
		bufferLock.Lock()
		lines := strings.Split(outputBuffer.String(), "\n")
		jobLineRegex := regexp.MustCompile(fmt.Sprintf(`.*job[_]?[iI]d["']?\s*:\s*["']?%s["']?.*`, jobID))
		for i := len(lines) - 1; i >= 0; i-- {
			if jobLineRegex.MatchString(lines[i]) {
				lastJobLine = lines[i]
				break
			}
		}
		bufferLock.Unlock()
	}

	fmt.Fprintf(logFile, "\n[%s] PANIC (JobID: %s): %v\n", timestamp, jobID, r)
	if lastJobLine != "" {
		fmt.Fprintf(logFile, "Last job context: %s\n", lastJobLine)
	}
	fmt.Fprintf(logFile, "Stack trace:\n%s\n", debug.Stack())
	fmt.Fprintf(logFile, "----------------------------------------\n")
	logFile.Sync()

	log.Error().
		Str("jobID", jobID).
		Interface("panic", r).
		Str("stack", string(debug.Stack())).
		Msg("Recovered from panic")
}
```
## 🔐 Middleware Time
```go
func RecoverMiddleware() echo.MiddlewareFunc {
	logFile, err := os.OpenFile("logs/panic.logs", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
	if err != nil {
		log.Fatal().Err(err).Msg("failed to open panic log file (does logs/ exist?)")
	}

	return func(next echo.HandlerFunc) echo.HandlerFunc {
		return func(c echo.Context) error {
			defer func() {
				if r := recover(); r != nil {
					errorcodes.LogPanic(logFile, r)
					c.Error(fmt.Errorf("%v", r))
				}
			}()
			return next(c)
		}
	}
}
```
## 🧪 Testing It
Here's a route that crashes intentionally:
```go
func main() {
	e := echo.New()
	e.Use(RecoverMiddleware())

	e.GET("/health", func(c echo.Context) error {
		log.Warn().Msg("Job ID is missing in the request")
		fmt.Println("UI LOG: [JobID: 1234]: Preparing files and data for the subproject 0")

		var a []string
		_ = a[5] // This will cause a runtime panic (index out of range)
		return nil
	})

	log.Info().Msg("Server listening on :8082")
	if err := e.Start(":8082"); err != nil {
		log.Fatal().Err(err).Msg("server failed")
	}
}
```
Check `logs/panic.logs` and you'll see:
```
[2025/04/07 23:30:38] PANIC (JobID: 12557): runtime error: index out of range [5] with length 0
Last job context: 11:20PM DBG Retrieved manifest from Redis manifest={"projects":[{"sub_project":"Api Documentation", "repository_name":"Flowise"} ...trimmed]
Stack trace:
goroutine 3331 [running]:
runtime/debug.Stack()
	/usr/local/go/src/runtime/debug/stack.go:26 +0x6b
...
...
...
	/usr/local/go/src/net/http/server.go:2102 +0x1b75
created by net/http.(*Server).Serve in goroutine 56
	/usr/local/go/src/net/http/server.go:3454 +0xa9a
```
## 🧯 Why This Helped
- I instantly knew which job triggered the crash.
- I got full tracebacks.
- Logs had context from stdout, not just stack traces.
- It didn’t require changing existing log lines — just redirected and parsed them.
## 🔍 But… is there a better way?
This works great for quick debugging in dev/staging — especially when logs are all over the place.
But long-term? Maybe not ideal.
- Should we wrap logs with structured logging (zerolog, zap, logrus) from the start?
- Should we propagate job IDs with `context` instead of parsing stdout? (Rough sketch after this list.)
- Should the panic handler be integrated with monitoring tools like Sentry or Prometheus?
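For the `context` route, here's a minimal sketch of what that could look like (the package name and the `WithJobID`/`JobIDFrom` helpers are hypothetical, not part of the current code):

```go
package jobctx

import "context"

type ctxKey struct{}

// WithJobID attaches a job ID to the request context as soon as it is known.
func WithJobID(ctx context.Context, jobID string) context.Context {
	return context.WithValue(ctx, ctxKey{}, jobID)
}

// JobIDFrom retrieves it later; the recover middleware could then log
// JobIDFrom(c.Request().Context()) instead of regex-parsing stdout.
func JobIDFrom(ctx context.Context) string {
	if id, ok := ctx.Value(ctxKey{}).(string); ok {
		return id
	}
	return ""
}
```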
This hack works — but what's your go-to way of logging panics for quick debugging in Go?
Drop your tricks. I'm listening. 👀
I've been building FreeDevTools: a collection of UI/UX-focused tools crafted to simplify workflows, save time, and reduce the friction of searching for tools and materials.
Any feedback or contributors are welcome!
It’s online, open-source, and ready for anyone to use.
👉 Check it out: FreeDevTools
⭐ Star it on GitHub: freedevtools
Let’s make it even better together.