You wrote a small piece of HTTP middleware. It enforces a per-request deadline using a select and time.After. The code reads cleanly. Code review came back without a comment. Looks like every other example in every Go talk you've watched.
A week later the service is using twice the memory it used on day one, and the heap profile points at time.runtimeTimer. Not your handlers. Not your cache. The standard library's timer struct, allocated thousands of times per second, hanging around in old generations of memory long after the request that produced it has returned to the client.
You did not write a memory leak. You called time.After. On Go 1.22 and earlier, those are the same thing in a hot path.
The shape of the bug
Here is the middleware. Pretend it is yours, because it has been somebody's:
```go
func WithTimeout(next http.Handler, d time.Duration) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		done := make(chan struct{})
		go func() {
			next.ServeHTTP(w, r)
			close(done)
		}()
		select {
		case <-done:
			return
		case <-time.After(d):
			http.Error(w, "timeout", http.StatusGatewayTimeout)
		}
	})
}
```
Read it. The time.After(d) call returns a channel. If the handler completes before d elapses, the <-done case wins and the function returns. The timeout case is abandoned.
Abandoned does not mean freed.
time.After is documented as a thin wrapper. Go to the source in the standard library and you will find this:
```go
func After(d Duration) <-chan Time {
	return NewTimer(d).C
}
```
Every call allocates a fresh *Timer. The runtime keeps a reference to that timer in its internal heap of pending timers, sorted by fire time. Until the timer fires, the runtime's reference is alive. The garbage collector cannot reclaim it.
In a request that returns in 5ms with d = 30s, your timer sits in the runtime's heap for the next 30 seconds, doing nothing. Multiply that by ten thousand requests per second and you have three hundred thousand idle timers in memory at any moment, each one carrying its own channel and a runtime-heap entry the GC cannot touch.
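You can watch the minting happen: two back-to-back time.After calls hand back two distinct channels, each backed by its own *Timer. A minimal sketch (the helper name distinctChannels is mine, not from the standard library):

```go
package main

import (
	"fmt"
	"time"
)

// distinctChannels reports whether two time.After calls return
// different channels, and therefore two different timers. On Go 1.22
// and earlier, both timers now sit in the runtime's heap for an hour.
func distinctChannels() bool {
	c1 := time.After(time.Hour)
	c2 := time.After(time.Hour)
	return c1 != c2
}

func main() {
	fmt.Println(distinctChannels())
}
```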
Why the channel keeps it alive
Two things hold a timer alive in pre-1.23 Go.
The runtime's pending-timer heap is one. The other is the channel. time.After returns that channel as its only output, but time.NewTimer returns the whole Timer value with the channel as a field. The channel exists in both shapes. Either way, the runtime treats the timer's expiry as a real event it has to deliver, regardless of whether anything is reading.
When the timer eventually fires, the runtime sends one Time value into the buffered channel and lets go. At that point garbage collection can finally clean up. Until that point the timer is rooted by the runtime, and the "select abandoned the channel" intuition does not apply.
This is the part that surprises people. Letting a goroutine drop a channel does not free anything if the runtime is the one holding the reference.
The Go 1.23 change, accurately
Go 1.23 changed timer behaviour with two related fixes. The release notes are worth reading; the change is narrower than headlines suggest.
First: timers and tickers that are no longer referred to by the program become eligible for garbage collection immediately, even if their Stop methods have not been called. Earlier versions of Go did not collect unstopped timers until after they had fired and never collected unstopped tickers.
Second: the timer channel is now unbuffered, with capacity 0. This gives Reset and Stop a stronger guarantee — no stale values from before the call will be received after it.
The first change is what helps the time.After case. On Go 1.23 with go 1.23.0 (or later) in your go.mod, an abandoned time.After timer is reachable only through the runtime's internal scheduling, which the runtime now treats as not blocking GC for unreferenced timers. The pile-up that used to last d seconds is largely gone.
The go directive in go.mod matters here, not just the toolchain. A service running on a 1.23 toolchain but still pinned to go 1.21 in go.mod keeps the old timer semantics. That is the most common gotcha when a team upgrades and the leak does not budge.
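Concretely, the new semantics only apply once the module declares 1.23 or later; the module path below is a placeholder:

```
module example.com/svc

go 1.23.0
```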
That is real progress. It is also not a reason to keep writing time.After in a hot loop.
Plenty of services still run 1.21 and 1.22, where the original behaviour is intact and the leak is real. The GC eligibility change also does not help when the program is still referencing the timer: if you store the channel from time.After anywhere, in a struct, a closure, a map keyed by request ID, you are back to keeping the timer alive yourself. The 1.23 fix only helps when nothing else holds on. And even on 1.23, every time.After call still pays for an allocation. In a path that runs a hundred thousand times a second, the Timer struct, the channel, the closure inside the runtime, all add up. Reusing a timer with Reset is cheaper than allocating a new one even when GC is willing to clean up after you.
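The per-call cost is easy to measure with testing.AllocsPerRun, which works fine outside of a test binary. A sketch (the helper name compareAllocs is mine; exact numbers vary by Go version, the ordering does not):

```go
package main

import (
	"fmt"
	"testing"
	"time"
)

// compareAllocs measures heap allocations per iteration for the two
// timer patterns.
func compareAllocs() (afterAllocs, reusedAllocs float64) {
	afterAllocs = testing.AllocsPerRun(100, func() {
		<-time.After(time.Nanosecond) // fresh Timer and channel on every call
	})

	t := time.NewTimer(time.Nanosecond)
	defer t.Stop()
	reusedAllocs = testing.AllocsPerRun(100, func() {
		t.Reset(time.Nanosecond) // same Timer rescheduled, nothing allocated
		<-t.C
	})
	return afterAllocs, reusedAllocs
}

func main() {
	after, reused := compareAllocs()
	fmt.Printf("time.After: %v allocs/op, reused timer: %v allocs/op\n", after, reused)
}
```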
The fix
The shape that does not leak on any Go version, and is also faster, looks like this:
```go
func WithTimeout(next http.Handler, d time.Duration) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		t := time.NewTimer(d)
		defer t.Stop()
		done := make(chan struct{})
		go func() {
			next.ServeHTTP(w, r)
			close(done)
		}()
		select {
		case <-done:
			return
		case <-t.C:
			http.Error(w, "timeout", http.StatusGatewayTimeout)
		}
	})
}
```
Three changes:

- time.NewTimer(d) instead of time.After(d). You hold the *Timer.
- defer t.Stop() so that whichever select case wins, the timer is told to stop. On pre-1.23 Go this is what releases the runtime's reference. On 1.23 it remains the polite, idiomatic shape.
- case <-t.C reads from the timer's own channel.
t.Stop() returning false means the timer already fired. That is fine here because if the timer fired, the timeout branch handled it. If the goroutine completed first, Stop returns true, the runtime drops the timer, and we never see a value on t.C. The middleware case is safe because the timeout branch is the only place that observes a fire; in a loop where the same timer is reused, the stale-value drain matters and the snippet below covers it.
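The fixed middleware is straightforward to exercise with httptest. A sketch, with the same caveat as the original code (the abandoned handler goroutine keeps running after the 504 is written; slowStatus and the 10ms deadline are illustrative choices, not from the text):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

// WithTimeout is the fixed middleware from above.
func WithTimeout(next http.Handler, d time.Duration) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		t := time.NewTimer(d)
		defer t.Stop()
		done := make(chan struct{})
		go func() {
			next.ServeHTTP(w, r)
			close(done)
		}()
		select {
		case <-done:
		case <-t.C:
			http.Error(w, "timeout", http.StatusGatewayTimeout)
		}
	})
}

// slowStatus wraps a handler that sleeps past the deadline and reports
// the status code the client sees.
func slowStatus() int {
	slow := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(200 * time.Millisecond) // well past the 10ms deadline
	})
	srv := httptest.NewServer(WithTimeout(slow, 10*time.Millisecond))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println(slowStatus())
}
```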
Reusing the timer in a real loop
Middleware is the simple case because the handler runs once per timer. The pattern that benefits most from Reset is a long-running loop that polls or waits with a deadline:
```go
func pollUntil(ctx context.Context, interval time.Duration, check func() bool) error {
	t := time.NewTimer(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
			if check() {
				return nil
			}
			t.Reset(interval)
		}
	}
}
```
One timer for the whole loop. Reset puts it back on the runtime's heap with a new fire time. No allocation per iteration. No abandoned timers piling up. This is the shape every long-lived select loop in your service should use.
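In use, the loop looks like this. A runnable sketch where the condition holds on the third poll (pollThreeTimes and the 5ms interval are mine, for illustration):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// pollUntil is the loop from above: one timer for the whole loop,
// rescheduled with Reset instead of reallocated.
func pollUntil(ctx context.Context, interval time.Duration, check func() bool) error {
	t := time.NewTimer(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
			if check() {
				return nil
			}
			t.Reset(interval)
		}
	}
}

// pollThreeTimes polls with a condition that becomes true on the
// third check, returning the poll count and any error.
func pollThreeTimes() (int, error) {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	n := 0
	err := pollUntil(ctx, 5*time.Millisecond, func() bool {
		n++
		return n >= 3
	})
	return n, err
}

func main() {
	n, err := pollThreeTimes()
	fmt.Println(n, err)
}
```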
A subtlety on pre-1.23 Go: if you Reset a timer that has already fired but whose value has not yet been drained from t.C, the next receive can deliver a stale value. The fix is to drain the channel in the Stop-returns-false case before calling Reset:
```go
if !t.Stop() {
	<-t.C
}
t.Reset(interval)
```
On Go 1.23 this is no longer necessary because the channel is unbuffered, which is the second part of the release-notes change. If you are stuck on older versions, the Go wiki page on the timer changes covers the drain pattern in detail.
How to spot it in a profile
The first sign in a heap profile is time.NewTimer showing up as a meaningful share of allocations under runtime.startTimer or time.startTimer. Run:
```shell
go tool pprof -alloc_objects \
  http://localhost:6060/debug/pprof/heap
```
This assumes you have _ "net/http/pprof" imported and a debug listener running. Then run top and look for time.NewTimer in the call graph. If you see thousands of allocations per second coming from a function whose name says "middleware", "client", or "loop", that is your tell. Cross-check with goroutine and block profiles if you suspect the timers are also keeping channels alive.
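Wiring up the debug listener is one import and one goroutine. The sketch below hits the pprof index in-process to show the routes exist; in a real service you would instead run http.ListenAndServe("localhost:6060", nil) in a goroutine on an internal-only port:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* on http.DefaultServeMux
)

// pprofIndexStatus sends a request through the default mux and returns
// the status code of the pprof index page.
func pprofIndexStatus() int {
	rec := httptest.NewRecorder()
	req := httptest.NewRequest(http.MethodGet, "/debug/pprof/", nil)
	http.DefaultServeMux.ServeHTTP(rec, req)
	return rec.Code
}

func main() {
	fmt.Println(pprofIndexStatus())
}
```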
The other tell is the steady-state heap. A service handling a fixed rate of traffic should reach flat resident memory after warm-up. If the curve keeps creeping for the first thirty seconds and then plateaus exactly when your slowest timer would fire, you have found it.
The rule
Inside a select loop or any code path that runs more than a handful of times per second, do not use time.After. Reach for time.NewTimer plus defer t.Stop(), and use t.Reset(d) to reuse the timer if the loop continues. On Go 1.23 the pile-up is mostly fixed by the runtime, but the allocation cost and older-version compatibility are still worth the extra lines. time.After is fine at the top of a program, in tests, in a one-shot script, anywhere it runs once and the goroutine exits soon after. Below that, it is a footgun the standard library has been quietly trying to defuse for years.
If this saved you a debugging session
The runtime's timer heap is one of those parts of Go that does not appear in any obvious diagram of the language and ends up shaping production behaviour anyway. Allocations, escape analysis, the way goroutines hold references to channels, the difference between "the program is done with this" and "the runtime is done with this": they all sit underneath the surface API. They decide whether your service flatlines or grows. The Complete Guide to Go Programming covers the runtime model in the same plain way it covers the language, with runnable examples for the cases the standard library does not document loudly.
If you ship Go alongside an AI coding assistant, Hermes IDE is the editor I build for that workflow, designed for the loop where the AI reads and edits your Go code with you, not at you.
