Every CLI tool eventually needs the same thing: show the user that something is happening, print progress, and clean up the output when it's done. The straightforward approach — scattered fmt.Printf calls around your business logic — works until you need it to look good in a real terminal, be silent in CI, and emit OpenTelemetry spans in production. Then it stops scaling.
This post walks through a pattern for wrapping long-running functions in Go so all of that comes for free.
The core idea
Instead of calling your function directly, you pass it as a callback:
```go
err := taskglow.Wrap(ctx, "Deploying application", func(ctx context.Context, t *taskglow.Task) error {
    t.Log("connecting to server")
    // ... do work ...
    t.Progress(0.5, "uploading artifacts")
    // ... do more work ...
    return nil
})
```
The wrapper owns the output. Your function just reports what it is doing. In a real terminal you get a spinner with a live progress bar; in a CI pipe you get timestamped plain-text lines — same code, no if isatty anywhere.
In a TTY:

```
» Deploying application
⠿ uploading artifacts
[██████████░░░░░░░░░░] 50%
✓ Deploying application 2.3s
```

In a plain pipe:

```
[10:41:22] » Deploying application
[10:41:22] connecting to server
[10:41:23] uploading artifacts
[10:41:25] ✓ Deploying application 2.3s
```
Designing the Task type
The callback receives a *Task. It is deliberately thin — no output, no formatting. It is a handle for the caller to push structured events upward:
```go
type Task struct {
    ctx      context.Context
    cancel   context.CancelCauseFunc
    renderer Renderer
    onLog    func(string)
    onWarn   func(string)
}

func (t *Task) Log(msg string)                   { /* record, render, fire hook */ }
func (t *Task) Logf(format string, args ...any)  { t.Log(fmt.Sprintf(format, args...)) }
func (t *Task) Warn(msg string)                  { /* record warning */ }
func (t *Task) Progress(pct float64, msg string) { /* 0.0–1.0 */ }
func (t *Task) Stage(name string, i, total int)  { /* "step 2/4" */ }
func (t *Task) Fail(err error)                   { t.cancel(err) }
func (t *Task) Context() context.Context         { return t.ctx }
```
Fail cancels the internal context with a cause. The wrapper reads that cause after the function returns and surfaces it as the error, so the callback can abort from anywhere in a deeply nested call stack without threading an error through every return value.
Environment detection
The renderer is chosen before the function is called and never changes mid-flight. The logic is straightforward:
```go
func buildRenderer(opts options) Renderer {
    switch opts.mode {
    case ModeTTY:
        return newTTYRenderer(opts)
    case ModePlain:
        return newPlainRenderer(opts)
    default: // ModeAuto
        if terminal.IsTerminal(opts.writer) {
            return newTTYRenderer(opts)
        }
        return newPlainRenderer(opts)
    }
}
```
terminal.IsTerminal wraps golang.org/x/term. That is the only external dependency for core rendering.
Parallel tasks
The same pattern extends to concurrent work. A Group runs tasks in parallel and renders each as its own row:
```go
grp := taskglow.NewGroup(ctx)
grp.Go("Build frontend", buildFrontend)
grp.Go("Build backend", buildBackend)
grp.Go("Run migrations", runMigrations)
err := grp.Wait()
```
In TTY mode each row has its own spinner, progress bar and last-log line, all updated at the same tick. In plain mode each line is prefixed with the task title so logs from concurrent tasks stay readable.
The TTY renderer uses a single goroutine that repaints the entire multi-row block on each tick — no per-row goroutine, no partial-write tearing.
Hooks for observability
Sometimes the caller needs to react to events without owning the output. Three hooks cover the common cases:
```go
runner := taskglow.New(
    taskglow.WithOnLog(func(msg string) {
        slog.Info(msg)
    }),
    taskglow.WithOnWarn(func(msg string) {
        slog.Warn(msg)
    }),
    taskglow.WithOnFinish(func(s taskglow.Summary) {
        metrics.RecordDuration("task.duration", s.Elapsed,
            "state", s.State.String())
    }),
)
```
Summary carries the final state, elapsed time, collected logs, warnings, and error. It is the same value the OnFinish hook, a log file writer, and an OpenTelemetry adapter all receive.
OpenTelemetry without changes to business logic
The OTel adapter composes the hooks above. It opens a span, wires up the three callbacks, and calls the standard runner:
```go
func (r *Runner) Run(ctx context.Context, title string, fn TaskFunc) error {
    ctx, span := r.tracer.Start(ctx, title)
    defer span.End()
    return taskglow.New(append(r.opts,
        taskglow.WithOnLog(func(msg string) { span.AddEvent(msg) }),
        taskglow.WithOnWarn(func(msg string) {
            span.AddEvent(msg, trace.WithAttributes(
                attribute.Bool("warning", true),
            ))
        }),
        taskglow.WithOnFinish(func(s taskglow.Summary) {
            span.SetAttributes(
                attribute.String("task.state", s.State.String()),
                attribute.String("task.elapsed", taskglow.FormatElapsed(s.Elapsed)),
            )
            if s.State == taskglow.StateFailed && s.Err != nil {
                span.RecordError(s.Err)
                span.SetStatus(codes.Error, s.Err.Error())
            } else {
                span.SetStatus(codes.Ok, "")
            }
        }),
    )...).Run(ctx, title, fn)
}
```
The span context is passed into fn as ctx, so any child spans created inside the callback nest naturally. The business logic does not know it is being traced.
Adapter pattern for existing APIs
The same composition approach works for standard library boundaries. A Cobra adapter turns a command into a task:
```go
cmd := &cobra.Command{
    RunE: cobraadapter.RunE("Deploying", func(ctx context.Context, t *taskglow.Task, cmd *cobra.Command, args []string) error {
        return deploy(ctx, t, args[0])
    }),
}
```
An os/exec adapter streams subprocess output as log events:
```go
result, err := execadapter.Run(ctx, t, "go", "build", "./...")
```
An HTTP adapter wraps a handler and records duration per request:
```go
mux.HandleFunc("/deploy", httpadapter.Handler("API deploy", deployHandler))
```
Each adapter is ~50 lines. None of them contain rendering code — they delegate entirely to the core runner via its public option API.
Key design decisions
The renderer is injected, not global. Tests pass a Plain renderer pointing at a bytes.Buffer. Real programs use ModeAuto. There is no package-level state to reset between tests.
The context is internal. The wrapper creates a child context and hands it to the callback. Cancellation from the caller propagates in; cancellation from inside the callback (t.Fail) propagates out as a typed error. Neither direction leaks.
Goroutine lifecycle is deterministic. The TTY spinner goroutine is started by Start() and stopped by Stop(). The shutdown sequence — close stop channel → ticker.Stop() → wg.Wait() → throttle stop — is fixed and tested with -race. Callers do not manage goroutines.
No global state, no init(). The library registers nothing at startup. A program can create multiple concurrent Runner instances pointing at different writers without any coordination.
Conclusion
The pattern — accept a callback, own the output, expose hooks for observability — composes cleanly with the rest of the Go standard library. Adding a spinner to a Cobra command, streaming a subprocess, or emitting OTel spans are all one adapter away from the same business logic, with no changes to the function being wrapped.
The library described here is available at github.com/lignumqt/taskglow.