DEV Community

Andrei Merlescu


From Zero to Hero: Building a Waiting Room with `room`, `figtree`, and `verbose`

When your Go service gets hit by a traffic spike, you have three options: drop requests with a 429, queue them blindly with no ordering guarantee, or give every user a ticket, show them their position, and admit them in the order they arrived. The third option is what room does. This tutorial wires it together with figtree for runtime-adjustable configuration and verbose for log output that is provably safe to hand to a support engineer — even after issuing paid VIP pass tokens.

By the end you will have a running four-page Gin web application where the 6th concurrent request sees a live queue page, capacity adjusts from the environment without a restart, and every VIP pass token issued is scrubbed from the log file before it touches disk.

All three packages are written by Andrei Merlescu. The source for room, including a complete sample application, lives at github.com/andreimerlescu/room.
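The third option is easy to sketch in miniature. The gate below admits callers strictly in ticket order, never more than a fixed number at once. It is only the core concept, not room's implementation; room adds HTTP, cookies, the queue page, and eviction on top of the same idea.

```go
package main

import (
	"fmt"
	"sync"
)

// gate is a toy FIFO admission gate: arrivals take a monotonically
// increasing ticket and are admitted strictly in ticket order, never
// more than capacity at once.
type gate struct {
	mu       sync.Mutex
	cond     *sync.Cond
	next     int // next ticket number to hand out
	serving  int // lowest ticket not yet admitted
	inside   int // currently admitted
	capacity int
}

func newGate(capacity int) *gate {
	g := &gate{capacity: capacity}
	g.cond = sync.NewCond(&g.mu)
	return g
}

// enter blocks until this caller's ticket is at the front of the line
// AND a slot is free, then admits it and returns the ticket number.
func (g *gate) enter() int {
	g.mu.Lock()
	defer g.mu.Unlock()
	ticket := g.next
	g.next++
	for ticket != g.serving || g.inside >= g.capacity {
		g.cond.Wait()
	}
	g.serving++
	g.inside++
	g.cond.Broadcast()
	return ticket
}

// leave releases a slot, letting the next ticket in line proceed.
func (g *gate) leave() {
	g.mu.Lock()
	g.inside--
	g.mu.Unlock()
	g.cond.Broadcast()
}

func main() {
	g := newGate(2)
	fmt.Println(g.enter()) // ticket 0 admitted
	fmt.Println(g.enter()) // ticket 1 admitted; the room is now full
	g.leave()              // a slot frees up
	fmt.Println(g.enter()) // ticket 2 admitted, in arrival order
}
```

Everything that follows is this pattern with a web-shaped surface: the ticket becomes a cookie, the blocked wait becomes a queue page that polls for its turn.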


Step 1 — Scaffold and dependencies

room is the FIFO waiting room middleware for Gin, built on sema. figtree is the configuration resolver — file, then env, then CLI flags, with validators and live mutation tracking. verbose is the logger that scrubs registered secrets from every line before it hits disk.

mkdir basicwebapp && cd basicwebapp
go mod init github.com/example/basicwebapp
go get github.com/andreimerlescu/room@latest
go get github.com/andreimerlescu/figtree/v2@latest
go get github.com/andreimerlescu/verbose@latest
go get github.com/gin-gonic/gin@latest

Create the files you will fill in across all remaining steps:

touch main.go config.yml

Your go.mod should look like this (the exact versions may be newer):

module github.com/example/basicwebapp

go 1.22

require (
    github.com/andreimerlescu/figtree/v2 v2.0.14
    github.com/andreimerlescu/room       v1.0.0
    github.com/andreimerlescu/verbose    v0.2.0
    github.com/gin-gonic/gin             v1.9.1
)

Your directory:

basicwebapp/
├── go.mod
├── go.sum
├── config.yml
└── main.go

Nothing runs yet. That is fine.


Step 2 — CLI design with figtree

Before any server, any logger, any middleware — design the configuration. Every value the waiting room needs at runtime should be config-file-first, env-var-overridable, and named as a constant so typos cannot silently create a second key that is never read.
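The constant rule guards against a silent failure mode: a string-keyed lookup returns a zero value for a typo'd key instead of an error. A minimal sketch, with a plain map standing in for the config tree (names here are illustrative, not figtree's API):

```go
package main

import "fmt"

// lookup stands in for a config tree: an unknown key quietly returns
// the zero value instead of an error, which is exactly how a typo hides.
func lookup(cfg map[string]int, key string) int { return cfg[key] }

// one constant, one spelling: a typo at a call site becomes a compile
// error instead of a silent second key that is never read.
const kCap = "room-cap"

func main() {
	cfg := map[string]int{"room-cap": 5}

	fmt.Println(lookup(cfg, "room_cap")) // typo'd raw string: silently 0
	fmt.Println(lookup(cfg, kCap))       // constant: always the real key, 5
}
```

With raw strings, the underscore typo ships to production and reads as zero forever; with the constant, `kCapp` fails to compile.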

Add this to main.go:

// main.go — complete file at this stage

package main

import (
    "fmt"
    "os"
    "time"

    "github.com/andreimerlescu/figtree/v2"
)

// config keys — always constants, never raw strings at call sites
const (
    kPort            = "port"
    kLogDir          = "log-dir"
    kTruncate        = "truncate"
    kCap             = "room-cap"             // env: ROOM_CAP
    kReaperInterval  = "room-reaper-interval" // env: ROOM_REAPER_INTERVAL
    kRatePerPosition = "room-rate-per-pos"    // env: ROOM_RATE_PER_POS
    kPassDuration    = "room-pass-duration"   // env: ROOM_PASS_DURATION
    kSkipURL         = "room-skip-url"        // env: ROOM_SKIP_URL
)

func main() {
    // config-file-first — all values live in config.yml.
    // env vars override the file. CLI flags override env vars.
    figs := figtree.With(figtree.Options{
        Tracking:   true,
        Germinate:  true,
        Pollinate:  true,
        ConfigFile: "./config.yml",
    })

    // -- server --
    figs.NewInt(kPort, 8080, "port to listen on")
    figs.NewString(kLogDir, "./logs", "directory to write log files into")
    figs.NewBool(kTruncate, false, "truncate log file on each run")

    // -- room --
    figs.NewInt(kCap, 5, "max concurrent requests the room admits")
    // reaper-interval: integer multiplied by time.Second
    figs.NewUnitDuration(kReaperInterval, 10, time.Second,
        "how often the reaper evicts abandoned queue tickets")
    // rate-per-pos: float — $2.50 per position to skip the line
    figs.NewFloat64(kRatePerPosition, 2.50, "per-position cost to skip the queue ($)")
    // pass-duration: integer multiplied by time.Minute
    figs.NewUnitDuration(kPassDuration, 90, time.Minute,
        "how long a VIP pass stays valid after a skip-the-line payment")
    figs.NewString(kSkipURL, "/queue/purchase",
        "URL the waiting room page sends the user to for skip-the-line payment")

    // Problems() catches developer mistakes — duplicate keys, bad
    // validator combos — before Load() runs. These are your bugs.
    if problems := figs.Problems(); len(problems) > 0 {
        for _, p := range problems {
            fmt.Fprintf(os.Stderr, "figtree problem: %v\n", p)
        }
        os.Exit(1)
    }

    // Load resolves: config.yml → env vars → CLI flags
    if err := figs.Load(); err != nil {
        fmt.Fprintf(os.Stderr, "figtree.Load: %v\n", err)
        os.Exit(1)
    }

    // Print everything so you can verify what loaded — fmt.Println for now.
    // verbose replaces this in Step 4.
    fmt.Printf("port=%d log-dir=%s cap=%d reaper=%s rate=%.2f pass=%s skip-url=%s\n",
        *figs.Int(kPort),
        *figs.String(kLogDir),
        *figs.Int(kCap),
        *figs.Duration(kReaperInterval),
        *figs.Float64(kRatePerPosition),
        *figs.Duration(kPassDuration),
        *figs.String(kSkipURL),
    )
}

Create config.yml:

port:                 8080
log-dir:              "./logs"
truncate:             false
room-cap:             5
room-reaper-interval: 10
room-rate-per-pos:    2.50
room-pass-duration:   90
room-skip-url:        "/queue/purchase"

Run it:

go run .

You should see all values printed. Try overriding cap from the environment:

ROOM_CAP=10 go run .

The cap changes without touching config.yml. That is the priority chain working.
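The chain itself is simple to sketch. This illustrates the default, file, env, flag precedence order, not figtree's internals:

```go
package main

import "fmt"

// resolve picks the highest-priority source that supplied a value.
// Priority (lowest to highest): built-in default, config file, env
// var, CLI flag. A nil pointer means "this source did not set it".
func resolve(def string, file, env, flag *string) string {
	v := def
	if file != nil {
		v = *file
	}
	if env != nil {
		v = *env
	}
	if flag != nil {
		v = *flag
	}
	return v
}

func main() {
	file, env := "5", "10"
	fmt.Println(resolve("1", &file, nil, nil))  // file only: 5
	fmt.Println(resolve("1", &file, &env, nil)) // env overrides file: 10
}
```

`ROOM_CAP=10 go run .` is the env layer winning over config.yml in exactly this way.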

Mistake trap: Set room-cap: 0 in config.yml and run. figtree loads it without complaint at this stage — validators are not wired yet. You will fix this in Step 3, where 0 is rejected before the server starts.


Step 3 — Validators, Problems(), and mutations

Now that the tree is declared, lock it down. Validators reject bad values before the server starts. The mutations goroutine is where the live capacity adjustment lives — when ROOM_CAP changes in the environment, wr.SetCap is called immediately, no restart required.

Add validators after each New*() call:

figs.NewInt(kPort, 8080, "port to listen on")
figs.WithValidator(kPort, figtree.AssureIntInRange(1024, 65535))

figs.NewString(kLogDir, "./logs", "directory to write log files into")
figs.WithValidator(kLogDir, figtree.AssureStringNotEmpty)

figs.NewInt(kCap, 5, "max concurrent requests the room admits")
// cap must be at least 1; room.Init rejects 0
figs.WithValidator(kCap, figtree.AssureIntGreaterThan(0))
figs.WithValidator(kCap, figtree.AssureIntLessThan(10001))

figs.NewUnitDuration(kReaperInterval, 10, time.Second,
    "how often the reaper evicts abandoned queue tickets")
// room accepts 5s–24h; match its constraints here
figs.WithValidator(kReaperInterval, figtree.AssureDurationMin(5*time.Second))
figs.WithValidator(kReaperInterval, figtree.AssureDurationMax(24*time.Hour))

figs.NewFloat64(kRatePerPosition, 2.50, "per-position cost to skip the queue ($)")
// rate must be positive
figs.WithValidator(kRatePerPosition, figtree.AssureFloat64Positive)

figs.NewUnitDuration(kPassDuration, 90, time.Minute,
    "how long a VIP pass stays valid after a skip-the-line payment")
// room accepts 1m–24h; match its constraints
figs.WithValidator(kPassDuration, figtree.AssureDurationMin(1*time.Minute))
figs.WithValidator(kPassDuration, figtree.AssureDurationMax(24*time.Hour))

figs.NewString(kSkipURL, "/queue/purchase",
    "URL the waiting room page sends the user to for skip-the-line payment")
figs.WithValidator(kSkipURL, figtree.AssureStringNotEmpty)

Now add the mutations goroutine after figs.Load(). The wr variable is declared as a package-level var — we need it accessible from the goroutine and from main:

if err := figs.Load(); err != nil {
    fmt.Fprintf(os.Stderr, "figtree.Load: %v\n", err)
    os.Exit(1)
}

// mutations goroutine starts AFTER Load()
// because the channel is only meaningful once the tree is live
go func() {
    for m := range figs.Mutations() {
        // verbose.Printf replaces fmt.Println in Step 4
        fmt.Printf("config mutation: %s changed from %v to %v at %s\n",
            m.Property, m.Old, m.New, m.When)

        // live capacity adjustment — when ROOM_CAP changes in the
        // environment, the room adjusts immediately.
        // NOTE: shrinking capacity drains in-flight work first via
        // sema.SetCap — expect a brief pause in admissions when
        // reducing cap under load. This is intentional and safe.
        if m.Property == kCap {
            if newCap, ok := m.New.(int); ok && wr != nil {
                if err := wr.SetCap(int32(newCap)); err != nil {
                    fmt.Printf("wr.SetCap: %v\n", err)
                }
            }
        }
    }
}()

Add wr as a package-level variable above main:

var wr *room.WaitingRoom

Add room to imports:

"github.com/andreimerlescu/room"

Run it and try the live cap adjustment — open a second terminal while the server is running (after Step 5 wires up the HTTP server) and run:

export ROOM_CAP=10

Within one poll cycle you will see the mutation log entry appear and the room silently expand.

Mistake trap: Move the mutations goroutine to before figs.Load() and run it. The channel exists but the tree has not resolved its values yet — mutations fired during load will not be seen. Always start the mutations goroutine after Load() returns.


Step 4 — Wire verbose, replace fmt

Now that figtree is loading and validating correctly, wire the logger. verbose must be initialised before any log call — the moment it is up, all subsequent log lines through verbose.Printf are scrubbed against the registered secrets registry.

Add verbose to imports:

"github.com/andreimerlescu/verbose"

Add verbose initialisation right after the mutations goroutine:

if err := verbose.NewLogger(verbose.Options{
    Dir:      *figs.String(kLogDir),
    Name:     "basicwebapp",
    Truncate: *figs.Bool(kTruncate),
    DirMode:  0o755,
    FileMode: 0o640,
}); err != nil {
    fmt.Fprintf(os.Stderr, "verbose.NewLogger: %v\n", err)
    os.Exit(1)
}
verbose.Printf("basicwebapp starting (verbose v%s / figtree v%s)",
    verbose.VERSION, figtree.Version())

Replace the fmt.Printf startup summary:

// remove this
fmt.Printf("port=%d log-dir=%s cap=%d reaper=%s rate=%.2f pass=%s skip-url=%s\n",
    *figs.Int(kPort),
    *figs.String(kLogDir),
    *figs.Int(kCap),
    *figs.Duration(kReaperInterval),
    *figs.Float64(kRatePerPosition),
    *figs.Duration(kPassDuration),
    *figs.String(kSkipURL),
)

// replace with this
verbose.Printf("config: port=%d log-dir=%s cap=%d reaper=%s rate=%.2f pass=%s skip-url=%s",
    *figs.Int(kPort),
    *figs.String(kLogDir),
    *figs.Int(kCap),
    *figs.Duration(kReaperInterval),
    *figs.Float64(kRatePerPosition),
    *figs.Duration(kPassDuration),
    *figs.String(kSkipURL),
)

Replace the fmt.Printf in the mutations goroutine:

// remove this
fmt.Printf("config mutation: %s changed from %v to %v at %s\n",
    m.Property, m.Old, m.New, m.When)

// replace with this
verbose.Printf("config mutation: %s changed from %v to %v at %s",
    m.Property, m.Old, m.New, m.When)

Add the roomLog helper below main — all room lifecycle events flow through here:

// roomLog writes a room event line to verbose so it appears in the log
// file with a consistent tag. Filter by tag in the shell:
//
//   tail -f logs/basicwebapp.log | grep '\[ FULL'
//   tail -f logs/basicwebapp.log | grep '\[ QUEUE'
func roomLog(tag, msg string) {
    verbose.Printf("[ %s ] %s", tag, msg)
}

Mistake trap: Call verbose.Printf("test") before verbose.NewLogger — move it above the NewLogger block and run. You will see the message printed to stderr with "NewLogger or SetLogger has not been called". verbose does not panic — it fails open to stderr. Move the call back below NewLogger.


Step 5 — Initialise the WaitingRoom

With logging solid, initialise the room. Two things matter here: use gin.New() not gin.Default(), and initialise the room before registering any routes.

Why gin.New() not gin.Default(): gin.Default() installs gin's own Logger middleware, which buffers each log line and prints it after the handler returns. During load testing that means you see nothing until the request is already complete — room events and request logs appear out of order and the queue activity is invisible in real time. gin.New() gives you a blank engine. You install gin.Recovery() manually and add your own logger that prints on entry and exit.

Add the Gin and room setup after the verbose startup log:

r := gin.New()
r.Use(gin.Recovery())
r.Use(requestLogger()) // prints on entry AND exit — see helper below

// initialise the WaitingRoom with the cap from figtree
wr = &room.WaitingRoom{}
if err := wr.Init(int32(*figs.Int(kCap))); err != nil {
    verbose.TracefReturn("room.Init: %v", err)
    os.Exit(1)
}
defer wr.Stop()

// apply all runtime-adjustable settings from figtree
if err := wr.SetReaperInterval(*figs.Duration(kReaperInterval)); err != nil {
    verbose.TracefReturn("wr.SetReaperInterval: %v", err)
    os.Exit(1)
}
if err := wr.SetPassDuration(*figs.Duration(kPassDuration)); err != nil {
    verbose.TracefReturn("wr.SetPassDuration: %v", err)
    os.Exit(1)
}

// rate function reads kRatePerPosition from figtree on every call
// so it stays current if the value changes via Pollinate
wr.SetRateFunc(func(depth int64) float64 {
    return *figs.Float64(kRatePerPosition)
})

wr.SetSkipURL(*figs.String(kSkipURL))
// SetSecureCookie defaults to false for local dev — set true in production
// behind TLS or a TLS-terminating proxy
wr.SetSecureCookie(false)

Add the requestLogger helper below roomLog:

// requestLogger returns a Gin middleware that prints on request arrival
// and again on completion, so you can see room events interleaved with
// request lifecycle in real time during load tests.
func requestLogger() gin.HandlerFunc {
    return func(c *gin.Context) {
        // skip the status polling endpoint — it fires every 3s per
        // queued client and would bury room events in noise
        if c.Request.URL.Path == "/queue/status" {
            c.Next()
            return
        }
        start := time.Now()
        verbose.Printf("[ REQ ] --> %s %s  remote=%s",
            c.Request.Method, c.Request.URL.Path, c.ClientIP())
        c.Next()
        verbose.Printf("[ REQ ] <-- %s %s  status=%d  latency=%s",
            c.Request.Method, c.Request.URL.Path,
            c.Writer.Status(), time.Since(start).Round(time.Millisecond))
    }
}

Mistake trap: Use gin.Default() instead of gin.New(). Run the server and generate load with ab -c 30 -n 200 http://localhost:8080/about. Watch the logs — room events ([FULL], [QUEUE], [ENTER]) arrive in batches after each handler completes rather than in real time. You cannot see the queue building. Switch to gin.New() and the events appear as they happen.


Step 6 — Lifecycle callbacks

Callbacks are what you see in the log during a load test. Register all of them before wr.RegisterRoutes(r) so no events are missed during the startup surge.

Add this block after the wr.SetSecureCookie call:

// register ALL callbacks before RegisterRoutes
wr.On(room.EventFull, func(s room.Snapshot) {
    roomLog("FULL   ", fmt.Sprintf(
        "capacity reached  occupancy=%d/%d  queue=%d  util=%.0f%%",
        s.Occupancy, s.Capacity, s.QueueDepth, pct(s.Occupancy, s.Capacity),
    ))
})
wr.On(room.EventDrain, func(s room.Snapshot) {
    roomLog("DRAIN  ", fmt.Sprintf(
        "room no longer full  occupancy=%d/%d  queue=%d",
        s.Occupancy, s.Capacity, s.QueueDepth,
    ))
})
wr.On(room.EventQueue, func(s room.Snapshot) {
    roomLog("QUEUE  ", fmt.Sprintf(
        "request queued  depth=%d  occupancy=%d/%d  util=%.0f%%",
        s.QueueDepth, s.Occupancy, s.Capacity, pct(s.Occupancy, s.Capacity),
    ))
})
wr.On(room.EventEnter, func(s room.Snapshot) {
    roomLog("ENTER  ", fmt.Sprintf(
        "slot acquired  occupancy=%d/%d  queue=%d  util=%.0f%%",
        s.Occupancy, s.Capacity, s.QueueDepth, pct(s.Occupancy, s.Capacity),
    ))
})
wr.On(room.EventExit, func(s room.Snapshot) {
    roomLog("EXIT   ", fmt.Sprintf(
        "slot released  occupancy=%d/%d  queue=%d  util=%.0f%%",
        s.Occupancy, s.Capacity, s.QueueDepth, pct(s.Occupancy, s.Capacity),
    ))
})
wr.On(room.EventEvict, func(s room.Snapshot) {
    roomLog("EVICT  ", fmt.Sprintf(
        "ghost ticket removed  queue=%d  occupancy=%d/%d",
        s.QueueDepth, s.Occupancy, s.Capacity,
    ))
})
wr.On(room.EventTimeout, func(s room.Snapshot) {
    roomLog("TIMEOUT", fmt.Sprintf(
        "context cancelled before admission  occupancy=%d/%d  queue=%d",
        s.Occupancy, s.Capacity, s.QueueDepth,
    ))
})
wr.On(room.EventPromote, func(s room.Snapshot) {
    roomLog("PROMOTE", fmt.Sprintf(
        "client promoted to front  occupancy=%d/%d  queue=%d",
        s.Occupancy, s.Capacity, s.QueueDepth,
    ))
})

Add the pct helper below requestLogger:

// pct converts occupancy/capacity to a display percentage.
func pct(occupancy, capacity int) float64 {
    if capacity == 0 {
        return 0
    }
    return float64(occupancy) / float64(capacity) * 100
}

Mistake trap: Register an EventFull callback after wr.RegisterRoutes(r) and immediately fire a load test. If the room fills during RegisterRoutes, you miss the first transition. Always register callbacks before RegisterRoutes.


Step 7 — Routes, the waiting room, and runtime secret registration

This is the step where verbose earns its place. The moment PromoteTokenToFront returns a PassToken, that token is registered as a verbose secret before anything else touches it — before it is set as a cookie, before it is mentioned in any log line. The sequence is non-negotiable: register first, then do everything else.

The rule for what goes in logs: log metadata about the event (cost, whether a pass was issued, queue depth), never the token value or any substring of it. A truncated token prefix is not protected by verbose scrubbing because verbose matches the full registered value — logging token=%.8s... bypasses the scrubber entirely. If a value is sensitive enough to register with verbose, it is sensitive enough to keep entirely out of log lines.
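Why a truncated prefix escapes: a scrubber of this kind is, at heart, exact substring replacement of the full registered value. A hedged sketch (verbose's real scrubber may be more elaborate, but the matching rule is the point; the token value is invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// scrub replaces every registered secret's FULL value with its mask.
// A log line containing only a prefix of the secret has no match, so
// it sails through untouched — which is why you log metadata only.
func scrub(line string, secrets map[string]string) string {
	for value, mask := range secrets {
		line = strings.ReplaceAll(line, value, mask)
	}
	return line
}

func main() {
	secrets := map[string]string{"tok_8f3a9c1d2e4b": "[VIP_PASS]"}

	fmt.Println(scrub("issued token=tok_8f3a9c1d2e4b", secrets)) // masked
	fmt.Println(scrub("issued token=tok_8f3a...", secrets))      // prefix: NOT masked
}
```

The second line is the `token=%.8s...` trap from the paragraph above, made concrete: the scrubber never sees a match, and the prefix lands in the log as-is.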

Add routes and the server to main, after the callbacks block:

// payment routes — registered BEFORE RegisterRoutes so they bypass the queue
r.GET("/queue/purchase", handlePurchasePage)
r.POST("/queue/purchase/confirm", handlePurchaseConfirm)

// RegisterRoutes installs:
//   OPTIONS /queue/status  (CORS preflight)
//   GET     /queue/status  (polling endpoint for the waiting room page)
//   r.Use(wr.Middleware()) (gates every route registered after this line)
wr.RegisterRoutes(r)

// application routes — all gated by the waiting room
r.GET("/",        homePage)
r.GET("/about",   aboutPage)
r.GET("/pricing", pricingPage)
r.GET("/contact", contactPage)

addr := fmt.Sprintf(":%d", *figs.Int(kPort))
verbose.Printf("basicwebapp listening on http://localhost%s  cap=%d  rate=$%.2f/pos  pass=%s",
    addr, wr.Cap(), *figs.Float64(kRatePerPosition), wr.PassDuration())

if err := r.Run(addr); err != nil {
    verbose.TracefReturn("r.Run: %v", err)
    os.Exit(1)
}

Add the payment handlers and page handlers below pct:

// handlePurchasePage shows the payment confirmation page.
// In production: redirect to a Stripe Checkout session.
func handlePurchasePage(c *gin.Context) {
    cookie, err := c.Request.Cookie("room_ticket")
    if err != nil || cookie.Value == "" {
        c.Data(http.StatusBadRequest, "text/html; charset=utf-8", page(
            "Error", "<h1>No queue ticket found</h1><a href='/'>← Back</a>",
        ))
        return
    }

    // check for an active VIP pass — no need to pay again
    if passCookie, err := c.Request.Cookie("room_pass"); err == nil {
        if wr.HasValidPass(passCookie.Value) {
            c.Data(http.StatusOK, "text/html; charset=utf-8", page(
                "VIP pass active",
                "<h1>Your VIP pass is still active</h1><p>You'll be auto-promoted — no payment needed.</p><a href='/'>← Back</a>",
            ))
            return
        }
    }

    cost, err := wr.QuoteCost(cookie.Value, 1)
    if err != nil {
        c.Data(http.StatusOK, "text/html; charset=utf-8", page(
            "Skip the line",
            fmt.Sprintf("<h1>Skip the line</h1><p>%s</p><a href='/'>← Back</a>", err.Error()),
        ))
        return
    }

    verbose.Printf("GET /queue/purchase — cost=$%.2f queue=%d", cost, wr.QueueDepth())
    c.Data(http.StatusOK, "text/html; charset=utf-8", purchasePage(cost, wr.PassDuration()))
}

// handlePurchaseConfirm processes the payment and promotes the token.
// In production: verify Stripe webhook signature before calling PromoteTokenToFront.
func handlePurchaseConfirm(c *gin.Context) {
    cookie, err := c.Request.Cookie("room_ticket")
    if err != nil || cookie.Value == "" {
        c.JSON(http.StatusBadRequest, gin.H{"error": "no room_ticket cookie"})
        return
    }

    result, err := wr.PromoteTokenToFront(cookie.Value)
    if err != nil {
        verbose.Printf("POST /queue/purchase/confirm — promotion failed: %v", err)
        c.Data(http.StatusOK, "text/html; charset=utf-8", page(
            "Payment failed",
            fmt.Sprintf("<h1>Something went wrong</h1><p>%s</p><a href='/'>← Back</a>", err.Error()),
        ))
        return
    }

    // ── THE KEY MOMENT ────────────────────────────────────────────────────
    // Register the VIP pass token with verbose FIRST — before the cookie
    // is set, before anything is logged, before the response is written.
    //
    // After this line, if the pass token value appears anywhere in any
    // log line this process writes, verbose replaces it with [VIP_PASS].
    //
    // Do NOT log the token value, any prefix of it, or any derivative.
    // verbose scrubs the full registered value — a truncated prefix escapes
    // the scrubber entirely. Log metadata only: cost, whether a pass was
    // issued, queue depth.
    if result.PassToken != "" {
        if err := verbose.AddSecret(verbose.SecretBytes(result.PassToken), "[VIP_PASS]"); err != nil {
            verbose.Printf("POST /queue/purchase/confirm — failed to protect pass token: %v", err)
        }
    }

    // safe to log now — pass token is scrubbed, metadata only
    verbose.Printf("POST /queue/purchase/confirm — promoted  cost=$%.2f  pass_issued=%v  queue=%d",
        result.Cost, result.PassToken != "", wr.QueueDepth())

    // set the VIP pass cookie so the client is auto-promoted on re-entry
    if result.PassToken != "" {
        http.SetCookie(c.Writer, &http.Cookie{
            Name:     "room_pass",
            Value:    result.PassToken,
            Path:     wr.CookiePath(),
            MaxAge:   int(wr.PassDuration().Seconds()),
            HttpOnly: true,
            Secure:   false, // set true in production behind TLS
            SameSite: http.SameSiteLaxMode,
        })
    }

    passMsg := ""
    if result.PassToken != "" {
        passMsg = fmt.Sprintf("<p>Your VIP pass is valid for <strong>%s</strong>.</p>",
            wr.PassDuration().Round(time.Minute))
    }
    c.Data(http.StatusOK, "text/html; charset=utf-8", page("Payment confirmed",
        fmt.Sprintf(`<h1>Payment confirmed — $%.2f</h1>
        <p>You've been moved to the front of the line!</p>%s
        <script>setTimeout(function(){ window.location.href = "/"; }, 2000);</script>
        <noscript><a href="/">← Click here</a></noscript>`, result.Cost, passMsg),
    ))
}

// ── page handlers — all gated by the waiting room ─────────────────────────

const simulatedLatency = 500 * time.Millisecond

func homePage(c *gin.Context) {
    time.Sleep(simulatedLatency)
    c.Data(http.StatusOK, "text/html; charset=utf-8", page("Home",
        `<h1>Welcome</h1>
        <p>This server admits at most <strong>5 concurrent requests</strong>.</p>
        <p>Run <code>ab -c 30 -n 500 http://localhost:8080/about</code> and watch the logs.</p>
        <nav><a href="/about">About</a> · <a href="/pricing">Pricing</a> · <a href="/contact">Contact</a></nav>`,
    ))
}

func aboutPage(c *gin.Context) {
    time.Sleep(simulatedLatency)
    c.Data(http.StatusOK, "text/html; charset=utf-8", page("About",
        `<h1>About</h1>
        <p>Built with <strong>room</strong>, <strong>figtree</strong>, and <strong>verbose</strong>.</p>
        <a href="/">← Home</a>`,
    ))
}

func pricingPage(c *gin.Context) {
    time.Sleep(simulatedLatency)
    c.Data(http.StatusOK, "text/html; charset=utf-8", page("Pricing",
        `<h1>Pricing</h1>
        <p>Skip the line: <strong>$2.50/position</strong> + 90-minute VIP pass.</p>
        <a href="/">← Home</a>`,
    ))
}

func contactPage(c *gin.Context) {
    time.Sleep(simulatedLatency)
    c.Data(http.StatusOK, "text/html; charset=utf-8", page("Contact",
        `<h1>Contact</h1><p>hello@example.com</p><a href="/">← Home</a>`,
    ))
}

// page wraps a body fragment in a complete HTML document.
func page(title, body string) []byte {
    return []byte(`<!DOCTYPE html><html lang="en"><head>
    <meta charset="UTF-8"><title>` + title + `</title>
    <style>body{font-family:system-ui,sans-serif;max-width:700px;margin:4rem auto;padding:0 1.5rem}
    h1{margin-bottom:1rem}p{margin-bottom:1rem}code{background:#f0f0f0;padding:.1em .4em;border-radius:3px}
    a{color:#6c8ef5}</style></head><body>` + body + `</body></html>`)
}

// purchasePage renders the skip-the-line confirmation page.
func purchasePage(cost float64, passDur time.Duration) []byte {
    passNote := ""
    if passDur > 0 {
        passNote = fmt.Sprintf(
            `<p>Includes a <strong>%s VIP pass</strong> — re-enter the queue anytime during that window and skip for free.</p>`,
            passDur.Round(time.Minute))
    }
    return []byte(fmt.Sprintf(`<!DOCTYPE html><html><head><meta charset="UTF-8">
    <title>Skip the line</title></head><body style="font-family:system-ui;max-width:500px;margin:4rem auto;padding:0 1.5rem">
    <h1>Skip the line — $%.2f</h1>%s
    <form method="POST" action="/queue/purchase/confirm">
      <button type="submit" style="background:#6c8ef5;color:#fff;border:none;padding:.75rem 2rem;border-radius:8px;font-size:1rem;cursor:pointer">
        Confirm payment — $%.2f
      </button>
    </form>
    <p style="margin-top:1rem;font-size:.8rem;color:#666">Demo mode — no real payment processed.</p>
    <a href="/">← Back to waiting room</a>
    </body></html>`, cost, passNote, cost))
}

Add net/http to imports.

Mistake trap: Move verbose.AddSecret to after http.SetCookie. Run the server, issue a pass token by confirming a purchase at /queue/purchase, then check logs/basicwebapp.log. Find the POST /queue/purchase/confirm line. The pass token appears in plaintext because verbose did not know about it yet. Move AddSecret back to immediately after PromoteTokenToFront returns, before anything else.


Step 8 — The bash test script

Create test.sh. This script is the pass/fail gate. Exit 0 means every assertion passed — your implementation is correct. It fails fast on the first failed assertion and tells you which step to revisit.

#!/usr/bin/env bash
set -euo pipefail

# ── dependency check ──────────────────────────────────────────────────────────
if ! command -v curl &>/dev/null; then
    echo "ERROR: curl is required."; exit 1
fi
if ! command -v ab &>/dev/null; then
    echo "ERROR: apache bench (ab) is required."
    echo "  macOS: brew install httpd"
    echo "  Linux: sudo apt install apache2-utils"
    exit 1
fi

# ── config ────────────────────────────────────────────────────────────────────
BASE="http://127.0.0.1:8080"
LOG_FILE="./logs/basicwebapp.log"

# ── helpers ───────────────────────────────────────────────────────────────────
pass() { echo "  PASS: $1"; }
fail() { echo "  FAIL: $1 — $2"; exit 1; }

assert_log_contains() {
    local label="$1" pattern="$2" step="$3"
    if ! grep -qF "$pattern" "$LOG_FILE" 2>/dev/null; then
        fail "$label" "expected '$pattern' in log — not found (see step $step)"
    fi
    pass "$label"
}

assert_no_plaintext() {
    local label="$1" secret="$2" step="$3"
    if grep -qF "$secret" "$LOG_FILE" 2>/dev/null; then
        fail "$label" "plaintext secret found in log (see step $step)"
    fi
    pass "$label"
}

assert_status() {
    local label="$1" expected="$2" actual="$3" step="$4"
    if [ "$actual" -ne "$expected" ]; then
        fail "$label" "expected HTTP $expected, got $actual (see step $step)"
    fi
    pass "$label"
}

# ── build and start ───────────────────────────────────────────────────────────
echo "=> building..."
go build -o basicwebapp_bin . || { echo "Build failed."; exit 1; }

echo "=> starting server..."
./basicwebapp_bin &
SERVER_PID=$!
trap 'kill $SERVER_PID 2>/dev/null; rm -f basicwebapp_bin' EXIT

for i in $(seq 1 20); do
    if curl -sf "$BASE/" &>/dev/null; then break; fi
    sleep 0.5
done

echo ""
echo "── assertion 1: server started and log file exists (Step 4) ─────────────"
if [ ! -f "$LOG_FILE" ]; then
    fail "log file exists" "logs/basicwebapp.log not found — check verbose.NewLogger (see step 4)"
fi
pass "log file exists"

echo ""
echo "── assertion 2: config loaded and logged (Step 2 + 4) ───────────────────"
assert_log_contains "config line in log" "config:" "2"

echo ""
echo "── assertion 3: waiting room activates under load (Step 5 + 6) ──────────"
echo "   running ab -c 30 -t 10 -n 100 $BASE/about (up to 10s)..."
# -n must come AFTER -t: -t alone implies -n 50000, and a later -n overrides it
ab -c 30 -t 10 -n 100 "$BASE/about" &>/dev/null || true
sleep 2
# match on the tag prefix — the log format pads tags, e.g. "[ FULL    ]"
assert_log_contains "[FULL] in log"  "[ FULL"  "6"
assert_log_contains "[QUEUE] in log" "[ QUEUE" "6"
assert_log_contains "[ENTER] in log" "[ ENTER" "6"

echo ""
echo "── assertion 4: purchase page requires a queue ticket (Step 7) ──────────"
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$BASE/queue/purchase")
assert_status "GET /queue/purchase returns 400" 400 "$STATUS" "7"

echo ""
echo "── assertion 5: VIP pass token scrubbed from log (Step 7) ──────────────"
COOKIE_JAR=$(mktemp)
for i in $(seq 1 6); do
    curl -s -o /dev/null "$BASE/about" &
done
sleep 0.3
curl -s -o /dev/null -c "$COOKIE_JAR" "$BASE/about" || true
TICKET=$(grep "room_ticket" "$COOKIE_JAR" | awk '{print $NF}' || true)

if [ -n "$TICKET" ]; then
    PASS_TOKEN=$(curl -s -b "$COOKIE_JAR" -c "$COOKIE_JAR" \
        -X POST "$BASE/queue/purchase/confirm" \
        -D - -o /dev/null 2>/dev/null \
        | grep -i "room_pass" | grep -oP 'room_pass=\K[^;]+' || true)

    if [ -n "$PASS_TOKEN" ]; then
        assert_no_plaintext "VIP pass token not in log" "$PASS_TOKEN" "7"
        assert_log_contains "[VIP_PASS] marker in log" "[VIP_PASS]" "7"
    else
        pass "VIP pass token not in log (no pass token issued — room may not have been full)"
    fi
else
    pass "VIP pass token check skipped (could not obtain room_ticket cookie)"
fi
rm -f "$COOKIE_JAR"

wait 2>/dev/null || true

echo ""
echo "── assertion 6: ROOM_CAP mutation logged (Step 3) ───────────────────────"
export ROOM_CAP=8
echo "   waiting up to 90s for ROOM_CAP mutation to appear in log..."
for i in $(seq 1 18); do
    if grep -qF "room-cap" "$LOG_FILE" 2>/dev/null; then
        pass "ROOM_CAP mutation in log"
        break
    fi
    sleep 5
    if [ "$i" -eq 18 ]; then
        fail "ROOM_CAP mutation in log" "mutation not seen after 90s — check Pollinate:true and mutations goroutine (see step 3)"
    fi
done

echo ""
echo "════════════════════════════════════════════════════════════════════════"
echo "  All assertions passed. Your implementation is correct."
echo "════════════════════════════════════════════════════════════════════════"

Make it executable and run it:

chmod +x test.sh
./test.sh

If every assertion passes, you see the final banner. If any assertion fails, the script exits immediately and prints the step number to revisit.


Step 9 — Closing: What You Built and What Comes Next

You started with an empty directory. You finished with a four-page Gin web application that admits at most 5 concurrent requests, queues the rest in FIFO order with a live-updating waiting room page, issues paid VIP passes that auto-promote returning clients for 90 minutes, and whose log file is provably safe to hand to a support engineer. Run ./test.sh one final time and watch every assertion pass.

Take a moment to understand what each package actually did.

room is not rate limiting. Rate limiting drops requests or returns 429. room keeps them — every request gets a ticket, sees its position, and is admitted automatically when a slot opens. The reaper cleans up abandoned clients so ghost tickets never stall the queue. The promotion system with PromoteTokenToFront and GrantPass is a complete commercial primitive: you can charge per position, issue time-limited passes, and let the middleware handle the rest. Your handlers never change — they see normal requests arriving at the rate you configured.
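If the FIFO guarantee feels abstract, here is a toy admission loop in plain bash. It models none of room's machinery (no semaphore, no reaper, no cookies, no real concurrency); it only illustrates the invariant: tickets are admitted strictly in arrival order, and never more than `CAP` are inside at once.

```shell
#!/usr/bin/env bash
# Toy FIFO admission: 6 arrivals, capacity 2. Illustration only; room
# implements this with a real semaphore (sema), tickets, and a reaper.
CAP=2
queue=(1 2 3 4 5 6)   # ticket numbers in arrival order
inside=0
for ticket in "${queue[@]}"; do
    if [ "$inside" -ge "$CAP" ]; then
        echo "ticket $ticket: queued (room full)"
        # in the real middleware the client polls its position until a slot opens
        inside=0          # pretend the earlier occupants finished
    fi
    echo "ticket $ticket: admitted"
    inside=$((inside + 1))
done
```

No ticket ever jumps ahead of an earlier one; that ordering is the property room preserves under real concurrent load.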

figtree is not a flag parser. It is a priority-ordered configuration resolver with live mutation tracking. The moment ROOM_CAP changed in the environment, wr.SetCap was called — no restart, no dropped requests, no intervention. In production that is how you respond to traffic spikes: your autoscaler updates the env var, figtree detects it, the room expands. The RateFunc reading from *figs.Float64(kRatePerPosition) on every call means your skip-the-line pricing can change live too.
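The detect-and-react loop that figtree runs for you can be approximated in a few lines of shell. This is a sketch of the polling pattern, not figtree's implementation; the file used as `cap_source` is a stand-in for wherever the authoritative value lives, chosen here only so the demo is self-contained:

```shell
#!/usr/bin/env bash
# Poll an authoritative source for the capacity value; react only on change.
# Stands in for figtree's Pollinate loop feeding the mutations channel.
cap_source=$(mktemp)
echo 5 > "$cap_source"
current=""
for i in 1 2 3; do
    new=$(cat "$cap_source")
    if [ "$new" != "$current" ]; then
        echo "cap changed: '${current}' -> '${new}' (this is where wr.SetCap would be called)"
        current="$new"
    fi
    if [ "$i" -eq 1 ]; then echo 8 > "$cap_source"; fi   # simulate the autoscaler
done
rm -f "$cap_source"
```

The important property is that the reaction fires only on change, so a value that stays stable produces no churn in the room.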

verbose is not just a logger. The moment PromoteTokenToFront returned a PassToken, that token's SHA-512 digest entered verbose's registry and every subsequent log line is scanned against it before it touches disk. The plaintext never persists. The rule you followed — log metadata, never the value or any substring — is the correct pattern for any secret: verbose is a safety net, not a substitute for keeping secrets out of log lines in the first place.
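The "metadata, never the value" rule looks like this in practice: log a fingerprint (length plus a short digest prefix) instead of the token itself. `sha256sum` is used below only because it is universally available in coreutils; per the description above, verbose's own registry works with SHA-512 digests:

```shell
#!/usr/bin/env bash
# Never log the secret itself; log metadata that is useless to an attacker
# but still lets you correlate log lines with a specific token.
token="vip-pass-EXAMPLE-not-a-real-token"
digest=$(printf '%s' "$token" | sha256sum | cut -c1-12)
line="[VIP_PASS] issued len=${#token} digest=${digest}..."
echo "$line"
# Sanity-check the rule: the plaintext must not appear in what was logged.
case "$line" in
    *"$token"*) echo "LEAK" ;;
    *)          echo "clean" ;;
esac
```

This is exactly what `assert_no_plaintext` in `test.sh` verifies from the outside: the marker and the metadata reach the log, the token never does.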


The packages used in this tutorial are part of a larger body of open source work

room, figtree, and verbose were all written by Andrei Merlescu (@andreimerlescu on GitHub). His profile carries 99 public repositories built across 17 years of professional software engineering spanning Cisco, Oracle, Warner Bros. Games, and SurgePays.

Other packages worth exploring alongside the three you just used:

  • sema — the semaphore that backs room. Dynamic resizing, EWMA utilization tracking, context cancellation, drain/reset for maintenance windows. Zero-allocation hot path.
  • checkfs — filesystem existence and permission checks. Pairs naturally with figtree when you want to validate that a config-supplied path actually exists before your server starts.
  • lemmings — load testing built around the concept of NPCs moving through your infrastructure as simulated users across geographic terrains and pack sizes. If you want to know what your waiting room does under real traffic before you ship it, lemmings is how you find out.
