- Book: Thinking in Go (2-book series) — Complete Guide to Go Programming + Hexagonal Architecture in Go · Ebook from Apr 22
- Also by me: Observability for LLM Applications
- My project: Hermes IDE | GitHub — an IDE for developers who ship with Claude Code and other AI coding tools
- Me: xgabriel.com | GitHub
Your Express app runs on one thread. Your Fastify app runs on one thread. Your clever async/await composition runs on one thread. Go is the first mainstream language where "one request, one goroutine" is not a lie, and that changes how you design the next service.
The pitch you keep hearing in 2026 is that a lot of the AI serving layer (the agent runtimes, the gateways, the observability plumbing) is being written in Go. Teams that were all-in on Node for ten years are spinning up a second service, and it is not a Node service. Before you follow them, it helps to know what actually carries over from your Express muscle memory and what does not.
This post is the side-by-side you wish someone had handed you. Runnable code in both languages. Which Node patterns translate, which do not, and what Node still does better in April 2026, with Bun and Deno firmly in the mix.
```mermaid
flowchart LR
  subgraph NODE["Node.js single thread"]
    Q[Task queue] --> L[Event loop]
    L --> C{CPU task?}
    C -->|yes| B[Blocks everything]
    C -->|no| I[libuv I/O]
    I --> Q
  end
  subgraph GO["Go M:N scheduler"]
    R[Requests] --> GR[N goroutines]
    GR --> M[M OS threads]
    M --> CPU[All cores utilized]
  end
```
The runtime model, in one paragraph
Node runs a single-threaded event loop. One CPU core, one JavaScript thread, one callback queue, cooperative yielding on every await. You scale horizontally by running more processes, usually under PM2 or inside a container orchestrator. Libuv handles I/O on a thread pool under the hood, but your code sees one thread.
Go runs a pool of OS threads whose size is governed by GOMAXPROCS, which defaults to the number of cores on the box. The runtime schedules goroutines onto those threads. A goroutine is a few kilobytes of stack, not an OS thread, and the runtime can park millions of them. When a goroutine blocks on I/O, the scheduler moves the OS thread to a different goroutine. You do not write an event loop. You do not paint async on every function. You write synchronous-looking code that happens to be concurrent.
That one difference cascades through every other comparison below.
An Express handler and its Go equivalent
Concrete first. Here is a small Express service that fetches a user, fans out two upstream calls in parallel, combines them, and returns JSON.
```javascript
// server.js — Express
import express from "express";

const app = express();

async function getUser(id) {
  const r = await fetch(`https://api.internal/users/${id}`);
  if (!r.ok) throw new Error(`user ${id}: ${r.status}`);
  return r.json();
}

async function getOrders(id) {
  const r = await fetch(`https://api.internal/orders?user=${id}`);
  if (!r.ok) throw new Error(`orders ${id}: ${r.status}`);
  return r.json();
}

app.get("/profile/:id", async (req, res, next) => {
  try {
    const [user, orders] = await Promise.all([
      getUser(req.params.id),
      getOrders(req.params.id),
    ]);
    res.json({ user, orders });
  } catch (err) {
    next(err);
  }
});

app.listen(3000);
```
Now the same shape in Go with net/http and chi. This compiles on Go 1.22+.
```go
// main.go — Go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net/http"

	"github.com/go-chi/chi/v5"
	"golang.org/x/sync/errgroup"
)

type User struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

type Order struct {
	ID    string  `json:"id"`
	Total float64 `json:"total"`
}

func fetchJSON(ctx context.Context, url string, out any) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("%s: %d", url, resp.StatusCode)
	}
	return json.NewDecoder(resp.Body).Decode(out)
}

func profile(w http.ResponseWriter, r *http.Request) {
	id := chi.URLParam(r, "id")

	var user User
	var orders []Order

	g, ctx := errgroup.WithContext(r.Context())
	g.Go(func() error {
		return fetchJSON(ctx, "https://api.internal/users/"+id, &user)
	})
	g.Go(func() error {
		return fetchJSON(ctx, "https://api.internal/orders?user="+id, &orders)
	})
	if err := g.Wait(); err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]any{
		"user":   user,
		"orders": orders,
	})
}

func main() {
	r := chi.NewRouter()
	r.Get("/profile/{id}", profile)
	log.Fatal(http.ListenAndServe(":3000", r))
}
```
A few things worth looking at before moving on. The Go handler never awaits. It calls g.Wait() and blocks the goroutine, which is cheap. Each request runs on its own goroutine that the runtime schedules. There is no res.json() magic method, you marshal yourself. There is no middleware chain that assumes a central app object, chi is a thin router that composes http.Handler. The errgroup package is the Go answer to Promise.all, and it cancels the shared context if any goroutine returns an error.
That last detail is the one a lot of Node developers miss for the first week. Cancellation in Go is a first-class value you pass down. Cancellation in Node is a patchwork of AbortController support that varies by library.
Dependency management, shaped by philosophy
package.json lists a few direct dependencies and node_modules ends up with four hundred transitive ones. That is not a moral failure of the ecosystem, it is what happens when the stdlib is thin and publishing a package is free. You install express and you get Connect-derived middleware, body parsers, cookie libraries, and a dozen small utilities that each do one thing.
go.mod lists a few direct dependencies too. The difference is what you reach for them for. The Go standard library ships production-grade HTTP server and client (net/http), JSON codec (encoding/json), crypto, templating, compression, SQL driver interface, HTTP/2, TLS 1.3, and context cancellation. You do not need a package for reading a JSON body. You do not need a package for routing until you want fancy routing, which is when chi or gorilla/mux shows up as one dependency, not forty.
The practical effect. A typical Go web service has 5 to 20 direct dependencies and maybe 80 transitive ones. A typical Node web service has 20 to 50 direct dependencies and somewhere between 800 and 1500 transitive ones. The supply-chain surface is different by an order of magnitude, and so is the audit effort when a security advisory lands.
Go modules use minimum version selection, not the range-based resolution npm does. You pin a version in go.mod, you get that version, and upgrades are explicit. go mod tidy prunes what you do not use. go.sum is checksum-locked. Reproducible builds are the default, not a configuration.
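For contrast with a typical package.json, the entire dependency manifest for the profile service above fits in a few lines. The module path and pinned versions here are illustrative, not prescriptive:

```
// go.mod — the whole manifest for the service above
module example.com/profilesvc

go 1.22

require (
	github.com/go-chi/chi/v5 v5.0.12
	golang.org/x/sync v0.7.0
)
```

`go mod tidy` keeps this file honest, and `go.sum` pins a checksum for every module that ends up in the build graph.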
Types: TypeScript is not Go's static typing
You already use TypeScript. You already think of yourself as a static-types person. Go will still feel different, because TypeScript types erase at runtime and Go types do not.
TypeScript gives you structural typing, unions, generics, conditional types, template literal types, inference that feels like magic. The type checker is a separate program that runs before tsc strips everything out. At runtime, you have JavaScript. A shape that passed the compiler can still be wrong if it arrived from the wire.
Go gives you nominal typing, interfaces satisfied structurally, generics since 1.18, no union types, no exhaustive pattern matching. The type information survives to runtime through reflection. A struct is a layout in memory, not a compile-time ghost. When you decode JSON into a struct, the field tags drive it, and missing fields get zero values unless you opt into strictness.
Two practical consequences. First, you will miss TypeScript's narrowing. Go's answer is usually type assertions and type switches, which are verbose by comparison. Second, you will stop writing any by accident, because Go makes any explicit and awkward, which is a feature.
Error handling is the biggest adjustment
In Node you throw, you catch, you let errors bubble to an Express error middleware. You rely on try/catch around awaits. Unhandled promise rejections either crash the process or get swallowed depending on the Node version.
In Go, functions that can fail return two values, the result and an error. You check it explicitly every time.
```go
resp, err := http.Get(url)
if err != nil {
	return fmt.Errorf("fetch %s: %w", url, err)
}
defer resp.Body.Close()
```
Every. Time. It looks repetitive on first read. It is repetitive on first read. The thing it buys you is that every error path is on the page. No invisible stack unwinding, no middleware catching a thing you forgot about, no mystery about what happens when a function fails. The %w verb in fmt.Errorf wraps the error so errors.Is and errors.As can walk the chain later.
Panics exist. They are for unrecoverable situations like nil pointer dereferences or index out of bounds. You do not use them for regular error flow. Treat them the way you treat process.exit(1) in Node, not the way you treat throw.
A Node developer migrating in one afternoon will write a lot of if err != nil { return err } and feel like they are retyping the same line. Two weeks in, the benefit clicks. When a production incident lands, you can read a function and know what it does when things break, because it is on the screen.
Concurrency primitives, side by side
Your Node toolkit is Promise.all, Promise.allSettled, Promise.race, p-limit for concurrency caps, AbortController for cancellation. It works. It is all on one thread.
Go's toolkit maps cleanly but runs on real threads.
- `Promise.all(tasks)` becomes `errgroup.Group` with `g.Go(...)` and `g.Wait()`. Cancels siblings on first error.
- `Promise.allSettled` is a plain `sync.WaitGroup` plus a slice you write results into.
- `Promise.race` is a `select` statement on two or more channels with a `context` timeout.
- `p-limit(10)` is a buffered channel used as a semaphore, or `errgroup.SetLimit(10)`.
- `AbortController` is `context.WithCancel` or `context.WithTimeout`. Pass the context down and every blocking call respects it.
Here is what a capped-parallelism fan-out looks like.
```go
g, ctx := errgroup.WithContext(r.Context())
g.SetLimit(10)
for _, id := range ids {
	// Go 1.22+ gives each iteration its own id, so the old id := id copy is gone.
	g.Go(func() error {
		return processOne(ctx, id)
	})
}
if err := g.Wait(); err != nil {
	return err
}
```
g.SetLimit(10) caps active goroutines at ten; once the limit is hit, each g.Go call blocks until a slot frees up. You get bounded concurrency without reaching for a library.
The five Node patterns that do not translate
Some of your reflexes will actively hurt you in Go. The big five.
- Middleware-as-stack of `(req, res, next)`. Go's HTTP model is `http.Handler` composition. Middleware wraps a handler and returns a handler. No `next()` call. chi's `Use` takes handler wrappers. Stop threading `next`.
- Global monkey-patching for instrumentation. You cannot wrap `fetch` globally the way you wrap it in Node. Instrumentation in Go is explicit, usually via an `http.RoundTripper` or a wrapped `http.Client` you inject. OpenTelemetry's Go SDK does this cleanly. It is more typing, and it is inspectable.
- `JSON.parse` everywhere with implicit shapes. Decoding into `map[string]any` is legal and miserable. Define a struct. Let the compiler help. If the shape is genuinely dynamic, `json.RawMessage` lets you defer the decision without losing your mind.
- Singletons via module side effects. `import "./db.js"` running a connection setup on import is a Node idiom. In Go, `init()` functions exist but they are discouraged for anything with side effects. Wire your dependencies in `main()` and pass them down. This is the same pattern hexagonal architecture has been telling you about for years.
- Hot reload on every save as part of the dev loop. `nodemon` and Bun's `--hot` keep state across reloads. Go's answer is `air` or `reflex`, and they restart the process. A restart on a Go service is fast, often under a second, but it is a restart. You lose in-memory state every time. Design for that.
What Node still does better in April 2026
This is not a one-way flight. If you are reading this in 2026, Node's position is stronger than the Go-is-winning posts suggest, in specific places.
Ecosystem breadth. npm has more than two million packages. Some of them are junk, most of them are not relevant, and a few hundred are the only game in town for things like PDF manipulation, SDKs for niche SaaS APIs, and browser automation. Go's module ecosystem is smaller by an order of magnitude. For a backend that does the normal things, Go has enough. For a backend that does a weird thing, you may find there is no library and you write it yourself.
Developer experience in the edit loop. Bun's bun --hot, Deno's --watch, and Node's own watch mode preserve process state on reload. Your database connection pool stays warm. Your in-memory cache survives. The feedback loop is tight. Go restarts the process. For a heavy service with long startup, that matters.
Shared code with the frontend. A TypeScript monorepo where the backend and frontend share types, validation schemas (Zod), and utility functions is a real productivity win. Go cannot give you that. You will define the request shape twice.
Scripting and glue. For a quick script, a cron job, a CLI that shells out to three tools and formats the output, Node is faster to write than Go. Go's verbosity is a tax that pays back at scale. For a 50-line script, the tax is pure cost.
Bun and Deno are real in 2026. Bun's HTTP server benchmarks close the gap with Go for simple workloads, and Bun's built-in test runner, bundler, and package manager make the toolchain story competitive. Deno's permission model and built-in TypeScript support are what Node-next-generation should have been. If your ceiling is a fast single-box service and you love TypeScript, picking Bun over Go is a defensible call in April 2026.
When Go is the right move
Go earns its spot when the workload is network I/O heavy, long-lived, concurrent, and deployed as a compiled binary. Gateways. Proxies. Agent runtimes. Observability collectors. Anything that holds ten thousand connections and shuffles bytes between them. Anything where memory per instance and cold-start time land on a cost report.
Go is not the right move for a batch script, a CLI glue tool, or a service whose hardest constraint is shared types with a React frontend. Those are still Node shapes.
The migration is not whole-codebase. It is per-service. Pick one service that bleeds when scaled, port it, keep the rest on Node. That is how most of the teams you read about in 2026 actually did it.
```mermaid
flowchart LR
  subgraph NPM["node_modules/"]
    A[express] --> A1[qs]
    A --> A2[body-parser]
    A2 --> A3[bytes]
    A2 --> A4[content-type]
    A --> A5[send]
    A5 --> A6[mime-types]
    A --> A7[serve-static]
  end
  subgraph GOSTD["Go stdlib + 2 deps"]
    N[net/http] --> J[encoding/json]
    N --> SL[log/slog]
    N --> E[chi router]
  end
```
If this was useful
Thinking in Go is a two-book series written for exactly this move. The first book teaches the language the way a backend developer wants to learn it, from the runtime model down. The second book shows you how to structure a real service once you have the syntax, using hexagonal architecture as the spine. The code compiles. The examples are services, not toys.
The observability book goes one layer up, into how you instrument and debug Go services once they are in production, with OpenTelemetry GenAI semantic conventions for AI backends specifically.
- Thinking in Go (2-book series): Complete Guide to Go Programming + Hexagonal Architecture in Go
- Observability for LLM Applications: Amazon
- Hermes IDE: hermes-ide.com | GitHub — an IDE for developers who ship with Claude Code and other AI coding tools
- Me: xgabriel.com | GitHub