(This is Part 3 of the series. Read Part 2 here: https://dev.to/baldrvivaldelli/building-an-effect-runtime-in-typescript-part-2-integrating-the-javascript-ecosystem-like-lego-48g7)
Why HTTP deserves its own layer
In Part 2 we focused on integrating the JavaScript ecosystem into a pure, deterministic effect runtime:
fibers, schedulers, scopes, abortable promises.
HTTP is the perfect stress test for that design.
Why?
Because HTTP mixes:
- async boundaries (fetch)
- cancellation (AbortController)
- structured resource lifetimes
- user‑facing ergonomics (DX)
- and a lot of accidental complexity
In this post we’ll build brass-http, a small ZIO‑inspired HTTP layer on top of brass-runtime, and explain:
- how wire‑level effects stay pure
- how content decoding is layered
- how DX helpers are built without leaking semantics
- what’s coming next
Design goals
Before writing code, we locked in a few constraints:
- No Promises in semantics: HTTP must be modeled as Async<R, E, A>, not Promise<A>.
- Separation of concerns:
  - Wire protocol (status, headers, bytes)
  - Content decoding (text, json)
  - Metadata (timings, proxy info, TLS, etc.)
- ZIO‑style middleware: everything should be composable:
  - withMeta
  - withLogging
  - withRetry
  - withTracing
- Lazy + interruptible: no request starts until the fiber runs, and interrupting a fiber aborts the HTTP request.
The core abstraction: Request → Async
At the lowest level, HttpClient is just a function:
```ts
type HttpClient = (req: HttpRequest) =>
  Async<unknown, HttpError, HttpWireResponse>
```
No helpers. No JSON. No DX.
This mirrors ZIO exactly: services are functions, not objects.
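For illustration, the request and error types could look roughly like this. This is a sketch: the field and tag names are placeholders, not the exact brass-http definitions.

```ts
// Hypothetical shapes for the request the core client consumes and the errors it
// can fail with; names are illustrative, not the real brass-http types.
type HttpMethod = "GET" | "POST" | "PUT" | "PATCH" | "DELETE"

type HttpRequest = {
  method: HttpMethod
  url: string
  headers?: Record<string, string>
  body?: string // already-serialized body; encoding/decoding live in higher layers
}

type HttpError =
  | { _tag: "NetworkError"; cause: unknown } // fetch rejected: DNS, CORS, abort, …
  | { _tag: "DecodeError"; cause: unknown }  // body could not be decoded (e.g. bad JSON)
  | { _tag: "StatusError"; status: number }  // non-2xx surfaced as a typed failure
```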
HttpWireResponse
```ts
type HttpWireResponse = {
  status: number
  statusText: string
  headers: Record<string, string>
  bodyText: string
  ms: number
}
```
This is the wire layer.
Nothing here assumes JSON, text, or domain models.
Abortable fetch (the right way)
The runtime already provides fromPromiseAbortable, so HTTP integrates cleanly:
```ts
fromPromiseAbortable(signal =>
  fetch(url, { signal })
)
```
Interrupt the fiber → AbortController fires → fetch cancels.
No leaks. No races. Deterministic.
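Putting it together, a fetch-backed wire client can be one fromPromiseAbortable call that performs the fetch, reads the body, and measures timing inside the abortable promise. A simplified sketch (the import path is assumed, and error mapping is elided):

```ts
// Sketch of a fetch-backed wire client. `fromPromiseAbortable` is the bridge from
// Part 2; the import path is assumed. HttpClient / HttpWireResponse are the types above.
import { fromPromiseAbortable } from "brass-runtime"

const fetchHttpClient: HttpClient = (req) =>
  fromPromiseAbortable(async (signal: AbortSignal) => {
    const start = Date.now()
    const res = await fetch(req.url, {
      method: req.method,
      headers: req.headers,
      body: req.body,
      signal, // interrupting the fiber fires the AbortController behind this signal
    })
    const bodyText = await res.text()
    const wire: HttpWireResponse = {
      status: res.status,
      statusText: res.statusText,
      headers: Object.fromEntries(res.headers.entries()),
      bodyText,
      ms: Date.now() - start,
    }
    return wire
  })
// The real client also maps rejected promises into HttpError values;
// that error-mapping step is left out of this sketch.
```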
Layering content: decoding is not transport
Instead of baking JSON into the client, decoding is layered on top:
```ts
type HttpResponse<A> = {
  status: number
  headers: Record<string, string>
  body: A
}
```
Helpers like:
```ts
getText(): Async<_, HttpError, HttpResponse<string>>
getJson<A>(): Async<_, HttpError, HttpResponse<A>>
```
are implemented as pure mappings over the wire response.
This means:
- JSON parsing errors are normal effect failures
- you can swap decoders freely
- the wire client stays reusable
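In sketch form, the decoders are plain functions over HttpWireResponse, and the helpers just map them over the wire effect. This assumes a map-style combinator on Async; the real combinator name and import may differ.

```ts
// Sketch: decoders are pure functions over the wire response; `map` is an assumed
// Async combinator from brass-runtime.
import { map } from "brass-runtime"

const decodeText = (wire: HttpWireResponse): HttpResponse<string> => ({
  status: wire.status,
  headers: wire.headers,
  body: wire.bodyText,
})

const decodeJson = <A>(wire: HttpWireResponse): HttpResponse<A> => ({
  status: wire.status,
  headers: wire.headers,
  body: JSON.parse(wire.bodyText) as A,
})

const getJsonFrom = <A>(client: HttpClient, req: HttpRequest) =>
  map(client(req), (wire) => decodeJson<A>(wire))
// In the real layer a JSON.parse failure is captured and surfaced as an HttpError
// effect failure instead of escaping as a thrown exception.
```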
Metadata as middleware (not pollution)
Originally, responses carried metadata inline:
meta: { statusText, ms }
This was a mistake.
Instead, metadata is a middleware:
withMeta(client)
This mirrors ZIO:
- environments add capabilities
- middleware enriches effects
- the core stays minimal
Want tracing headers? TLS info? Proxy hops?
→ add middleware, don’t change types.
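In sketch form, a middleware is nothing more than a function from HttpClient to HttpClient. A hypothetical logging middleware shows the shape, reusing the assumed map combinator from above (in a fully pure setup the log call would itself be an effect; it is inlined here for brevity):

```ts
// Sketch: a middleware is just HttpClient → HttpClient.
type HttpMiddleware = (client: HttpClient) => HttpClient

// Example: observe each wire response and pass it through unchanged.
const withLogging =
  (log: (msg: string) => void): HttpMiddleware =>
  (client) =>
  (req) =>
    map(client(req), (wire) => {
      log(`${req.method} ${req.url} -> ${wire.status} (${wire.ms}ms)`)
      return wire
    })
```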
DX without semantic leaks
Users don’t want to build HttpRequest by hand.
So we provide a thin DX layer:
```ts
http.get("/posts/1")
http.postJson("/posts", body)
```
Internally, this:
- builds a request
- calls the core client
- applies decoding middleware
But critically:
DX helpers are just functions, not magic
No hidden execution. No eager fetches.
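A simplified sketch of such a builder, reusing the fetch client and decodeJson sketches from above (the optional client parameter and the internals are illustrative only):

```ts
// Sketch of the DX layer: thin helpers that build HttpRequest values, call the
// core client, and apply decoding. Option names and internals are assumptions.
const httpClientBuilder = (config: { baseUrl: string; client?: HttpClient }) => {
  const client = config.client ?? fetchHttpClient
  const url = (path: string) => new URL(path, config.baseUrl).toString()

  return {
    get: (path: string) => client({ method: "GET", url: url(path) }),

    getJson: <A>(path: string) =>
      map(client({ method: "GET", url: url(path) }), (wire) => decodeJson<A>(wire)),

    postJson: <A>(path: string, body: unknown) =>
      map(
        client({
          method: "POST",
          url: url(path),
          headers: { "content-type": "application/json" },
          body: JSON.stringify(body),
        }),
        (wire) => decodeJson<A>(wire)
      ),
  }
}
```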
Example usage
```ts
const http = httpClientBuilder({
  baseUrl: "https://jsonplaceholder.typicode.com"
})

const post = await toPromise(
  http.getJson<Post>("/posts/1"),
  {}
)

console.log(post.body.title)
```
- lazy
- interruptible
- deterministic
- zero Promises in the core
What this unlocked
With this design, we get:
- Retry policies
- Timeouts
- Logging
- Metrics
- Tracing
- Test doubles
All as middleware, not rewrites.
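For example, stacking the earlier sketches, an instrumented client can be assembled without touching any core types (the client option belongs to the builder sketch above, not necessarily the real API):

```ts
// Hypothetical composition: concerns wrap the wire client; nothing rewrites it.
const instrumented: HttpClient = withLogging(console.log)(fetchHttpClient)

const http = httpClientBuilder({
  baseUrl: "https://jsonplaceholder.typicode.com",
  client: instrumented, // the `client` option comes from the builder sketch above
})
```

The same wrapping pattern covers withMeta, withRetry, and withTracing.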
What’s coming next
brass-http
- streaming request / response bodies
- retry & backoff policies
- interceptors
- test client
- HTTP/2 experiments
brass-core
- supervisors
- fiber dumps
- structured logging
- better scheduler instrumentation
- channels & sinks
Final thoughts
brass-http isn’t about replacing Axios or fetch.
It’s about answering a question:
What does HTTP look like when async, cancellation, and resources are first‑class?
The answer is surprisingly small — and very powerful.
If you enjoyed this, Part 4 will dive into streams over HTTP and backpressure.
Thanks for reading 🙏
