I built testcontainers for Gleam and the name was already taken. Twice.
Let me start with the name, because it tells you everything.
The canonical testcontainers library for the JVM ecosystem is called testcontainers. There is also one for Elixir, and it is called testcontainers. Then someone built a Gleam wrapper around the Elixir one and published it as testcontainers_gleam. By the time I showed up wanting to build something native for Gleam, both testcontainers and testcontainers_gleam were gone.
So the library is called testcontainer. Singular. Not because I think one container is enough. Because I arrived third.
So, why build it at all?
testcontainers_gleam is not a bad library. I want to be clear about that. It does what it says. It wraps the Elixir Testcontainers implementation and exposes it to Gleam code. If you are already in a mixed Elixir/Gleam project where that Elixir dep is sitting in your tree anyway, it is a completely reasonable choice and it probably saves you an afternoon.
But it is a wrapper. It is Elixir-shaped. The API leaks the abstraction underneath. And if you want idiomatic Gleam, typed errors, opaque builders, the use syntax doing its thing... you are fighting against the grain of it.
Gleam deserved something native. Not a translation layer. A library that starts from what Gleam is good at.
That is the itch. I scratched it.
Meet Pago
Every good library deserves a mascot. testcontainer has Pago.
Pago is a paguro, a hermit crab. He carries a Docker container on his shell. He does not complain about it. He just carries it, cleans it up when the test is done, and goes home.
That is the philosophy in one image. The container lifecycle is something your tests should carry without thinking about, not something they should wrestle with. You declare what you need, you write your assertions, you close the use block, and Pago handles the rest.
Let's start: How it works
The core API is a use block. You describe a container with a builder, you start it with with_container, and it is gone when the block ends. Automatic cleanup, even if your test process crashes, because a linked guard process is watching in the background.
import testcontainer
import testcontainer/container
import testcontainer/port
import testcontainer/wait
pub fn redis_test() {
  use redis <- testcontainer.with_container(
    container.new("redis:7-alpine")
      |> container.expose_port(port.tcp(6379))
      |> container.wait_for(wait.log("Ready to accept connections")),
  )

  let assert Ok(host_port) = container.host_port(redis, port.tcp(6379))
  // connect to 127.0.0.1:host_port
  Ok(Nil)
}
A few things worth pointing out here. Ports are typed: port.tcp/1 and port.udp/1 are different things. The builder is opaque, so you cannot pass a half-constructed ContainerSpec somewhere it does not belong. Wait strategies are composable. Errors always carry context.
The library also talks to Docker over the Unix socket directly via gen_tcp, no HTTP client pulled in as a dependency. Fast. Lightweight. No surprises in your dep tree.
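For the curious, "talking to Docker over the socket" just means writing plain HTTP/1.1 to /var/run/docker.sock. A sketch of the kind of request involved (the endpoint is real Docker Engine API, but the version prefix varies by install, and this is an illustration of the protocol, not the library's literal internals):

```http
POST /v1.43/containers/create HTTP/1.1
Host: localhost
Content-Type: application/json
Content-Length: 27

{"Image": "redis:7-alpine"}
```

The daemon answers with a container ID, and a second request to /containers/{id}/start brings it up. Nothing here needs more than a socket and a string, which is why gen_tcp is enough.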
Formulas: the part that actually excited me
Here is where things get interesting.
A raw container gives you a running process and a mapped port. If you are starting Postgres, that means you get back a host and a port number. And then every single test file has to reassemble a connection URL from scratch: host, port, user, password, database, driver prefix. It is noise. It is copy-pasted noise.
The solution is what I call a Formula.
pub opaque type Formula(output)
A Formula(output) is two things combined: a ContainerSpec that describes how to start the container, and an extraction function that takes the running Container and produces a typed output. When you call with_formula, the library starts the container, runs the extraction, and hands your test body a fully typed service record.
import testcontainer
import testcontainer_formulas/postgres
pub fn user_repository_test() {
  use pg <- testcontainer.with_formula(
    postgres.new()
      |> postgres.with_database("myapp_test")
      |> postgres.with_password("secret")
      |> postgres.formula(),
  )

  // pg is a PostgresContainer
  // pg.connection_url, pg.host, pg.port: all there, already built
  Ok(Nil)
}
The output type parameter is the interesting bit. Formula(PostgresContainer) and Formula(RedisContainer) are different types. The compiler knows. You cannot accidentally pass one where the other is expected. No runtime surprise, no casting, just the type system doing its job.
The extraction function is a small contract:
pub fn new(
  spec: container.ContainerSpec,
  extract: fn(container.Container) -> Result(output, error.Error),
) -> Formula(output)
You get a running container. You return your typed output or an error. That is the entire surface of the abstraction.
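To make the size of that contract concrete, here is roughly what a hand-rolled Redis formula could look like. The module path for the constructor and the RedisInfo type are my assumptions for illustration; only the shape follows the contract above:

```gleam
import testcontainer/container
import testcontainer/formula // assumed module path for the Formula constructor
import testcontainer/port
import testcontainer/wait

// The typed output your test body will receive. Hypothetical type,
// named here for the example.
pub type RedisInfo {
  RedisInfo(host: String, port: Int)
}

pub fn redis_formula() -> formula.Formula(RedisInfo) {
  formula.new(
    // How to start the container.
    container.new("redis:7-alpine")
      |> container.expose_port(port.tcp(6379))
      |> container.wait_for(wait.log("Ready to accept connections")),
    // How to turn the running container into a typed value.
    fn(running) {
      case container.host_port(running, port.tcp(6379)) {
        Ok(mapped) -> Ok(RedisInfo(host: "127.0.0.1", port: mapped))
        Error(e) -> Error(e)
      }
    },
  )
}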
Why formulas live in a separate package
The core library defines Formula(output) and with_formula. That is all it knows. It has no idea what Postgres is.
The actual formulas live in testcontainer_formulas, a companion package that ships with:
- testcontainer_formulas/postgres
- testcontainer_formulas/redis
- testcontainer_formulas/mysql
- testcontainer_formulas/rabbitmq
- testcontainer_formulas/mongo
The separation is intentional, and the reason is not packaging convenience. It is about the community.
The formula surface is small enough that anyone can write one. The pattern is clear. You define a config type, a builder pipeline, an extraction function, and you are done. That means if you need Kafka, Elasticsearch, LocalStack, a very specific internal service, something completely bizarre, you can open a PR against testcontainer_formulas and it just fits. No changes needed to the core. No coordination required with me. The abstraction holds.
I want testcontainer_formulas to grow into a community-curated archive. Official formulas, opinionated formulas, weird ones. The contract is small enough that this is realistic. If you have an idea for one, open a PR. That is what the repository is for.
The Formula Builder
There is a third piece: testcontainer_formulas_builder.
It is a block-based visual tool. You add blocks for the services you need (Postgres, Redis, MySQL, RabbitMQ, MongoDB, or a custom module), configure each one, set up the shared network if needed, and it generates the Gleam code as you go. You can copy the generated code directly with y in Vim navigation mode. Yes, there is a Vim navigation mode, because of course there is.
There is a live version you can try right now, tagged as experimental: lupodevelop.github.io/testcontainer_formulas_builder
It is aimed at people who want to get a working formula snippet without reading all the docs first, or who want to understand the structure before writing one from scratch. Either way it lowers the barrier for contribution, which is the whole point.
Multi-container setups
For integration tests that need multiple services talking to each other, there are networks and stacks.
use net <- testcontainer.with_stack(
  testcontainer.stack("app-test-net", fn(n) { Ok(n) }),
)

use pg <- testcontainer.with_formula(
  postgres.new()
    |> postgres.on_network(net)
    |> postgres.formula(),
)
// pg and any other container share the same Docker network
Stacks own the network lifecycle. Each container inside still gets its own guard process, so teardown is ordered and nothing leaks.
Get started
gleam add testcontainer
gleam add testcontainer_formulas
- Core library: hex.pm/packages/testcontainer
- Formulas: hex.pm/packages/testcontainer_formulas
- Formula Builder: github.com/lupodevelop/testcontainer_formulas_builder
- Docs: hexdocs.pm/testcontainer
If you write a formula for something not in the package yet, open a PR. That is exactly what testcontainer_formulas is there for.
Pago is watching. Pago approves.
And if you want to support the work itself, my Ko-fi is waiting for you.
