Recently, I was working on a microservice that used Postgres as its database. We had great unit test coverage in the project, and everything seemed fine. We did our first deployment and shared the Swagger with the frontend team, and they started integrating it with our service.
That’s when we discovered some bugs related to how our tables were structured and how their relationships behaved in real scenarios.
It became clear that unit tests alone weren't enough. We needed integration tests that exercised the actual database so we could catch these issues earlier.
So I decided to write integration tests that would run during our CI pipeline.
Since we already had a Docker Compose setup to run Postgres and Flyway locally, I just extended our pipeline to install Docker Compose on the runner machine and start the database with that existing setup.
before_script:
- apk add --no-cache make build-base git docker docker-compose g++ curl
- docker-compose -f ./build/docker/localstack/docker-compose.yml up -d
It worked as expected, and the tests passed.
But there was one major downside: test execution time ballooned. A suite that used to take just 2 minutes now took around 8, largely due to the overhead of installing the necessary tools, pulling the images, starting them with Docker Compose, and waiting for everything to be ready.
As time passed, every new integration test made the CI pipeline a little slower.
We could have persisted the docker:dind volume to avoid pulling the images on every job; however, we would still have to wait for the tools to install and for the containers to be up before our tests could start.
Integration tests are essential, but not at the cost of a frustrating developer experience.
That’s when I found Testcontainers, a library (testcontainers-go in the Go ecosystem) that lets you spin up real containers (Postgres, Redis, and so on) directly from your tests. You don’t need an external Docker Compose file or to start and stop services manually; everything is handled in code.
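To give a feel for what that looks like, here is a minimal, self-contained sketch (not our production code; the image and names are illustrative) that starts a throwaway Redis container straight from a Go test:
package example_test

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWithRedis(t *testing.T) {
	ctx := context.Background()

	// Describe the container we want: image, exposed port, and a readiness check.
	req := testcontainers.ContainerRequest{
		Image:        "redis:7",
		ExposedPorts: []string{"6379/tcp"},
		WaitingFor:   wait.ForListeningPort("6379/tcp"),
	}

	// Create and start it directly from the test; no Docker Compose file involved.
	redisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		t.Fatal(err)
	}
	// Tear the container down when the test finishes.
	t.Cleanup(func() { _ = redisC.Terminate(ctx) })

	endpoint, err := redisC.Endpoint(ctx, "") // host:port reachable from the test
	if err != nil {
		t.Fatal(err)
	}
	t.Log("redis available at", endpoint)
}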
Setting up Testcontainers with Go
Since we already used Flyway to manage our migration files, we could hand those same files to Testcontainers as init scripts, so we didn't even need a Flyway container to bring up our PostgreSQL.
// Collect the configuration scripts and the Flyway migration files.
configurationFiles, err := filepath.Glob("../../scripts/*.sql")
assert.NoError(t, err)
migrationFiles, err := filepath.Glob("../../migration/flyway/sql/*.sql")
assert.NoError(t, err)
migration := append(configurationFiles, migrationFiles...)

// Start Postgres with every SQL file passed as an init script.
container, err := postgres.Run(ctx,
	"docker.io/postgres:14",
	postgres.WithDatabase(dbName),
	postgres.WithUsername(dbUser),
	postgres.WithPassword(dbPassword),
	testcontainers.WithWaitStrategy(
		wait.ForLog("database system is ready to accept connections").WithOccurrence(2),
		wait.ForExposedPort(),
	),
	postgres.WithInitScripts(migration...),
)
assert.NoError(t, err)

err = container.Start(ctx)
assert.NoError(t, err)
Now my integration tests just grab a connection from the container that Testcontainers started, and they run against a real Postgres instance.
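For reference, this is roughly how a test obtains that connection; a minimal sketch assuming the postgres module's ConnectionString helper and a standard database/sql driver (the driver choice is illustrative):
// Requires "database/sql" and a registered Postgres driver, e.g. _ "github.com/lib/pq".
connStr, err := container.ConnectionString(ctx, "sslmode=disable")
assert.NoError(t, err)

db, err := sql.Open("postgres", connStr)
assert.NoError(t, err)
defer db.Close()

// Sanity check: the test really is talking to the containerized database.
assert.NoError(t, db.PingContext(ctx))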
Since our tests are designed to run in parallel, other tests keep executing while a container is starting, which hides most of the startup cost and means we don't have to orchestrate container startup and teardown outside the tests themselves.
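In practice that just means marking each integration test as parallel; a rough sketch, where newTestDatabase is a hypothetical helper wrapping the container setup shown above:
func TestOrderRepository(t *testing.T) {
	t.Parallel() // while this test's container boots, other parallel tests keep running

	ctx := context.Background()
	db := newTestDatabase(ctx, t) // hypothetical helper: starts Postgres and returns a *sql.DB
	defer db.Close()

	// ... exercise the repository against the real database ...
}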
An added benefit is that Testcontainers talks to the same Docker daemon as our docker:dind service, which means images are cached and reused across test runs without any tweaks to our pipeline configuration.
As a result, our total test execution time dropped back down to around 2 minutes, with all the benefits of real integration tests, and none of the overhead from our previous Docker Compose setup.
Our pipeline was fast again, the project was deployed to production, and development of new features resumed smoothly. However, as the number of Flyway migration files grew, we encountered a new issue.
Since we weren't using a dedicated Flyway container to manage schema creation, just passing the SQL scripts to the Postgres container as init scripts, we began to see problems with the execution order of the migration files: some scripts ran out of sequence, which caused inconsistencies in the database schema during tests.
I even tried to manually reorder the migration files before passing them to the container's entrypoint, but the container still processed them in its own order, ignoring the changes.
// Attempted fix: sort the files by their Flyway version number (V<number>__...).
func extractVersion(filename string) int {
	base := filepath.Base(filename)
	version := regexp.MustCompile(`V(\d+)__`).FindStringSubmatch(base)
	if len(version) != 2 {
		return 0
	}
	v, _ := strconv.Atoi(version[1])
	return v
}

...

sort.SliceStable(migration, func(i, j int) bool {
	return extractVersion(migration[i]) < extractVersion(migration[j])
})
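It turned out the reorder could never work: the Postgres image's entrypoint executes everything in /docker-entrypoint-initdb.d sorted by filename, and with Flyway's V<number>__ naming that order breaks as soon as versions reach double digits. A tiny illustration (the filenames are made up):
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Sorted the way a shell glob (and therefore the Postgres entrypoint) sees them.
	files := []string{"V1__init.sql", "V2__orders.sql", "V10__add_index.sql"}
	sort.Strings(files)
	fmt.Println(files) // [V1__init.sql V10__add_index.sql V2__orders.sql] -- V10 runs before V2
}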
So, to fix this issue with the order of the migration files, I configured a Flyway container (with testcontainers-go-flyway) to run alongside the Postgres container within a shared network, ensuring that migrations were executed in the correct order before the tests ran.
// Create a dedicated Docker network so Flyway can reach Postgres by alias.
nw, err := tcnetwork.New(ctx)
assert.NoError(t, err)

// Only the non-versioned configuration scripts remain as init scripts now.
configurationFiles, err := filepath.Glob("../../scripts/*.sql")
assert.NoError(t, err)

container, err := postgres.Run(ctx,
	"docker.io/postgres:14",
	postgres.WithDatabase(dbName),
	postgres.WithUsername(dbUser),
	postgres.WithPassword(dbPassword),
	tcnetwork.WithNetwork([]string{dbSrv}, nw),
	testcontainers.WithWaitStrategy(
		wait.ForLog("database system is ready to accept connections").WithOccurrence(2),
		wait.ForExposedPort(),
	),
	postgres.WithInitScripts(configurationFiles...),
)
assert.NoError(t, err)

err = container.Start(ctx)
assert.NoError(t, err)

// Run Flyway against Postgres over the shared network; dbPort here should be the
// container-internal port (5432), since Flyway connects through the Docker network
// rather than a host-mapped port.
flywayContainer, err := flyway.RunContainer(ctx,
	"flyway/flyway:10",
	tcnetwork.WithNetwork([]string{"flyway"}, nw),
	flyway.WithDatabaseUrl(fmt.Sprintf("jdbc:postgresql://%s:%d/%s?sslmode=disable", dbSrv, dbPort, dbName)),
	flyway.WithUser(dbUser),
	flyway.WithPassword(dbPassword),
	flyway.WithMigrations("../../migration/flyway/sql"),
)
assert.NoError(t, err, "failed to run container")
Benefits
- Faster feedback loop: containers are spun up just for the test run and destroyed right after, so there's no leftover state and no need to clean up data manually (see the cleanup sketch after this list).
- No dependency on external services in CI: we removed the need for a global Docker Compose service in our pipeline.
- Better test isolation: each test file can have its own database instance if needed.
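To make the "destroyed right after" part concrete, here is a minimal cleanup sketch, assuming the container variable from the setup above; Terminate is the standard testcontainers-go teardown call:
// Register teardown as soon as the container is up, so even failing tests clean up after themselves.
t.Cleanup(func() {
	if err := container.Terminate(ctx); err != nil {
		t.Logf("failed to terminate container: %v", err)
	}
})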
Tips
- Use context timeouts to avoid hanging containers in CI.
- Prefer a shared container (a testify suite works well for this) if you're running many tests and don't need strict isolation; the sketch below combines both tips.
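Here is a rough sketch of that shared-container approach with testify's suite package and a startup timeout; the database name, credentials, and suite names are illustrative:
package repository_test

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/suite"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

type RepositorySuite struct {
	suite.Suite
	container *postgres.PostgresContainer
	connStr   string
}

func (s *RepositorySuite) SetupSuite() {
	// Bound container startup so a stuck image pull or wait strategy fails the suite instead of hanging CI.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	container, err := postgres.Run(ctx,
		"docker.io/postgres:14",
		postgres.WithDatabase("testdb"),
		postgres.WithUsername("test"),
		postgres.WithPassword("test"),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").WithOccurrence(2),
			wait.ForExposedPort(),
		),
	)
	s.Require().NoError(err)
	s.container = container

	connStr, err := container.ConnectionString(ctx, "sslmode=disable")
	s.Require().NoError(err)
	s.connStr = connStr
}

func (s *RepositorySuite) TearDownSuite() {
	// One container serves the whole suite, so it is terminated once at the end.
	s.Require().NoError(s.container.Terminate(context.Background()))
}

func (s *RepositorySuite) TestSomething() {
	// Every test in the suite reuses the same database via s.connStr.
	s.NotEmpty(s.connStr)
}

func TestRepositorySuite(t *testing.T) {
	suite.Run(t, new(RepositorySuite))
}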
Final thoughts
Testcontainers helped us bridge the gap between fast unit tests and realistic integration tests, without sacrificing our pipeline speed. If you’re working with databases or external services in Go, this library is worth exploring.