We recently made a push to increase our API test coverage. We’re a Go shop here at Nuon, and use the Gin web framework along with GORM for our APIs, and Temporal to execute workflows. While we had a good number of unit tests, we felt we really needed integration tests. We wanted these tests to exercise our API endpoints and also verify the data was saved correctly while still mocking out calls to other systems, like Temporal. If an API creates a Temporal workflow, we wanted a separate test to verify the workflow logic was correct on the other end. Fx makes it easy for us to manage our app dependencies and decouple these tests.
What Is Fx?
Fx is a dependency injection (DI) framework for Go, developed at Uber. At its core, it simplifies the problem of wiring dependencies together. Instead of manually constructing and passing dependencies through every layer of your application, you map types to constructors. When something depends on a type, Fx figures out how to provide it and injects it automatically. This is especially valuable with deeply nested structures. Without DI, threading a new dependency through multiple layers of "Russian doll" constructors quickly becomes repetitive. As our own dependency graph grew, Fx enabled us to group related dependencies into domain-specific modules.
How We Wire Our API With Uber Fx
Our control plane API (ctl-api) is a Gin web service built with Temporal. Gin handles HTTP routing and middleware, while Temporal orchestrates async workflows.
- 15 domain service packages (accounts, apps, components, installs, orgs, runners, actions, and more), each with its own HTTP handlers, helpers, and Temporal workers
- ~480 HTTP endpoint routes spread across 5 API servers (public, runner, internal, auth, admin dashboard)
- 7 Temporal worker namespaces handling async workflows for every domain
- 28 HTTP middlewares (auth, CORS, pagination, audit logging, tracing, chaos testing, etc.)
- 25+ infrastructure providers (PostgreSQL, ClickHouse, Temporal, S3, GitHub, Terraform, metrics, and more)
Without DI, wiring this up manually would look something like this:
// Without FX: manual construction and threading of dependencies
cfg := internal.NewConfig()
logger := log.New(cfg)
db := psql.New(validator, logger, metrics, lifecycle, cfg)
temporalClient := temporal.New(cfg, dataConverter)
evClient := eventloop.New(validator, logger, temporalClient, metrics, cfg, db)
tfClient := terraform.New(cfg, logger)
helpers := componentshelpers.New(db, logger, validator)
// ... repeat for every dependency, in the right order
componentsService := componentsservice.New(componentsservice.Params{
V: validator, L: logger, DB: db, MW: metrics, Cfg: cfg,
Helpers: helpers, EvClient: evClient, TfClient: tfClient,
// ... and so on for every field
})
With Fx, each constructor declares what it needs and what it provides. The framework resolves the graph:
fx.New(
fxmodules.InfrastructureModule, // config, DBs, temporal, logging
fxmodules.HelpersModule, // domain-specific shared logic
fxmodules.MiddlewaresModule, // 28 HTTP middlewares
fxmodules.AllServicesModule, // 15 domain services
fxmodules.AllAPIsModule, // 5 API servers
).Run()
No manual ordering, no constructor argument threading. This gives us a single, composable dependency graph that scales as we add more domains.
How Our Codebase Is Organized
Our code follows a domain-driven layout. Each business domain (apps, installs, components, runners, etc.) owns everything it needs: HTTP handlers, business logic helpers, Temporal workers, and signal definitions.
internal/app/{domain}/
├── service/ # HTTP handlers and route registration
├── helpers/ # Shared business logic (cross-domain)
├── worker/ # Temporal workflows and activities
└── signals/ # Event and signal definitions
Alongside the domains, shared infrastructure lives in internal/pkg/:
internal/pkg/
├── db/psql/ # PostgreSQL adapter (wraps GORM)
├── db/ch/ # ClickHouse adapter
├── temporal/ # Temporal client adapter
├── eventloop/ # Async event dispatch adapter
├── terraform/ # Terraform client adapter
├── authz/ # RBAC authorization
├── features/ # Feature flags
└── ... # ~15 more shared packages
We call these shared packages adapters. Each wraps an external dependency behind an interface and exposes a simple New constructor that Fx can resolve. This is a key design decision. The rest of our code never imports a database driver or Temporal SDK directly. It depends on the adapter's interface, and Fx wires in the implementation.
Adapters and How We Wrap Dependencies
Every external dependency follows the same adapter pattern. Take our PostgreSQL connection:
// pkg/db/psql/db.go - The constructor declares its dependencies via function args
func New(v *validator.Validate, l zapgorm2.Logger, mw metrics.Writer,
lc fx.Lifecycle, cfg *internal.Config) (*gorm.DB, error) {
// ... build connection and configure the pool; the elided code also
// constructs the `database` wrapper used by the hooks below ...
db, err := gorm.Open(postgres.New(postgresCfg), gormCfg)
// Lifecycle hooks manage startup/shutdown
lc.Append(fx.Hook{
OnStart: func(_ context.Context) error { database.startPoolBackgroundJob(); return nil },
OnStop: func(_ context.Context) error { database.stopPoolBackgroundJob(); return nil },
})
return db, err
}
// pkg/db/psql/fx.go - The adapter function tags the result for named injection
func AsPSQL(f any) any {
return fx.Annotate(f, fx.ResultTags(`name:"psql"`, `name:"dbs"`))
}
The AsPSQL wrapper is what we call an adapter function. It decorates the constructor with Fx annotations so the resulting *gorm.DB is tagged with a name. This lets us inject multiple databases (PostgreSQL and ClickHouse) into the same graph without collision:
var InfrastructureModule = fx.Module("infrastructure",
fx.Provide(psql.AsPSQL(psql.New)), // provides *gorm.DB named "psql"
fx.Provide(ch.AsCH(ch.New)), // provides *gorm.DB named "ch"
// ...
)
This pattern repeats across all our adapters: AsGzip, AsLargePayload, AsS3Payload, AsService, AsMiddleware, AsWorker. Each is a thin annotation wrapper that maps a constructor into the Fx graph with the right tags.
Fx Lifecycle Hooks
Fx gives every component a lifecycle: OnStart runs when the application boots, OnStop runs during graceful shutdown. This is how we manage background goroutines, connection pools, and worker loops without manual orchestration.
Our database connection pool starts a background token-refresh job on startup and tears it down on stop. Our Temporal workers register themselves into the lifecycle so they shut down gracefully. We don't have a main() function full of defer and go statements; the lifecycle is declarative.
How to Consume Fx Dependencies
Every component that needs dependencies declares a Params struct embedding fx.In. Here's a real domain service:
type Params struct {
fx.In
DB *gorm.DB `name:"psql"` // named injection for PostgreSQL
L *zap.Logger
V *validator.Validate
Helpers *helpers.Helpers
EvClient eventloop.Client // interface, not concrete type
TfClient terraform.Client // interface, not concrete type
}
func New(params Params) *service {
return &service{db: params.DB, l: params.L, evClient: params.EvClient, ...}
}
Notice that EvClient and TfClient are interfaces. The production InfrastructureModule provides real Temporal-backed and Terraform implementations. But nothing stops us from providing a mock instead. That's the entire basis for how our testing works, and why Fx was worth adopting; we can swap out the Postgres dependency, the Temporal client, or any adapter without changing a single line in this code.
Collecting Services with Fx Groups
With 15 domain services, we don't want to manually pass each one to the API server. Fx groups solve this. Each domain service is annotated to join a "services" group, and the API server receives all of them as a slice:
// AsService annotates any constructor to join the "services" group
func AsService(f any) any {
return fx.Annotate(f, fx.As(new(Service)), fx.ResultTags(`group:"services"`))
}
// Registration in services.go
fx.Provide(api.AsService(appsservice.New)),
fx.Provide(api.AsService(componentsservice.New)),
fx.Provide(api.AsService(installsservice.New)),
// ... 12 more domain services
// The API server receives them all
type Params struct {
fx.In
Services []Service `group:"services"`
Middlewares []middlewares.Middleware `group:"middlewares"`
}
The same pattern collects our 28 middlewares and 7 Temporal workers into their respective groups.
Composing for Different Deployment Targets
Each deployment mode composes different modules. In development, we run all five API servers in one process:
fx.New(
fxmodules.InfrastructureModule,
fxmodules.HelpersModule,
fxmodules.MiddlewaresModule,
fxmodules.AllServicesModule, // all 15 domain services
fxmodules.AllAPIsModule, // public + runner + internal + auth + admin
).Run()
In production, each API runs independently with only the modules it needs:
fx.New(
fxmodules.InfrastructureModule,
fxmodules.HelpersModule,
fxmodules.MiddlewaresModule,
fxmodules.PublicServicesModule,
fxmodules.PublicAPIModule,
).Run()
This composability extends to workers, too. Each worker namespace gets its own module that bundles activities, workflows, and the worker registration:
var ComponentsWorkerModule = fx.Module("worker-components",
fx.Provide(componentsactivities.New),
fx.Provide(componentsworker.NewWorkflows),
fx.Provide(worker.AsWorker(componentsworker.New)),
)
The Fx container becomes a single source of truth for the entire application. Every dependency, every lifecycle hook, every service and worker is registered in the same graph. That's what makes it possible to build a test-specific version of the same graph, which is where fxtest comes in.
Setting Up fxtest for API Integration Tests
Testing is where the adapter pattern pays off. Because every dependency is behind an interface and registered through the Fx container, fxtest lets us build an alternative dependency graph that swaps specific providers without changing any of the code under test. We keep the real database connections, the real GORM models, the real Gin router, and the real domain logic. We only replace the things that would reach outside our service boundary: Temporal, GitHub, Terraform. The test graph is a near-identical copy of production, with targeted substitutions, which lets us write focused integration tests against a much smaller slice of the system. For API tests, we write through to the database but keep the test boundary before Temporal: we verify that database rows are written and that the events to create Temporal workflows are emitted, but not the Temporal workflows themselves (those get their own workflow-focused tests).
The Fx Test Configuration
We centralized all test Fx wiring into a single function: CtlApiFXOptionsWithMocks. This is the test-side equivalent of our production InfrastructureModule. It registers the same dependencies but replaces external services (Temporal, GitHub, Terraform) with configurable mocks:
func CtlApiFXOptionsWithMocks(opts TestOpts) []fx.Option {
options := []fx.Option{
fx.WithLogger(NopFxLogger),
// Real config, logging, databases
fx.Provide(internal.NewConfig),
fx.Provide(log.New),
fx.Provide(psql.AsPSQL(psql.New)),
fx.Provide(ch.AsCH(ch.New)),
// Domain helpers, fixtures, DB init...
fx.Invoke(db.DBGroupParam(func([]*gorm.DB) {})),
}
// Each mock falls back to a sensible default if not provided
if opts.Mocks != nil && opts.Mocks.MockTC != nil {
options = append(options,
fx.Supply(fx.Annotate(opts.Mocks.MockTC,
fx.As(new(temporalclient.Client)))),
)
} else {
ctrl := gomock.NewController(opts.T)
options = append(options,
fx.Supply(fx.Annotate(temporalclient.NewMockClient(ctrl),
fx.As(new(temporalclient.Client)))),
)
}
// ... same pattern for MockEv, MockTF
return options
}
Real infrastructure, fake externals. Databases, config, logging, helpers, and validation all run for real. Only clients that talk to external services (Temporal, GitHub, Terraform) get swapped.
Smart defaults. If a test doesn't care about a specific mock, sensible defaults are created automatically. A test that's focused on database behavior doesn't need to wire up a Temporal mock. It gets one for free.
fx.Supply + fx.Annotate for mocks. Pre-instantiated mock objects are injected via fx.Supply rather than fx.Provide, and fx.As(new(eventloop.Client)) ensures they satisfy the correct interface.
Test Router
Tests don't run the full API server. Instead, they create a minimal Gin test router with just the middlewares needed for request processing:
type RouterOptions struct {
L *zap.Logger
DB *gorm.DB
TestOrg *app.Org
TestAcc *app.Account
}
func NewTestRouter(opts RouterOptions) *gin.Engine {
router := gin.New()
// Only the middlewares that matter for request processing
router.Use(stderr.New(opts.L, nil).Handler())
router.Use(patcher.New(patcher.Params{}).Handler())
router.Use(pagination.New(pagination.Params{}).Handler())
// Inject test org and account into request context
// (replaces auth middleware that would normally extract from JWT)
router.Use(func(c *gin.Context) {
if opts.TestOrg != nil {
cctx.SetOrgGinContext(c, opts.TestOrg)
}
if opts.TestAcc != nil {
cctx.SetAccountGinContext(c, opts.TestAcc)
}
c.Next()
})
return router
}
This is where authentication gets sidestepped cleanly. Instead of mocking an auth provider, we inject the authenticated user directly into the Gin context, exactly where the auth middleware would normally put it.
Using fxtest.New
With all of this in place, constructing the Fx container in a test is straightforward. fxtest.New validates the entire dependency graph at test startup and wires everything together:
// Build FX options with mocks
options := append(
tests.CtlApiFXOptionsWithMocks(tests.TestOpts{
T: s.T(),
Mocks: &tests.TestMocks{MockEv: s.mockEvClient},
CustomValidator: true,
}),
fx.Provide(New), // service under test
fx.Populate(&s.deps, &s.componentsService), // extract dependencies
)
// fxtest.New validates the graph and starts all lifecycle hooks
s.fxApp = fxtest.New(s.T(), options...)
s.fxApp.RequireStart()
fx.Populate is the bridge between the Fx container and your test struct. It extracts resolved dependencies so your test code can use them directly. The fxtest.App also integrates with testing.TB, so any Fx startup errors become test failures automatically.
How Our Integration Test Suites Are Constructed
With the fxtest infrastructure in place, here's how we construct a complete integration test suite for an API endpoint. We'll use our components service as a concrete example.
Test Suite Structure
Every endpoint test suite follows the same three-layer pattern: a dependency struct, a testify suite, and lifecycle hooks.
// 1. Dependency struct. FX populates this from the container.
type ComponentsTestDeps struct {
fx.In
DB *gorm.DB `name:"psql"`
CHDB *gorm.DB `name:"ch"`
V *validator.Validate
L *zap.Logger
MW metrics.Writer
Seeder *testseed.Seeder
}
// 2. Suite struct. Embeds BaseDBTestSuite for automatic DB lifecycle.
type ComponentsServiceTestSuite struct {
tests.BaseDBTestSuite
fxApp *fxtest.App
deps ComponentsTestDeps
componentsService *service
router *gin.Engine
ctx context.Context
testOrg *app.Org
testAcc *app.Account
testApp *app.App
testAppConfig *app.AppConfig
mockEvClient *tests.MockEventLoopClient
}
// 3. Entry point. Skip unless running integration tests.
func TestComponentsServiceSuite(t *testing.T) {
	// The env guard keeps integration tests out of plain `go test` runs.
	// It will only run in CI or when explicitly requested locally.
if os.Getenv("INTEGRATION") != "true" {
t.Skip("INTEGRATION is not set, skipping")
}
suite.Run(t, new(ComponentsServiceTestSuite))
}
Suite Lifecycle
SetupSuite runs once per test class. It creates the Fx container, starts it, and stores the DB handle. Database creation and migrations are handled separately by our internal testing tool before Go tests are executed.
func (s *ComponentsServiceTestSuite) SetupSuite() {
s.BaseDBTestSuite.SetupSuite()
gin.SetMode(gin.TestMode)
s.mockEvClient = tests.NewMockEventLoopClient()
options := append(
tests.CtlApiFXOptionsWithMocks(tests.TestOpts{
T: s.T(),
Mocks: &tests.TestMocks{MockEv: s.mockEvClient},
CustomValidator: true,
}),
fx.Provide(New),
fx.Populate(&s.deps, &s.componentsService),
)
s.fxApp = fxtest.New(s.T(), options...)
s.fxApp.RequireStart()
s.SetDB(s.deps.DB) // store DB reference for use in tests
}
SetupTest runs before every individual test. It resets mocks, seeds fresh test data, and creates a new router:
func (s *ComponentsServiceTestSuite) SetupTest() {
s.BaseDBTestSuite.SetupTest()
s.mockEvClient.Reset()
s.setupTestData()
s.router = tests.NewTestRouter(tests.RouterOptions{
L: s.deps.L,
DB: s.deps.DB,
TestOrg: s.testOrg,
TestAcc: s.testAcc,
})
err := s.componentsService.RegisterPublicRoutes(s.router)
require.NoError(s.T(), err)
}
// setupTestData uses the FX-injected Seeder to create fresh test entities.
func (s *ComponentsServiceTestSuite) setupTestData() {
s.ctx = context.Background()
s.ctx, s.testAcc = s.deps.Seeder.EnsureAccount(s.ctx, s.T())
s.ctx, s.testOrg = s.deps.Seeder.EnsureOrg(s.ctx, s.T())
s.testApp = s.deps.Seeder.CreateApp(s.ctx, s.T())
s.testAppConfig = s.deps.Seeder.CreateAppConfig(s.ctx, s.T(), s.testApp.ID)
}
Test Data Isolation
Initially, we truncated all tables between tests so no data leaked from one test to the next. This gave us a clean database for every test without the cost of DROP/CREATE cycles.
func TruncateAllTables(ctx context.Context, db *gorm.DB) error {
	models := psql.AllModels()
	tableNames := make([]string, 0, len(models))
	for _, model := range models {
		stmt := &gorm.Statement{DB: db}
		if err := stmt.Parse(model); err != nil {
			return err
		}
		tableNames = append(tableNames, fmt.Sprintf(`"%s"`, stmt.Schema.Table))
	}
	sql := fmt.Sprintf("TRUNCATE TABLE %s RESTART IDENTITY CASCADE",
		strings.Join(tableNames, ", "))
	return db.WithContext(ctx).Exec(sql).Error
}
Truncating the database between tests forced our suite into serial execution: if tests run in parallel, a truncation mid-test creates a race condition. To keep things fast and avoid friction both locally and in CI, we opted for logical isolation instead.
Each test creates its own org, scopes all queries to that org's context, and uses unique names derived from its ID. Prior test data remains in the database, but it's never visible to other tests. The tradeoff is worth it. A bit more discipline per test in exchange for a massively faster, fully parallelized suite.
func (s *AppsTestSuite) setupTestData() {
	ctx := context.Background()
	ctx, s.testAcc = s.service.Seeder.EnsureAccount(ctx, s.T())
	_, s.testOrg = s.service.Seeder.EnsureOrg(ctx, s.T())
	//...
	id := domains.NewAppID()
	testApp := &app.App{
		ID:          id,
		Name:        fmt.Sprintf("app-%s", id), // unique name derived from the ID
		OrgID:       s.testOrg.ID,
		CreatedByID: s.testAcc.ID,
	}
}
Making HTTP Requests
Tests exercise endpoints through httptest.ResponseRecorder without going over the network:
func (s *ComponentsServiceTestSuite) makeRequest(
method, path string, body interface{},
) *httptest.ResponseRecorder {
var reqBody *bytes.Buffer
if body != nil {
jsonBytes, err := json.Marshal(body)
require.NoError(s.T(), err)
reqBody = bytes.NewBuffer(jsonBytes)
} else {
reqBody = bytes.NewBuffer(nil)
}
req, err := http.NewRequest(method, path, reqBody)
require.NoError(s.T(), err)
req.Header.Set("Content-Type", "application/json")
rr := httptest.NewRecorder()
s.router.ServeHTTP(rr, req)
return rr
}
Writing the Tests
With all the setup handled by the suite, individual tests are focused and readable, written against the defined inputs and outputs of the API. We can exercise every code path in our Gin API while external resources stay mocked, with the exception of PostgreSQL and ClickHouse: at this layer we only validate our GORM setups and our interaction with the databases.
Table-Driven Tests for CreateComponent
func (s *ComponentsServiceTestSuite) TestCreateComponent() {
path := fmt.Sprintf("/v1/apps/%s/components", s.testApp.ID)
// Seed a dependency component for the dependencies test case
depComponent := s.deps.Seeder.CreateComponent(
s.ctx, s.T(), s.testApp.ID, app.ComponentTypeTerraformModule)
testCases := []struct {
name string
body interface{}
wantStatus int
wantName string
validate func(*app.Component)
}{
{
name: "component with dependencies",
body: CreateComponentRequest{
Name: "foofighters",
VarName: "dep_var",
Dependencies: []string{depComponent.Name},
},
wantStatus: http.StatusCreated,
wantName: "foofighters",
validate: func(c *app.Component) {
assert.Equal(s.T(), "dep_var", c.VarName)
// verify dependency was persisted
var deps []app.ComponentDependency
err := s.deps.DB.WithContext(s.ctx).
Where("component_id = ?", c.ID).
Find(&deps).Error
require.NoError(s.T(), err)
require.Len(s.T(), deps, 1)
assert.Equal(s.T(), depComponent.ID, deps[0].DependencyID)
},
},
{
name: "missing name",
body: map[string]interface{}{"var_name": "foo"},
wantStatus: http.StatusBadRequest,
},
}
for _, tc := range testCases {
s.Run(tc.name, func() {
// reset mock signals before each sub-test
s.mockEvClient.Reset()
rr := s.makeRequest(http.MethodPost, path, tc.body)
require.Equal(s.T(), tc.wantStatus, rr.Code)
if tc.wantStatus == http.StatusCreated {
var resp app.Component
require.NoError(s.T(), json.Unmarshal(rr.Body.Bytes(), &resp))
assert.Equal(s.T(), tc.wantName, resp.Name)
assert.Equal(
s.T(),
app.ComponentStatus("queued"),
resp.Status)
// verify persisted to database
var dbComponent app.Component
err := s.deps.DB.WithContext(s.ctx).First(
&dbComponent, "id = ?", resp.ID).Error
require.NoError(s.T(), err)
assert.Equal(s.T(), tc.wantName, dbComponent.Name)
// verify 3 signals were sent via mock
captured := s.mockEvClient.GetSignals()
require.Len(s.T(), captured, 3, "expected 3 signals")
for _, cs := range captured {
assert.Equal(s.T(), resp.ID, cs.ID,
"signal should target created component")
}
var signalTypes []string
for _, cs := range captured {
sig, ok := cs.Signal.(*signals.Signal)
require.True(s.T(), ok)
signalTypes = append(signalTypes, string(sig.Type))
}
assert.Contains(
s.T(),
signalTypes,
string(signals.OperationCreated))
assert.Contains(
s.T(),
signalTypes,
string(signals.OperationProvision))
assert.Contains(
s.T(),
signalTypes,
string(signals.OperationPollDependencies))
// run custom validation if provided
if tc.validate != nil {
tc.validate(&resp)
}
}
})
}
}
The Payoff
Building our integration test infrastructure on top of Fx has paid dividends in a few concrete ways.
Because every dependency flows through the Fx container, swapping out an external client is a one-liner. Tests that don't care about a specific mock get a sensible default, so test authors only configure the dependencies relevant to what they're testing.
We can also actually trust our GORM interactions. Unit tests with mocked databases give you false confidence. With a real PostgreSQL instance behind every test, we catch constraint violations, bad query assumptions, and subtle GORM gotchas that would otherwise only surface in production.
Finally, new endpoint tests follow a repeatable pattern. Once the Fx scaffold exists, adding coverage for a new service is straightforward. The cognitive overhead of "how do I even set this up" is gone, which means test coverage stops being the thing that gets cut when a deadline is close.
What's Next
Temporal activities call through the same helpers and service layer code that our API tests exercise. So, since we already set up fxtest to test endpoints that interact with Temporal and our event loop signals, the natural next step is reusing some of these test patterns we’ve established to add coverage to Temporal.