Sagar Maheshwary
Go Microservices Boilerplate Series: From Hello World to Production (Part 2)

Welcome back to the Go Microservices Boilerplate series!

In Part One, we laid the foundation with a config loader, structured logger, gRPC server, graceful shutdowns, containerized workflows, and a Makefile. That was enough to get a service running, but most real microservices also need a database layer.

In this part, we’ll integrate PostgreSQL using GORM, set up migrations and seeders, and introduce a service layer with a UserService. We’ll also write integration tests with Testcontainers to ensure our code works against a real Postgres instance.

By the end, you’ll have a complete request → service → database → response flow, with persistent data and tests that give you confidence it all works together.

Database Setup

Almost every microservice needs persistence. In this boilerplate, we’ll use PostgreSQL — a reliable, production-grade relational database.

For quick local development or testing, we’ll also support SQLite. It doesn’t require a running database server, which makes it convenient for unit tests or rapid prototyping. However, PostgreSQL remains the primary focus throughout this series.

Our database code lives in internal/database/:

internal/
├── database/
│   ├── database.go   # opens DB connection (Postgres/SQLite)
│   ├── migrations/   # migration files
│   └── seeder/       # seed data for dev/tests

Configuration

We’ll start by adding database connection settings to our .env file:

# PostgreSQL DSN
DATABASE_DSN=postgres://postgres:password@localhost:5432/boilerplate?sslmode=disable

# Which driver to use (postgres | sqlite)
DATABASE_DRIVER=postgres

# Connection pooling
DATABASE_POOL_MAX_IDLE=10
DATABASE_POOL_MAX_OPEN=100
DATABASE_POOL_MAX_LIFETIME=1h

Next, update config.go with a new Database struct:

type Config struct {
    GRPCServer *GRPCServer `validate:"required"`
    Database   *Database   `validate:"required"`
}

type Database struct {
    DSN                 string        `validate:"required"`
    Driver              string        `validate:"required,oneof=postgres sqlite"`
    PoolMaxIdleConns    int           `validate:"gte=0"`
    PoolMaxOpenConns    int           `validate:"gte=0"`
    PoolConnMaxLifetime time.Duration `validate:"gte=0"` // must be non-negative
}

Finally, load the database config inside NewConfigWithOptions():

    cfg := &Config{
        Database: &Database{
            DSN:                 getEnv("DATABASE_DSN", ""),
            Driver:              getEnv("DATABASE_DRIVER", "postgres"),
            PoolMaxIdleConns:    getEnvInt("DATABASE_POOL_MAX_IDLE", 10),
            PoolMaxOpenConns:    getEnvInt("DATABASE_POOL_MAX_OPEN", 100),
            PoolConnMaxLifetime: getEnvDuration("DATABASE_POOL_MAX_LIFETIME", time.Hour),
        },
        //...
    }

The Database Service

To keep the code modular, we define a simple DatabaseService interface in database.go that abstracts database operations:

type DatabaseService interface {
    DB() *gorm.DB
    Close() error
}

This makes it easy to swap drivers, mock the database in tests, or extend functionality later without rewriting everything.

We then implement NewDatabase to initialize a new database connection:

package database

import (
    "fmt"
    "gorm.io/gorm"
    "gorm.io/driver/sqlite"
    "gorm.io/driver/postgres"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func NewDatabase(opts *Opts) (DatabaseService, error) {
    var (
        db  *gorm.DB
        err error
    )

    switch opts.Config.Driver {
    case "postgres":
        db, err = gorm.Open(postgres.Open(opts.Config.DSN), &gorm.Config{})
    case "sqlite":
        if opts.Config.DSN == "" {
            return nil, fmt.Errorf("invalid DSN: sqlite requires a non-empty DSN")
        }
        db, err = gorm.Open(sqlite.Open(opts.Config.DSN), &gorm.Config{})
    default:
        return nil, fmt.Errorf("unsupported database driver %s", opts.Config.Driver)
    }

    if err != nil {
        return nil, fmt.Errorf("failed to connect to %s: %w", opts.Config.Driver, err)
    }

    sqlDB, err := db.DB()
    if err != nil {
        return nil, fmt.Errorf("failed to get db instance: %w", err)
    }

    // Connection pooling
    sqlDB.SetMaxIdleConns(opts.Config.PoolMaxIdleConns)
    sqlDB.SetMaxOpenConns(opts.Config.PoolMaxOpenConns)
    sqlDB.SetConnMaxLifetime(opts.Config.PoolConnMaxLifetime)

    opts.Logger.Info("Database connected", logger.Field{Key: "driver", Value: opts.Config.Driver})

    return &Database{db: db, Logger: opts.Logger}, nil
}

NewDatabase reads the driver from config, opens the connection using the correct GORM driver, and applies pooling settings for efficient connection reuse under load.

We can now initialize the database in main.go:

package main

import (
    "os"

    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/config"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/database"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func main() {
    log := logger.NewZerologLogger("info", os.Stderr)

    cfg, err := config.NewConfig(log)
    if err != nil {
        log.Fatal(err.Error())
    }

    db, err := database.NewDatabase(&database.Opts{
        Config: cfg.Database,
        Logger: log,
    })
    if err != nil {
        log.Fatal(err.Error())
    }

    //...grpc server etc
}

Closing Connections Gracefully

When the service shuts down, open connections should be released cleanly. The Close() method on our Database type takes care of this:

func (d *Database) Close() error {
    if d == nil || d.db == nil {
        return fmt.Errorf("cannot close: database is not initialized")
    }
    sqlDB, err := d.db.DB()
    if err != nil {
        return err
    }
    return sqlDB.Close()
}

We can then use it in the graceful shutdown pattern from Part 1:

func main() {
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
    defer stop()

    //logger, config, database, grpcServer setup...

    <-ctx.Done()

    grpcServer.Server.GracefulStop()

    if err := db.Close(); err != nil {
        log.Error("failed to close database client", logger.Field{Key: "error", Value: err.Error()})
    }
}

Next, we’ll set up migrations (to version-control schema changes) and seeders (to populate test or development data), followed by a clean service layer built around GORM.

Database Migrations

Schema drift is one of the most common issues in backend projects. When database changes are applied manually, it’s only a matter of time before development, staging, and production databases start to diverge — often leading to subtle bugs that surface at runtime. The best way to prevent this is through migrations: version-controlled SQL scripts that describe schema evolution in a predictable, reversible way.

In this boilerplate, we use golang-migrate/migrate, a production-grade migration tool that integrates well with both automation scripts and CI/CD pipelines. It helps ensure that every environment runs the same schema version — no matter where or when migrations are applied.

To create a new migration, we use the Makefile command:

make migrate-new name=create_users_table

This automatically generates both the “up” and “down” migration files using the proper naming convention:

internal/database/migrations/
├── 000001_create_users_table.up.sql
├── 000001_create_users_table.down.sql

Let’s define a simple users table in our first migration:

-- 000001_create_users_table.up.sql
CREATE TABLE
  IF NOT EXISTS users (
    id BIGSERIAL PRIMARY KEY,
    name VARCHAR(25) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT NOW (),
    updated_at TIMESTAMP
  );

-- 000001_create_users_table.down.sql
DROP TABLE IF EXISTS users;

Applying or rolling back migrations is just as straightforward. The Makefile includes helper commands that wrap golang-migrate, making it easy to run migrations without typing long CLI commands:

make migrate-up dsn="postgres://postgres:password@localhost:5432/boilerplate?sslmode=disable"
make migrate-down dsn="postgres://postgres:password@localhost:5432/boilerplate?sslmode=disable"

This setup keeps your schema versioned, portable, and reproducible — ensuring that every team member and environment stays in sync as the project evolves.
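The exact Makefile targets aren't reproduced in this part, but they could plausibly take the following shape — a sketch, assuming the `migrate` CLI is installed and the target names match the commands used above:

```makefile
MIGRATIONS_DIR := internal/database/migrations

# Usage: make migrate-new name=create_users_table
migrate-new:
	migrate create -ext sql -dir $(MIGRATIONS_DIR) -seq $(name)

# Usage: make migrate-up dsn="postgres://..."
migrate-up:
	migrate -path $(MIGRATIONS_DIR) -database "$(dsn)" up

# Rolls back one migration at a time.
migrate-down:
	migrate -path $(MIGRATIONS_DIR) -database "$(dsn)" down 1
```

The `-seq` flag gives us the zero-padded sequential prefixes (`000001_...`) seen in the directory listing above.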

Database Seeders

Seeders play an essential role in keeping development and testing environments fast, consistent, and predictable. During development, they instantly populate your database with realistic sample data so you can start testing APIs and RPCs without manually inserting rows. In testing, seeders ensure that every run starts from a clean, known state — making your results reproducible across environments and CI pipelines.

All seeders live inside the internal/database/seeder directory:

internal/database/seeder/
├── runner.go
└── user.go

Let’s start simple with a user seeder that inserts a few sample records. This gives our service something to work with right away.

package seeder

import (
    "gorm.io/gorm"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/database/model"
)

func SeedUsers(db *gorm.DB) error {
    users := []model.User{
        {Name: "Alice", Email: "alice@example.com"},
        {Name: "Bob", Email: "bob@example.com"},
    }

    return db.Create(&users).Error
}

To keep things organized, we register all individual seeders in a central runner (runner.go). This way, running one command can execute all seeders in sequence.

package seeder

import (
    "gorm.io/gorm"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

type SeederFunc struct {
    Name string
    Func func(db *gorm.DB) error
}

var seeders = []SeederFunc{
    {Name: "SeedUsers", Func: SeedUsers},
    // Add more seeders here
}

type Opts struct {
    DB  *gorm.DB
    Log logger.Logger
}

func RunAll(opts *Opts) error {
    log := opts.Log

    for _, s := range seeders {
        log.Info("Running seeder: " + s.Name)
        if err := s.Func(opts.DB); err != nil {
            return err
        }
    }

    log.Info("All seeders completed successfully")
    return nil
}

Finally, we expose a CLI entrypoint cmd/cli/main.go to run all seeders directly from the terminal. This CLI can be extended with more commands in the future.

package main

import (
    "os"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/config"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/database"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/database/seeder"
)

func main() {
    log := logger.NewZerologLogger("info", os.Stderr)

    if len(os.Args) < 2 {
        log.Info("Usage: go run cmd/cli/main.go seed")
        os.Exit(1)
    }

    cmd := os.Args[1]

    cfg, err := config.NewConfig(log)
    if err != nil {
        log.Fatal(err.Error())
    }

    switch cmd {
    case "seed":
        db, err := database.NewDatabase(&database.Opts{
            Config: cfg.Database,
            Logger: log,
        })
        if err != nil {
            log.Fatal(err.Error())
        }
        defer db.Close()

        err = seeder.RunAll(&seeder.Opts{
            DB:  db.DB(),
            Log: log,
        })
        if err != nil {
            log.Fatal(err.Error())
        }
    default:
        log.Error("Unknown command " + cmd)
    }
}

We can run all seeders with a single Make command:

make seed

This quickly populates your database with initial data — like test users — so you can immediately start calling APIs and RPCs without manually adding records. It’s a small step, but it makes your local and CI workflows far more efficient and consistent.

Service Layer Pattern (UserService Example)

Now that our database setup is ready, the next step is to structure how we interact with it. In any well-designed microservice, business logic shouldn’t live inside gRPC or HTTP handlers — that’s where the service layer comes in.

The service layer acts as a clean boundary between your transport layer (gRPC/HTTP) and your data layer (Postgres via GORM). It encapsulates all business logic, leaving handlers focused only on request/response handling. This approach keeps your codebase modular, testable, and easier to evolve as the system grows.

Let’s start with a simple UserService, which lives under:

internal/service/user.go

By isolating logic in a dedicated service, we can easily mock it in tests or extend it later without touching transport logic. It also makes refactoring safer — for instance, swapping GORM for another ORM or even a different persistence layer won’t affect higher layers of the application.

Implementation

package service

import (
    "context"
    "gorm.io/gorm"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/database"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/database/model"
)

type UserService interface {
    FindByID(ctx context.Context, id uint) (*model.User, error)
}

type userService struct {
    db *gorm.DB
}

func NewUserService(db database.DatabaseService) UserService {
    return &userService{db: db.DB()}
}

func (s *userService) FindByID(ctx context.Context, id uint) (*model.User, error) {
    var user model.User
    if err := s.db.WithContext(ctx).First(&user, id).Error; err != nil {
        return nil, err
    }
    return &user, nil
}

The UserService interface defines the contract for user-related operations. This abstraction makes it easy to plug in mocks for unit testing or replace the implementation later. Its concrete type, userService, handles actual database operations using GORM — keeping queries neatly contained within the service layer.

Integration Testing with Testcontainers

Now that we’ve defined a clean service layer, let’s make sure our business logic actually works when connected to a real database — not just in mocks.

Unit tests are great for verifying logic in isolation, but they can’t always catch real-world issues like invalid SQL, schema drift, or subtle differences between database drivers. To bridge that gap, we’ll use Testcontainers for Go — a Go library that spins up lightweight, disposable Docker containers during tests.

This allows us to run integration tests against a real Postgres instance, ensuring our code works end-to-end just like it would in production.

All integration tests live under:

internal/tests/

and reusable helpers (like database setup) go under:

internal/tests/testutils/

This keeps our tests organized and makes setup logic easy to reuse across services.

Let’s look at an example that tests UserService.FindByID.
Instead of relying on mocks, this test runs against a temporary Postgres container spun up just for this test:

package service_test

import (
    "context"
    "testing"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/database/model"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/service"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/tests/testutils"
)

func TestUserService_FindByID(t *testing.T) {
    db := testutils.SetupPostgres(t)

    // Seed test data
    u := &model.User{Name: "Alice", Email: "alice@example.com"}
    require.NoError(t, db.DB().Create(u).Error)

    userService := service.NewUserService(db)

    got, err := userService.FindByID(context.Background(), u.ID)
    require.NoError(t, err)

    assert.Equal(t, "Alice", got.Name)
    assert.Equal(t, "alice@example.com", got.Email)
}

Setting Up Postgres with Testcontainers

The helper below, SetupPostgres, takes care of everything:

  • Starts a temporary Postgres container using Testcontainers
  • Builds a DSN dynamically
  • Connects through our existing database abstraction
  • Runs embedded migrations before returning the connection

Here’s the complete implementation:

package testutils

import (
    "io"
    "fmt"
    "time"
    "context"
    "testing"
    "github.com/stretchr/testify/require"
    "github.com/golang-migrate/migrate/v4"
    "github.com/golang-migrate/migrate/v4/source/iofs"
    "github.com/golang-migrate/migrate/v4/database/postgres"
    pgcontainer "github.com/testcontainers/testcontainers-go/modules/postgres"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/config"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/database"
    "github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func SetupPostgres(t *testing.T) database.DatabaseService {
    ctx := context.Background()
    log := logger.NewZerologLogger("info", io.Discard)

    dbName := "testdb"
    username := "test"
    password := "test"

    pgContainer, err := pgcontainer.Run(ctx,
        "postgres:15-alpine",
        pgcontainer.WithDatabase(dbName),
        pgcontainer.WithUsername(username),
        pgcontainer.WithPassword(password),
        pgcontainer.BasicWaitStrategies(),
    )
    require.NoError(t, err)

    t.Cleanup(func() {
        if err := pgContainer.Terminate(ctx); err != nil {
            t.Fatalf("failed to terminate container: %s", err.Error())
        }
    })

    host, err := pgContainer.Host(ctx)
    require.NoError(t, err)
    port, err := pgContainer.MappedPort(ctx, "5432")
    require.NoError(t, err)
    dsn := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
        username,
        password,
        host,
        port.Port(),
        dbName,
    )

    db, err := database.NewDatabase(&database.Opts{
        Config: &config.Database{
            DSN:                 dsn,
            Driver:              "postgres",
            PoolMaxIdleConns:    10,
            PoolMaxOpenConns:    100,
            PoolConnMaxLifetime: time.Hour,
        },
        Logger: log,
    })
    require.NoError(t, err)

    sqlDB, err := db.DB().DB()
    require.NoError(t, err)

    // Migration driver
    driver, err := postgres.WithInstance(sqlDB, &postgres.Config{})
    require.NoError(t, err)

    // Load embedded migrations
    d, err := iofs.New(database.MigrationsFS, "migrations")
    require.NoError(t, err)

    m, err := migrate.NewWithInstance("iofs", d, dbName, driver)
    require.NoError(t, err)

    // Apply migrations before returning
    err = m.Up()
    if err != nil && err != migrate.ErrNoChange {
        t.Fatalf("failed to run migrations: %v", err)
    }

    return db
}

To make our migrations portable, we embed them directly into the Go binary:

// internal/database/migrations_embed.go

package database

import "embed"

//go:embed migrations/*.sql
var MigrationsFS embed.FS

Embedding ensures the tests (and even compiled binaries) always ship with the correct schema — no filesystem dependencies, no surprises in CI/CD.

The first run might take a bit longer because Docker needs to pull the Postgres image.

Finally, run your integration tests via Make:

make test-integration

Keeping integration tests separate from your fast unit tests ensures a smooth developer workflow while still guaranteeing your application works end-to-end in realistic conditions.
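One common way to achieve that separation — an assumption here, as the boilerplate may use a different mechanism such as directory-based filtering — is a Go build constraint at the top of each integration test file:

```go
//go:build integration

package service_test

// This file is compiled only when tests are run with the tag enabled:
//
//   go test -tags integration ./internal/tests/...
//
// A plain `go test ./...` skips it, keeping the default test run fast.
```

The `make test-integration` target would then simply wrap `go test -tags integration` for the relevant packages.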

Using the UserService in SayHello RPC

In Part 1, our SayHello RPC only returned a static greeting. Now that we’ve built the user service and connected it to the database, let’s update the RPC to demonstrate how application code can integrate with services and fetch data.

We’ll extend the request to include a user_id, and the response will return both a greeting and the corresponding user object from the database.

message HelloRequest {
  uint64 user_id = 1;
}

message HelloResponse {
  string message = 1;
  User user = 2;
}

message User {
  int64 id = 1;
  string name = 2;
  string email = 3;
}

To use UserService in the gRPC handler, we first pass the database from main.go to NewServer and include it in Opts and GRPCServer:

type Opts struct {
    Config   *config.GRPCServer
    Logger   logger.Logger
    Database database.DatabaseService
}

type GRPCServer struct {
    Server   *grpc.Server
    Config   *config.GRPCServer
    Logger   logger.Logger
    Database database.DatabaseService
}

func NewServer(opts *Opts) *GRPCServer {
    srv := grpc.NewServer(grpc.UnaryInterceptor(interceptor.LoggerInterceptor(opts.Logger)))
    helloworld.RegisterGreeterServer(srv, handler.NewGreeterServer(
        service.NewUserService(opts.Database),
    ))

    return &GRPCServer{
        Server:   srv,
        Config:   opts.Config,
        Logger:   opts.Logger,
        Database: opts.Database,
    }
}

Add UserService to GreeterServer:

type GreeterServer struct {
    helloworld.UnimplementedGreeterServer
    userService service.UserService
}

func NewGreeterServer(userService service.UserService) *GreeterServer {
    return &GreeterServer{userService: userService}
}

Then update SayHello() to query user by user_id:

func (s *GreeterServer) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloResponse, error) {
    user, err := s.userService.FindByID(ctx, uint(in.UserId))
    if err != nil {
        return nil, status.Errorf(codes.NotFound, "user not found")
    }

    return &pb.HelloResponse{
        Message: fmt.Sprintf("Hello, %s!", user.Name),
        User: &pb.User{
            Id:    int64(user.ID),
            Name:  user.Name,
            Email: user.Email,
        },
    }, nil
}

With a user already seeded in the database, you can test the RPC directly:

grpcurl -d '{"user_id": 1}' -proto ./proto/hello_world/hello_world.proto -plaintext localhost:5000 hello_world.Greeter/SayHello

And get back a response:

{
  "message": "Hello, Alice!",
  "user": {
    "id": 1,
    "name": "Alice",
    "email": "alice@example.com"
  }
}

This small change completes the first true end-to-end flow in our boilerplate — from gRPC → service → database → back to gRPC.
It also demonstrates how cleanly each layer interacts, thanks to the structure we’ve built so far.

Conclusion

In Part Two, we:

  • Integrated PostgreSQL with GORM.
  • Added migrations and seeders with a CLI entrypoint.
  • Introduced a service layer pattern with a UserService.
  • Wrote integration tests using Testcontainers.
  • Wired the service into our gRPC handler.

Your boilerplate now has a fully functional database layer, test coverage against a real Postgres instance, and a clean service abstraction ready to handle more business logic.

In Part Three, we’ll build on this foundation with caching using Redis, observability with Prometheus + OpenTelemetry, and health checks to make the service production-ready.

Here’s the code up to this part:
Part Two Code Snapshot

And here’s the latest version of the project:
go-microservice-boilerplate
