Welcome to the Go Microservices Boilerplate series!
In this series, we’ll build a reusable, production-ready boilerplate you can use to spin up new microservices quickly. We’ll start small — adding a logger, config loader, and a gRPC server — then gradually add more pieces like Postgres, Redis, health checks, and observability.
By the end, you’ll have a complete working template you can fork, extend, or simply learn from.
Table of Contents
- Project Structure
- Logger Setup
- Configuration Setup
- gRPC Server
- gRPC Interceptors
- Graceful Shutdowns
- Docker Setup
- Makefile
- Conclusion
Project Structure
Let’s start by outlining the directory structure.
At the root, we’ll keep standard files (Dockerfile, Makefile, .env, etc.). The main application code will live inside the internal/ directory, following Go’s convention for private packages.
Here’s what we’ll start with in Part 1:
.
├── proto/ # Protobuf definitions and generated code
├── cmd/ # Application entrypoints (cmd/server/main.go)
├── internal/ # Application modules (logger, config, transports, etc.)
├── Dockerfile # Multi-stage build for dev/prod
├── Makefile # Workflow automation
├── .env.example # Example environment variables
└── README.md # Documentation
Key design choices:
- cmd/server/main.go: where the service starts.
- internal/: holds the actual application logic. Inside, we’ll group modules like logger/, config/, and transports/grpc/. Later, we’ll also add database/, cache/, and more.
- transports/: keeps protocol-specific code. For example, grpc/server/ for server code, and later grpc/client/ if we want to add gRPC clients. This makes it easy to add more protocols (HTTP, WebSockets, etc.) without cluttering the codebase.
Logger Setup
Logging is one of the first things every service needs. Without good logs, debugging production issues is a nightmare.
We’ll define a Logger interface so we can easily swap implementations later (e.g. zerolog, zap, or a custom logger):
type Logger interface {
	Info(msg string, fields ...Field)
	Warn(msg string, fields ...Field)
	Debug(msg string, fields ...Field)
	Error(msg string, fields ...Field)
	Fatal(msg string, fields ...Field)
	Panic(msg string, fields ...Field)
}

type Field struct {
	Key   string
	Value interface{}
}
We’ll use Zerolog for logging. It’s a lightweight, zero-allocation JSON logger that’s fast enough for production and structured out of the box. That means every log entry is machine-readable (great for aggregators like Loki or ELK) while still being human-friendly in development.
Our NewZerologLogger constructor initializes the logger with:
- Log level parsing – falls back to info if the input is invalid.
- Output writer – defaults to stderr but can be swapped (e.g., for testing or file logging).
- Timestamps – automatically included with each log line.
Here’s the implementation:
package logger

import (
	"io"
	"os"

	"github.com/rs/zerolog"
)

// ZerologLogger implements the Logger interface on top of zerolog.
type ZerologLogger struct {
	log zerolog.Logger
}

func NewZerologLogger(level string, out io.Writer) *ZerologLogger {
	lvl, err := zerolog.ParseLevel(level)
	if err != nil {
		lvl = zerolog.InfoLevel
	}
	zerolog.SetGlobalLevel(lvl)

	if out == nil {
		out = os.Stderr
	}

	l := zerolog.New(out).With().Timestamp().Logger()

	return &ZerologLogger{log: l}
}
Now we can initialize it in cmd/server/main.go:
package main

import (
	"os"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func main() {
	log := logger.NewZerologLogger("info", os.Stderr)
	log.Info("Hello, World!", logger.Field{Key: "foo", Value: "bar"})
}
When running the service, we'll see structured JSON logs like:
{
  "level": "info",
  "foo": "bar",
  "time": "2025-09-26T15:23:50Z",
  "message": "Hello, World!"
}
By default, this boilerplate keeps log output in structured JSON, but during development you might prefer human-friendly pretty printing. Zerolog supports this via the ConsoleWriter:
package main

import (
	"os"

	"github.com/rs/zerolog"
	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func main() {
	log := logger.NewZerologLogger("info", zerolog.ConsoleWriter{Out: os.Stderr})
	log.Info("Hello, World!", logger.Field{Key: "foo", Value: "bar"})
}
This produces colorized, easy-to-read output in the terminal.
The boilerplate comes with unit tests for each module. Even though logging might seem trivial, testing it ensures two key guarantees:
- Log levels are applied correctly, so you don’t silently miss critical information.
- Structured fields are formatted as expected, keeping logs consistent and machine-readable.
Here’s an example test that validates the default log level and structured message output:
package logger_test

import (
	"bytes"
	"encoding/json"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func TestNewZerologLogger_DefaultLevel(t *testing.T) {
	var buf bytes.Buffer
	l := logger.NewZerologLogger("invalid-level", &buf)

	l.Info("hello world", logger.Field{Key: "foo", Value: "bar"})

	entry := parseLog(t, &buf)
	assert.Equal(t, "info", entry["level"])
	assert.Equal(t, "hello world", entry["message"])
	assert.Equal(t, "bar", entry["foo"])
}

// parseLog is a small helper that decodes a JSON log line into a map.
func parseLog(t *testing.T, buf *bytes.Buffer) map[string]any {
	t.Helper()

	var logEntry map[string]any
	err := json.Unmarshal(buf.Bytes(), &logEntry)
	require.NoError(t, err)

	return logEntry
}
Let's run the test:
go test ./internal/logger -v
You should see something like this:
=== RUN TestNewZerologLogger_DefaultLevel
--- PASS: TestNewZerologLogger_DefaultLevel (0.00s)
PASS
Configuration Setup
Next, let’s add configuration management. Services shouldn’t rely on hardcoded values: ports, URLs, and database credentials should all come from configuration.
We’ll use gofor-little/env to load values from a .env file (useful locally), but also support system environment variables (useful in environments like Kubernetes where env variables are injected from ConfigMaps/Secrets).
Here’s the setup:
package config

import (
	"fmt"
	"os"
	"path"

	"github.com/go-playground/validator/v10"
	"github.com/gofor-little/env"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

type Config struct {
	GRPCServer *GRPCServer `validate:"required"`
}

type GRPCServer struct {
	URL string `validate:"required,hostname_port"`
}

type LoaderOptions struct {
	EnvPath   string
	EnvLoader func(string) error
	Logger    logger.Logger
}

// NewConfig is a helper around NewConfigWithOptions with default values.
// rootDir is a small helper in this package that resolves the project root.
func NewConfig() (*Config, error) {
	return NewConfigWithOptions(LoaderOptions{
		EnvPath: path.Join(rootDir(), "..", ".env"),
	})
}

func NewConfigWithOptions(opts LoaderOptions) (*Config, error) {
	log := opts.Logger
	if log == nil {
		log = logger.NewZerologLogger("info", os.Stderr)
	}

	envLoader := opts.EnvLoader
	if envLoader == nil {
		envLoader = func(path string) error {
			if _, err := os.Stat(path); err != nil {
				return err
			}
			return env.Load(path)
		}
	}

	if err := envLoader(opts.EnvPath); err == nil {
		log.Info("Loaded environment variables from " + opts.EnvPath)
	} else {
		log.Info("Failed to load .env file, using system environment variables")
	}

	cfg := &Config{
		GRPCServer: &GRPCServer{
			URL: getEnv("GRPC_SERVER_URL", ":5000"),
		},
	}

	validate := validator.New()
	if err := validate.Struct(cfg); err != nil {
		return nil, fmt.Errorf("invalid config: %w", err)
	}

	return cfg, nil
}

func getEnv(key string, defaultVal string) string {
	if val := os.Getenv(key); val != "" {
		return val
	}
	return defaultVal
}
This setup gives the config loader flexibility. The LoaderOptions struct allows you to customize how environment variables are loaded, for example by specifying a different .env path or providing your own loading function (useful for tests or different environments). The EnvPath defines where the .env file is located, while the EnvLoader controls how it’s loaded.

Inside NewConfigWithOptions, the loader first tries to read variables from the .env file. If it doesn’t exist, it falls back to system environment variables instead. Finally, all fields are validated using go-playground/validator to ensure the configuration is complete and valid. Centralizing this logic in a single config package keeps configuration handling consistent across all services.
Usage in main.go:
package main

import (
	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/config"
	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func main() {
	log := logger.NewZerologLogger("info", nil)

	cfg, err := config.NewConfig()
	if err != nil {
		log.Fatal(err.Error())
	}

	log.Info("gRPC server started!", logger.Field{Key: "Addr", Value: cfg.GRPCServer.URL})
}
Let’s write a test to verify config works as expected:
package config_test

import (
	"io"
	"os"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/config"
	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func TestNewConfigWithEnvFile(t *testing.T) {
	tmpFile, err := os.CreateTemp("", "test.env")
	require.NoError(t, err)
	defer os.Remove(tmpFile.Name())

	tmpFile.Write([]byte(`GRPC_SERVER_URL=127.0.0.1:6000`))
	tmpFile.Close()

	cfg, err := config.NewConfigWithOptions(config.LoaderOptions{
		EnvPath: tmpFile.Name(),
		Logger:  logger.NewZerologLogger("info", io.Discard), // discard log output
	})

	require.NoError(t, err)
	assert.Equal(t, "127.0.0.1:6000", cfg.GRPCServer.URL)
}
This test creates a temporary .env file with a gRPC server config, loads it through NewConfigWithOptions, and verifies the value is correctly mapped to the struct.
gRPC Server
Now that we have logging and configuration in place, let’s bring our service to life with a gRPC server.
Why gRPC?
gRPC (Google Remote Procedure Call) is a high-performance communication framework built on top of HTTP/2. It:
- Uses Protocol Buffers (Protobufs) instead of JSON. Protobufs are compact, binary-encoded, and strongly typed.
- Supports multiplexing — multiple requests over a single TCP connection.
- Is generally faster and more efficient than REST for service-to-service calls; benchmarks often show severalfold speedups, though the gain depends on payload shape and workload.
- Is widely used in microservice architectures, where services often sit behind an API Gateway that translates REST ↔ gRPC (since browsers don’t natively support gRPC).
In internal/transports/grpc/server/server.go, let’s scaffold our server:
package server

import (
	"net"

	"google.golang.org/grpc"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/config"
	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

type Opts struct {
	Config *config.GRPCServer
	Logger logger.Logger
}

type GRPCServer struct {
	Server *grpc.Server
	Config *config.GRPCServer
	Logger logger.Logger
}

func NewServer(opts *Opts) *GRPCServer {
	srv := grpc.NewServer()

	return &GRPCServer{
		Server: srv,
		Config: opts.Config,
		Logger: opts.Logger,
	}
}

func (s *GRPCServer) ServeListener(listener net.Listener) error {
	return s.Server.Serve(listener)
}

func (s *GRPCServer) Serve() error {
	listener, err := net.Listen("tcp", s.Config.URL)
	if err != nil {
		return err
	}

	s.Logger.Info("gRPC server started", logger.Field{Key: "Addr", Value: s.Config.URL})

	return s.ServeListener(listener)
}
The Opts struct acts as a dependency container for the gRPC server: it holds configuration and logger references, keeping the server constructor clean and extendable. The NewServer function creates a new gRPC server instance using the provided configuration and logger. The ServeListener method starts the gRPC server on an existing network listener, which is handy for testing or more customized setups. Finally, the Serve method creates a TCP listener on the configured address and begins serving incoming gRPC requests. Together, these functions form a clean, reusable foundation for running gRPC services in your microservice.

Let’s start the server in main.go (in a goroutine, since Serve() blocks):
package main

import (
	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/transports/grpc/server"
)

func main() {
	//create logger...
	//create config...

	grpcServer := server.NewServer(&server.Opts{
		Config: cfg.GRPCServer,
		Logger: log,
	})

	go func() {
		if err := grpcServer.Serve(); err != nil {
			log.Fatal(err.Error())
		}
	}()
}
To make sure our gRPC server starts correctly, we can write a simple test that boots the server on a random free port (:0), waits briefly for startup, and checks the logs for the expected message:
package server_test

import (
	"bytes"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/config"
	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/transports/grpc/server"
)

func TestServe(t *testing.T) {
	var buf bytes.Buffer
	log := logger.NewZerologLogger("info", &buf)

	srv := server.NewServer(&server.Opts{
		Config: &config.GRPCServer{URL: ":0"}, // :0 picks a random free port
		Logger: log,
	})

	go func() { _ = srv.Serve() }()
	defer srv.Server.Stop()

	time.Sleep(100 * time.Millisecond) // give the server some time to start

	assert.Contains(t, buf.String(), "gRPC server started")
}
Right now, the server runs but doesn’t expose any RPCs. To define RPCs in gRPC, we use protocol buffer (.proto) files, which describe the API contract. A proto file typically contains:
- syntax – the proto version (we’ll use proto3).
- package – a namespace for your definitions, preventing name collisions.
- option go_package – tells the Go compiler where to place the generated code. Without this, Go packages can clash or be generated in undesired paths.
- service – defines a gRPC service and its available RPC methods.
- rpc – describes an individual remote procedure call, with input and output message types.
- message – defines structured request/response payloads.
Let’s create a simple SayHello RPC that responds with "Hello World". The definition lives in proto/hello_world/hello_world.proto:
syntax = "proto3";

package hello_world;

option go_package = "github.com/sagarmaheshwary/go-microservice-boilerplate/proto/hello_world";

service Greeter {
  rpc SayHello(SayHelloRequest) returns (SayHelloResponse);
}

message SayHelloRequest {}

message SayHelloResponse {
  string message = 1;
}
To generate Go code from proto files, you’ll need to install:
- protoc compiler
- Go plugins for protobuf and gRPC:
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
Now we can generate the code with the following command:
protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative ./proto/hello_world/hello_world.proto
Next, let’s implement the RPC in Go. The generated code from our .proto file provides an interface (GreeterServer) that we need to implement. We’ll define our handler in grpc/server/handler/greeter.go:
package handler

import (
	"context"

	helloworld "path-to-root/proto/hello_world"
)

type GreeterServer struct {
	// Embedding the generated UnimplementedGreeterServer gives forward
	// compatibility: new RPCs added to the proto won't break this type.
	helloworld.UnimplementedGreeterServer
}

func NewGreeterServer() *GreeterServer {
	return &GreeterServer{}
}

func (s *GreeterServer) SayHello(ctx context.Context, in *helloworld.SayHelloRequest) (*helloworld.SayHelloResponse, error) {
	return &helloworld.SayHelloResponse{Message: "Hello, World!"}, nil
}
To make this service available, register it inside NewServer():
srv := grpc.NewServer()
helloworld.RegisterGreeterServer(srv, handler.NewGreeterServer())
Now we can start the service and test it using grpcurl, a handy CLI tool for interacting with gRPC servers:
grpcurl -proto ./proto/hello_world/hello_world.proto localhost:5000 hello_world.Greeter/SayHello
Expected response:
{
  "message": "Hello, World!"
}
gRPC Interceptors
In gRPC, an interceptor works similarly to middleware in HTTP frameworks: it lets you inject logic before and after each RPC call. Common use cases include logging, authentication, metrics, and tracing.
An interceptor gives you access to useful fields such as:
- info.FullMethod – the full RPC method name (e.g. /hello_world.Greeter/SayHello).
- Request/response payloads – the actual objects passed to/from the handler.
- Context metadata – request-scoped values, including headers/trailers.
- Execution time – you can measure how long an RPC takes.
- Error – any error returned from the handler.
Unlike HTTP middleware (which can be attached to specific routes), gRPC interceptors are attached at the server level and therefore run for every RPC. If you need to apply logic only to certain RPCs, you can use a switch or if block inside the interceptor to match against info.FullMethod.
Since logging is a core part of this boilerplate, let’s add a simple unary interceptor that logs the RPC method, execution duration, and any error.
Code: grpc/server/interceptor/logger.go
package interceptor

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/logger"
)

func LoggingInterceptor(log logger.Logger) grpc.UnaryServerInterceptor {
	return func(
		ctx context.Context,
		req interface{},
		info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler,
	) (interface{}, error) {
		start := time.Now()

		// Call the actual RPC handler
		res, err := handler(ctx, req)

		elapsedMs := fmt.Sprintf("%.2fms", time.Since(start).Seconds()*1000)

		if err == nil {
			log.Info("gRPC request completed",
				logger.Field{Key: "method", Value: info.FullMethod},
				logger.Field{Key: "duration", Value: elapsedMs},
			)
		} else {
			log.Error("gRPC request failed",
				logger.Field{Key: "method", Value: info.FullMethod},
				logger.Field{Key: "duration", Value: elapsedMs},
				logger.Field{Key: "error", Value: err.Error()},
			)
		}

		return res, err
	}
}
Let's attach the interceptor when creating the server, passing in our log variable:

import (
	"google.golang.org/grpc"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/transports/grpc/server/interceptor"
)

srv := grpc.NewServer(
	grpc.UnaryInterceptor(interceptor.LoggingInterceptor(log)),
)
Example log output:
{
  "level": "info",
  "method": "/hello_world.Greeter/SayHello",
  "duration": "0.00ms",
  "time": "2025-09-26T15:34:13Z",
  "message": "gRPC request completed"
}
With this setup, every RPC call is logged consistently without adding boilerplate code in each handler.
Graceful Shutdowns
When shutting down, we want the service to finish active requests and close connections cleanly. Without this, you risk dropping in-flight requests or leaving resources (like DB connections) in a bad state.
In Go, the standard way to handle this is with signal.NotifyContext. It creates a context that is canceled when the process receives an OS signal (e.g., CTRL+C in dev, or a SIGTERM from Kubernetes during pod termination).
Here’s how we use it in main.go:
package main

import (
	"context"
	"errors"
	"os"
	"os/signal"
	"syscall"

	"google.golang.org/grpc"

	"github.com/sagarmaheshwary/go-microservice-boilerplate/internal/transports/grpc/server"
)

func main() {
	// os.Interrupt covers CTRL+C; syscall.SIGTERM is what Kubernetes sends.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	//create logger...
	//create config...

	grpcServer := server.NewServer(&server.Opts{
		Config: cfg.GRPCServer,
		Logger: log,
	})

	go func() {
		if err := grpcServer.Serve(); err != nil && !errors.Is(err, grpc.ErrServerStopped) {
			stop() // cancel context if server crashes
		}
	}()

	// Blocks until we receive an interrupt signal
	<-ctx.Done()

	log.Info("Signal received, shutting down gracefully...")
	grpcServer.Server.GracefulStop()
}
This ensures the gRPC server stops accepting new requests but completes any in-flight ones.
Docker Setup
One of the goals of this boilerplate is to make running and shipping your service as simple as possible. Docker helps us achieve that by packaging the Go service together with all its dependencies into a portable container.
To support different workflows, the provided Dockerfile is multi-stage:
- Builder stage → Compiles the Go binary.
- Production stage → Runs only the compiled binary in a minimal Alpine image.
- Development stage → Runs with Air for hot reloading during local development.
Here’s the Dockerfile:
# Stage 1: Build Go binary
FROM golang:1.25 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /app/main ./cmd/server/main.go
# Stage 2: Production image (only binary + Alpine)
FROM alpine:3.22 AS production
WORKDIR /app
COPY --from=builder /app/main .
EXPOSE 5000
CMD ["./main"]
# Stage 3: Development image with Air
FROM golang:1.25 AS development
WORKDIR /app
# Copy go.mod and go.sum first for dependency caching
COPY go.mod go.sum ./
RUN go mod download
RUN go install github.com/air-verse/air@v1.52.3
COPY . .
EXPOSE 5000
CMD ["air", "-c", ".air.toml"]
Development mode (hot reload)
Useful when iterating locally. The container mounts your source code, so changes are picked up instantly without rebuilding the image.
docker build --target development -t go-microservice-boilerplate:dev .
docker run -it --rm -p 5000:5000 -v $(pwd):/app go-microservice-boilerplate:dev
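The development stage expects an .air.toml at the project root. The boilerplate ships its own; as a rough illustration only (consult the Air documentation for the full option set, and treat these keys as assumptions), a minimal one might look like:

```toml
root = "."
tmp_dir = "tmp"

[build]
  # Rebuild the server binary on every change.
  cmd = "go build -o ./tmp/main ./cmd/server/main.go"
  bin = "./tmp/main"
  include_ext = ["go"]
  exclude_dir = ["tmp"]
```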
Production mode (lightweight & optimized)
Optimized for deployment. The final image contains only the statically compiled binary, making it extremely small and fast to start.
docker build --target production -t go-microservice-boilerplate:latest .
docker run -it --rm -p 5000:5000 go-microservice-boilerplate:latest
Makefile
When working on a project, you often end up typing the same long commands again and again — generating code, building Docker images, or starting the app. A Makefile makes this easier by turning those commands into short, memorable shortcuts.
For example, instead of remembering:
protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative ./proto/**/*.proto
You can just run:
make proto
Note: Using ./proto/**/*.proto tells protoc to generate code for every .proto file inside the subfolders of proto/ (e.g. proto/hello_world/hello_world.proto). This is different from the earlier example in the gRPC server section, where we only generated code for a single file. Keep in mind that how ** expands depends on your shell's globbing support.
Here’s a simplified Makefile from the boilerplate:
proto:
	protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative ./proto/**/*.proto

run:
	go run ./cmd/server

docker-build:
	docker build -t go-microservice-boilerplate:latest .

docker-run:
	docker run -it --rm -p 5000:5000 -v $(PWD)/.env:/app/.env go-microservice-boilerplate:latest
Now you can simply run:
make proto # Generate proto code
make run # Start the app locally
make docker-build # Build the Docker image
make docker-run # Run the service as docker container
Think of the Makefile as your project’s command center — one place where you and your teammates can find all the common commands without digging through docs or wikis.
You can use make help to find out which commands are available in the boilerplate.
Conclusion
In Part One, we:
- Set up project structure.
- Added logging and configuration.
- Built a gRPC server with an example SayHello RPC.
- Implemented graceful shutdown.
- Wrote a multi-stage Dockerfile for dev/prod.
- Automated workflows with Makefile.
Your boilerplate is now a reusable foundation for any Go microservice — structured, tested, and ready to grow.
In Part Two, we’ll integrate PostgreSQL with GORM, set up migrations and seeders, introduce a service layer pattern, and write integration tests using Testcontainers.
Here’s the code up to this part:
Part One Code Snapshot
And here’s the latest version of the project:
go-microservice-boilerplate