In most microservice setups, service-to-service communication starts the same way:
GET /api/v1/users/{id}
It works. It’s familiar. It’s easy to debug.
But it forces service-to-service calls into a URL-driven model.
Internal services aren’t browsers. They don’t benefit from clean URLs or REST-style resource modeling. They don’t need JSON payloads designed around human readability. And they don’t need APIs designed around manual testing workflows.
What they need is a strict contract and a predictable call interface.
They need communication that behaves like calling a function:
- typed requests and responses
- enforced schemas
- consistent error semantics
- backward-compatible evolution
- explicit timeouts and deadlines
With gRPC, your microservices don’t “hit endpoints”. They call methods. You define the interface once using Protocol Buffers, generate strongly typed clients, and treat cross-service communication like a normal function call, except it happens over the network.
In this walkthrough, we’ll build a gRPC service in Go from scratch, implement a client, and cover the production details that actually matter.
The Problem: Why Are Microservices Talking Like Web Browsers?
Say you have two internal services:
- billing-service
- auth-service
billing-service needs to charge a user. Before doing that, it needs to validate a few things with auth-service:
- does the user exist?
- is the user active?
- what role does the user have?
A common approach is to expose a REST endpoint from auth-service and call it from billing-service:
```go
resp, err := http.Get("http://auth-service:8080/api/v1/users/123")
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()

var user struct {
    UserID string `json:"userId"`
    Active bool   `json:"active"`
    Role   string `json:"role"`
}
if err := json.NewDecoder(resp.Body).Decode(&user); err != nil {
    log.Fatal(err)
}

if !user.Active {
    log.Fatal("user is not active")
}
```
This works, and REST is a perfectly valid choice for internal communication.
But it comes with tradeoffs that tend to show up as systems grow.
This isn’t a browser fetching a page; it’s one backend service depending on another backend service. Yet REST forces that dependency to be expressed through URLs, HTTP verbs, and JSON payloads. Over time, those implementation details become the de facto contract between services.
That introduces a few common pain points:
- The contract is mostly implicit. The client learns the response shape through documentation and conventions, not enforced types.
- Breaking changes are easy to introduce. A renamed JSON field or missing attribute can break consumers at runtime.
- Error semantics rely on discipline. A 404 might mean “user not found”, but it can also mean “wrong route”, “bad version”, or “proxy misconfiguration”.
- JSON adds overhead. It’s text-based, requires encoding/decoding, and failures often surface at runtime.
- Boilerplate spreads everywhere. Every service ends up rewriting HTTP client logic, decoding, validation, and retries.
None of this makes REST “bad”. It just means that for internal service-to-service calls, where you want strict contracts and predictable behavior, REST often starts to feel like the wrong tool for the job.
And that’s usually when teams start looking at gRPC.
REST Inside Microservices Has a Silent Problem: Fake Contracts
The problem isn’t REST itself.
The problem is what REST often turns into inside a microservices environment:
- endpoints become “agreements”
- JSON becomes “schema”
- Slack threads become “documentation”
Unless you enforce schemas and versioning aggressively, the contract between services is mostly social, not technical.
For example, if the auth-service team changes a response from:
```json
{
  "active": true
}
```

to:

```json
{
  "isActive": true
}
```
billing-service still compiles. Tests might even pass if they don’t cover that path.
But production breaks.
And that’s the worst kind of failure:
- builds fine
- deploys fine
- fails at runtime
At that point, you’re not relying on a contract; you’re relying on hope.
What If Services Could Talk Like Functions Instead?
Instead of thinking:
“call this URL and parse whatever JSON comes back”
what if billing-service could just do this:
user, err := authClient.GetUser(ctx, &pb.GetUserRequest{UserId: "123"})
That’s not an endpoint. That’s a method call.
And the difference matters:
- the request is typed
- the response is typed
- the contract is defined in one place
- both sides generate code from the same definition
That’s the gRPC model.
You stop building internal APIs around URLs and start defining service interfaces the same way you’d define a package in Go: by its functions and the data structures they accept and return.
What gRPC Actually Is
gRPC is a service-to-service communication framework based on RPC (Remote Procedure Calls).
Instead of exposing resources through HTTP routes, a service exposes methods. Another service calls those methods using a generated client.
It’s still a network call. You still deal with latency, timeouts, retries, and failures.
The main difference is that gRPC enforces a defined interface using Protocol Buffers.
What Happens When You Call a gRPC Method?
When billing-service calls:
client.GetUser(ctx, req)
this is what happens:
- the request struct is serialized using protobuf
- the payload is sent over HTTP/2
- the server deserializes the request
- the server handler executes
- the response is serialized and returned
- the client deserializes the response into a typed struct
Both sides use generated code from the same .proto definition. That .proto file is the contract.
Step 1: Define the Contract (auth.proto)
📁 proto/auth.proto

```protobuf
syntax = "proto3";

package auth;

option go_package = "github.com/example/microservices-grpc/proto/authpb;authpb";

service AuthService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}

message GetUserRequest {
  string user_id = 1;
}

message GetUserResponse {
  string user_id = 1;
  bool active = 2;
  string role = 3;
}
```
This defines:
- the service interface (AuthService)
- available RPC methods (GetUser)
- request and response message types
Step 2: Generate Go Code
```shell
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
```
Make sure the binaries are in your PATH:
```shell
export PATH="$PATH:$(go env GOPATH)/bin"
```
Now generate the code:
```shell
protoc \
  --go_out=. --go_opt=module=github.com/example/microservices-grpc \
  --go-grpc_out=. --go-grpc_opt=module=github.com/example/microservices-grpc \
  proto/auth.proto
```

The `module=` option strips the module prefix from the output path, so the generated files land inside the repo instead of under a `github.com/...` directory tree.
This generates Go files under:
📁 proto/authpb/
Those files are machine-generated output; never edit them by hand.
If you ever need to change anything about the API:
- edit the .proto file
- regenerate the Go code again using protoc
Inside those generated files you get:
- request/response structs
- the server interface
- the client stub
That client stub is what makes gRPC calls feel like function calls.
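For orientation, this is roughly what protoc-gen-go-grpc emits for `AuthService` (abridged; option parameters and embedding details trimmed):

```go
// The client stub billing-service calls:
type AuthServiceClient interface {
	GetUser(ctx context.Context, in *GetUserRequest, opts ...grpc.CallOption) (*GetUserResponse, error)
}

// The server interface auth-service implements:
type AuthServiceServer interface {
	GetUser(context.Context, *GetUserRequest) (*GetUserResponse, error)
	mustEmbedUnimplementedAuthServiceServer()
}
```

Both interfaces come from the same `.proto` file, which is why the two services can never silently disagree about the request or response shape.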
Step 3: Implement auth-service (Server)
📁 auth-service/main.go

```go
package main

import (
	"context"
	"log"
	"net"

	"github.com/example/microservices-grpc/proto/authpb"
	"google.golang.org/grpc"
)

type authServer struct {
	authpb.UnimplementedAuthServiceServer
}

func (s *authServer) GetUser(ctx context.Context, req *authpb.GetUserRequest) (*authpb.GetUserResponse, error) {
	log.Printf("GetUser called with user_id=%s", req.UserId)

	// fake DB lookup
	if req.UserId == "123" {
		return &authpb.GetUserResponse{
			UserId: "123",
			Active: true,
			Role:   "premium",
		}, nil
	}

	return &authpb.GetUserResponse{
		UserId: req.UserId,
		Active: false,
		Role:   "unknown",
	}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen failed: %v", err)
	}

	srv := grpc.NewServer()
	authpb.RegisterAuthServiceServer(srv, &authServer{})

	log.Println("auth-service listening on :50051")
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve failed: %v", err)
	}
}
```
Step 4: Implement billing-service (Client)
📁 billing-service/main.go

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/example/microservices-grpc/proto/authpb"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial(
		"localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()

	client := authpb.NewAuthServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	resp, err := client.GetUser(ctx, &authpb.GetUserRequest{UserId: "123"})
	if err != nil {
		log.Fatalf("GetUser failed: %v", err)
	}

	log.Printf("User=%s active=%v role=%s", resp.UserId, resp.Active, resp.Role)

	if !resp.Active {
		log.Fatal("user not active, abort billing")
	}

	log.Println("billing can proceed")
}
```
Note: grpc.WithInsecure() is deprecated; grpc.WithTransportCredentials(insecure.NewCredentials()) is the currently supported way to dial without TLS. (Recent grpc-go releases also introduce grpc.NewClient as the preferred replacement for grpc.Dial, but Dial still works here.)
Step 5: Run It
At the project root:
📁 go.mod

```
module github.com/example/microservices-grpc

go 1.22

require google.golang.org/grpc v1.63.2
```

Run `go mod tidy` to pull in the remaining dependencies (including google.golang.org/protobuf).
Run the services:
Terminal 1:

```shell
go run auth-service/main.go
```

Terminal 2:

```shell
go run billing-service/main.go
```
Expected output (log timestamps omitted):

auth-service:

```
auth-service listening on :50051
GetUser called with user_id=123
```

billing-service:

```
User=123 active=true role=premium
billing can proceed
```
gRPC Call Flow (Diagram)

```
billing-service                                  auth-service
      |--- GetUserRequest (protobuf over HTTP/2) --->|
      |                                 handler runs |
      |<-- GetUserResponse (protobuf) ---------------|
```
Server Streaming
Now let’s extend the example into something that shows where gRPC becomes strictly better than REST for event-style communication.
Say billing-service wants to subscribe to auth-related events like:
- user logged in
- password changed
- account locked
In a REST world, you’d usually end up doing some form of polling:
GET /api/v1/events?since=...
And then you’d run it every few seconds like a caveman with a cron job.
Polling works, but it’s wasteful: most requests return nothing new, both services burn CPU and bandwidth on empty round trips, and your latency is bounded by the polling interval.
With gRPC, you don’t fake real-time communication.
You just stream.
Defining a Streaming RPC
Update the protobuf contract:
```protobuf
rpc WatchUserEvents(WatchUserEventsRequest) returns (stream UserEvent);

message WatchUserEventsRequest {
  string user_id = 1;
}

message UserEvent {
  string user_id = 1;
  string event_type = 2;
  int64 timestamp = 3;
}
```
That single keyword stream changes everything.
Instead of “request → response”, the server holds the connection open and pushes events as they occur.
Then regenerate the Go code:
```shell
protoc \
  --go_out=. --go_opt=module=github.com/example/microservices-grpc \
  --go-grpc_out=. --go-grpc_opt=module=github.com/example/microservices-grpc \
  proto/auth.proto
```
Now both services share the same contract, and your compiler becomes the enforcer of compatibility.
Implementing Server Streaming in auth-service
Inside auth-service/main.go, implement the streaming method:
```go
func (s *authServer) WatchUserEvents(
	req *authpb.WatchUserEventsRequest,
	stream authpb.AuthService_WatchUserEventsServer,
) error {
	log.Printf("WatchUserEvents started for user_id=%s", req.UserId)

	events := []string{"LOGIN", "PASSWORD_CHANGED", "ACCOUNT_LOCKED"}

	for _, e := range events {
		resp := &authpb.UserEvent{
			UserId:    req.UserId,
			EventType: e,
			Timestamp: time.Now().Unix(),
		}

		if err := stream.Send(resp); err != nil {
			return err
		}

		time.Sleep(2 * time.Second)
	}

	return nil
}
```
Don’t forget:
import "time"
This example is simplified (we’re just emitting fake events), but the shape is realistic.
In production, this loop would usually be backed by something like:
- a Kafka consumer
- Redis pub/sub
- a database WAL stream
- an internal event bus
The key idea stays the same: the server pushes messages as they happen.
Consuming the Stream in billing-service
On the client side, you call the RPC once and then continuously receive messages:
```go
stream, err := client.WatchUserEvents(ctx, &authpb.WatchUserEventsRequest{
	UserId: "123",
})
if err != nil {
	log.Fatalf("WatchUserEvents failed: %v", err)
}

for {
	event, err := stream.Recv()
	if err != nil {
		// Recv returns io.EOF when the server ends the stream
		// cleanly; any other error means the stream broke.
		log.Println("stream ended:", err)
		break
	}
	log.Printf("EVENT: %s at %d", event.EventType, event.Timestamp)
}
```
This is what “real-time service communication” actually looks like in clean engineering terms:
- one connection
- one contract
- structured messages
- backpressure handled by the transport
- no polling loops
This is gRPC solving a real system problem in the most direct way possible.
Final Thoughts
At the end of the day, gRPC just solves a different problem.
If you’re building service-to-service communication, URLs and JSON quickly start to feel like a workaround: you’re passing strings around, hoping everyone remembers the exact response shape, and most breakages only show up at runtime. With gRPC, the .proto file is the source of truth, your types are enforced, and calling another service feels like calling a real method, because the client stub is generated straight from that contract.
That said, gRPC isn’t always the smoothest experience everywhere. Debugging isn’t as simple as running curl and reading JSON. Most times you’ll use grpcurl, Postman, or enable reflection just to inspect and test things quickly. Also, browsers don’t speak gRPC natively, so if your consumers are frontend clients, you’ll probably keep REST at the edge or introduce gRPC-Web / a gateway.
And you still need discipline when evolving schemas. Protobuf makes it easier, but you can’t just reuse field numbers or delete fields carelessly without breaking older clients.
So the rule is pretty simple: if it’s internal microservices talking to each other, gRPC feels natural. If it’s a public API meant for browsers and humans, REST still makes sense.
Thanks for reading.
