# Microservices Communication: REST, gRPC, and Message Queues
How your services talk to each other determines your system's resilience, latency, and coupling. Here's when to use each pattern.
## REST: The Default

Use REST for synchronous request-response between services:

```typescript
// Service-to-service REST call
async function getUserFromUserService(userId: string, requestId: string): Promise<User> {
  const response = await fetch(`${USER_SERVICE_URL}/users/${userId}`, {
    headers: {
      'Authorization': `Bearer ${SERVICE_TOKEN}`,
      'X-Request-ID': requestId, // Propagated for distributed tracing
    },
  });
  if (!response.ok) throw new Error(`User service error: ${response.status}`);
  return response.json();
}
```

Good for: simple queries where the caller needs the response immediately.
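Since resilience depends on how these calls fail, it's worth wrapping them in a retry. A minimal sketch (the helper name and backoff values are illustrative, not from any library):

```typescript
// Hypothetical helper: retries a failed async call with exponential backoff
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        // Wait 100ms, then 200ms, then 400ms...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}

// Usage: const user = await withRetry(() => getUserFromUserService('123', requestId));
```

Keep retries for idempotent reads; retrying non-idempotent writes needs more care (idempotency keys).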
## gRPC: When Performance Matters

gRPC uses Protocol Buffers (binary serialization) instead of JSON, which often benchmarks 5-10x faster for high-volume internal calls:
```protobuf
// user.proto
syntax = "proto3";

service UserService {
  rpc GetUser (GetUserRequest) returns (User);
  rpc ListUsers (ListUsersRequest) returns (stream User); // Server streaming
}

message GetUserRequest { string user_id = 1; }
message ListUsersRequest {}

message User {
  string id = 1;
  string email = 2;
  string name = 3;
}
```
The Node client loads the proto at runtime:

```typescript
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

const packageDef = protoLoader.loadSync('user.proto');
const proto = grpc.loadPackageDefinition(packageDef) as any;

const client = new proto.UserService(
  'user-service:50051',
  grpc.credentials.createInsecure() // Use TLS credentials outside a trusted network
);

// Note: runtime loading isn't type-safe; generate TypeScript types
// (e.g. with ts-proto) for compile-time checking
client.GetUser({ user_id: '123' }, (err: Error | null, user: User) => {
  if (err) throw err;
  console.log(user.email);
});
```
Good for: high-frequency internal calls, streaming, when teams use different languages.
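The callback API above gets awkward in async code. A small promisifying wrapper keeps call sites clean (a sketch, not part of `@grpc/grpc-js`):

```typescript
// Hypothetical wrapper: turns a callback-style client method into a Promise
function promisifyCall<Req, Res>(
  method: (req: Req, cb: (err: Error | null, res?: Res) => void) => void,
): (req: Req) => Promise<Res> {
  return (req) =>
    new Promise((resolve, reject) => {
      method(req, (err, res) => (err ? reject(err) : resolve(res as Res)));
    });
}

// Usage (bind keeps `this` pointing at the client):
// const getUser = promisifyCall(client.GetUser.bind(client));
// const user = await getUser({ user_id: '123' });
```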
## Message Queues: Decouple and Scale

With a queue (SQS here), publisher and consumer never talk directly:

```typescript
import {
  SQSClient,
  SendMessageCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from '@aws-sdk/client-sqs';

const sqs = new SQSClient({ region: 'us-east-1' });

// Publisher: fire and forget
await sqs.send(new SendMessageCommand({
  QueueUrl: process.env.ORDER_QUEUE_URL,
  MessageBody: JSON.stringify({ orderId: '123', type: 'ORDER_CREATED' }),
  MessageGroupId: '123', // FIFO queues only: messages within a group stay ordered
}));

// Consumer: process messages
async function processMessages() {
  const { Messages } = await sqs.send(new ReceiveMessageCommand({
    QueueUrl: process.env.ORDER_QUEUE_URL,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 20, // Long polling: wait up to 20s instead of polling in a tight loop
  }));
  for (const msg of Messages ?? []) {
    await processOrder(JSON.parse(msg.Body!));
    // Delete only after successful processing; otherwise the message
    // reappears after the visibility timeout and is retried
    await sqs.send(new DeleteMessageCommand({
      QueueUrl: process.env.ORDER_QUEUE_URL,
      ReceiptHandle: msg.ReceiptHandle!,
    }));
  }
}
```
Good for: async workflows, when caller doesn't need immediate response, fan-out to multiple consumers.
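One caveat: standard SQS queues deliver at least once, so consumers should be idempotent. A minimal dedupe sketch, keyed by message ID (in-memory here; a real service would use Redis or DynamoDB with a TTL):

```typescript
// Hypothetical dedupe guard: runs the handler only for unseen message IDs
const processedIds = new Set<string>();

async function handleOnce(
  messageId: string,
  handler: () => Promise<void>,
): Promise<boolean> {
  if (processedIds.has(messageId)) return false; // Duplicate delivery: skip
  processedIds.add(messageId);
  await handler();
  return true;
}
```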
## Decision Matrix
| Pattern | Latency | Coupling | When to Use |
|---|---|---|---|
| REST | Medium | Loose | Simple queries, needs response |
| gRPC | Low | Tight | High-volume, performance-critical |
| Message Queue | High (async) | Decoupled | Background work, fan-out |
For most SaaS products: REST for cross-service queries, queues for async workflows. gRPC when you've benchmarked and proven you need it.
Async job queues (BullMQ), service patterns, and async workflow infrastructure are part of the AI SaaS Starter Kit.