A practical guide for backend engineers
Event‑driven architecture (EDA) has become a cornerstone of modern distributed systems. Whether you’re building microservices, real‑time analytics, or scalable data pipelines, events help you decouple services, scale independently, and react to changes in real time.
In this blog, we’ll walk through how to design and build an event‑driven system using Go and Apache Kafka, with clear concepts, architecture decisions, and real code examples.
1. What Is an Event‑Driven System?
In an event‑driven system:
- Producers emit events when something happens
- Brokers (like Kafka) persist and distribute those events
- Consumers react to events asynchronously
Instead of services calling each other directly (request/response), services communicate by publishing events.
Example events:
- OrderCreated
- PaymentProcessed
- BidSubmitted
- ReportGenerated
This pattern leads to:
- Loose coupling
- Better scalability
- Clear separation of responsibilities
2. Why Kafka + Go?
Why Kafka?
Apache Kafka is a distributed event streaming platform that provides:
- High throughput
- Durability (events are persisted)
- Replayability
- Strong ordering guarantees within partitions
Kafka is ideal when:
- You need reliable event delivery
- Consumers need to replay history
- Systems must scale horizontally
Why Go?
Go is an excellent fit for event‑driven systems because:
- It’s fast and lightweight
- Concurrency with goroutines is simple
- Binaries are small and easy to deploy
- Strong ecosystem for networking and streaming
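That second point is worth seeing in code. A minimal sketch (the `process` helper and event names are illustrative) fanning a batch of events out to goroutines:

```go
package main

import (
	"fmt"
	"sync"
)

// process simulates handling one event.
func process(event string) string {
	return "handled:" + event
}

func main() {
	events := []string{"OrderCreated", "PaymentProcessed", "BidSubmitted"}

	var wg sync.WaitGroup
	results := make(chan string, len(events))

	// Fan out: one goroutine per event.
	for _, e := range events {
		wg.Add(1)
		go func(ev string) {
			defer wg.Done()
			results <- process(ev)
		}(e)
	}

	// Wait for all handlers, then drain the channel.
	wg.Wait()
	close(results)

	for r := range results {
		fmt.Println(r)
	}
}
```

A WaitGroup plus a buffered channel is all it takes — no thread pools or executor frameworks.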
3. High‑Level Architecture
Let’s start with a simple architecture:
```
+-------------+        +------------+        +----------------+
|  Producer   | -----> |   Kafka    | -----> |  Consumer(s)   |
|  (Go App)   |        |   Topic    |        |  (Go Apps)     |
+-------------+        +------------+        +----------------+
```
4. Choosing a Kafka Client for Go
Popular Kafka libraries for Go:
- confluent‑kafka‑go → high performance (librdkafka based)
- sarama → pure Go, very popular
- segmentio/kafka‑go → simple API, Go‑native
For clarity and simplicity, we’ll use kafka-go.
Install it:
```shell
go get github.com/segmentio/kafka-go
```
5. Designing the Event Model
Events should be:
- Immutable
- Self‑describing
- Versioned
Example Event (JSON)
```json
{
  "event_type": "OrderCreated",
  "event_version": 1,
  "event_id": "c8b9c2e1-22a4-4b63-8c34-4ccf4f7f90aa",
  "timestamp": "2026-04-16T10:30:00Z",
  "payload": {
    "order_id": "ORD-123",
    "user_id": "USR-456",
    "amount": 1499.99
  }
}
```
6. Writing a Kafka Producer in Go
A producer emits events when something meaningful happens.
Producer Example
```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	writer := kafka.NewWriter(kafka.WriterConfig{
		Brokers:  []string{"localhost:9092"},
		Topic:    "orders",
		Balancer: &kafka.LeastBytes{},
	})
	defer writer.Close()

	// In a real service you would marshal a struct; a raw JSON
	// literal keeps the example short.
	event := []byte(`{
		"event_type": "OrderCreated",
		"event_version": 1,
		"order_id": "ORD-123",
		"amount": 1499.99
	}`)

	err := writer.WriteMessages(context.Background(),
		kafka.Message{
			Key:   []byte("ORD-123"), // same key -> same partition
			Value: event,
		},
	)
	if err != nil {
		log.Fatal("failed to write message: ", err)
	}
}
```
Key Points
- Use a key to ensure ordering (same key → same partition)
- Make producers idempotent if possible
- Emit events after a successful state change
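The "same key → same partition" rule can be illustrated with a plain hash. This sketch uses CRC32 for simplicity — real clients use their own algorithms (e.g. murmur2 in the Java client) — but the property is identical:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// partitionFor maps a message key to a partition. A given key always
// hashes to the same partition, so all events for one order land on
// one partition and stay in order.
func partitionFor(key string, numPartitions int) int {
	return int(crc32.ChecksumIEEE([]byte(key))) % numPartitions
}

func main() {
	// "ORD-123" appears twice and maps to the same partition both times.
	for _, key := range []string{"ORD-123", "ORD-456", "ORD-123"} {
		fmt.Printf("key=%s -> partition %d\n", key, partitionFor(key, 6))
	}
}
```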
7. Writing a Kafka Consumer in Go
Consumers react to events and perform side effects.
Consumer Example
```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "orders",
		GroupID: "order-processors",
	})
	defer reader.Close()

	for {
		msg, err := reader.ReadMessage(context.Background())
		if err != nil {
			log.Println("error reading message:", err)
			continue
		}
		log.Printf("Processing event: %s\n", string(msg.Value))
		// Process the event
	}
}
```
Consumer Groups
- Kafka distributes partitions across consumers
- Each message is processed once per group
- You can scale horizontally by adding consumers
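The distribution rule can be sketched as a round-robin spread of partitions over group members. The real rebalance protocol is richer (coordinators, generations, sticky assignors), so treat this as an illustration of the invariant only — each partition goes to exactly one consumer per group:

```go
package main

import "fmt"

// assignPartitions spreads partitions round-robin across the
// consumers of one group. Consumer names are illustrative.
func assignPartitions(numPartitions int, consumers []string) map[string][]int {
	assignment := make(map[string][]int, len(consumers))
	for p := 0; p < numPartitions; p++ {
		c := consumers[p%len(consumers)]
		assignment[c] = append(assignment[c], p)
	}
	return assignment
}

func main() {
	// 6 partitions over 3 consumers -> 2 partitions each.
	fmt.Println(assignPartitions(6, []string{"consumer-a", "consumer-b", "consumer-c"}))
}
```

Note the ceiling this implies: with 6 partitions, a 7th consumer in the group would sit idle.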
8. Handling Failures & Retries
Failures will happen.
Best Practices
- Make consumers idempotent
- Retry transient failures
- Send poison messages to a DLQ (dead letter topic)
- Commit offsets after successful processing
Pattern:
1. Read event
2. Process
3. On success → commit offset
4. On failure → retry or move to DLQ
9. Event‑Driven vs Request‑Driven
| Aspect | Request/Response | Event‑Driven |
|---|---|---|
| Coupling | Tight | Loose |
| Scalability | Harder | Easier |
| Latency | Immediate | Asynchronous |
| Resilience | Lower | Higher |
10. Common Mistakes to Avoid
- Treating Kafka like a message queue
- Putting business logic in producers
- Publishing database‑shaped events
- Ignoring schema evolution
- Blocking consumers with slow processing
11. When Event‑Driven Architecture Makes Sense
EDA is a great fit when:
- Multiple systems react to the same event
- You need auditability and replay
- You expect growth and change
- Near real‑time processing is required
It’s not ideal for:
- Simple CRUD apps
- Strict request/response workflows
12. Final Thoughts
Go + Kafka is a powerful combination for building scalable, resilient, event‑driven systems.
Start small:
- One topic
- One producer
- One consumer
Then evolve:
- Add schemas (Avro/Protobuf)
- Add retries and DLQs
- Add observability and metrics
Event‑driven systems reward good design upfront, but they scale beautifully when done right.