DEV Community

Dylan Dumont

CQRS in Practice: Separating Reads and Writes Without the Hype

Most applications fail at scale not because they cannot write, but because their read models collide with write consistency requirements.

What We're Building

We are designing a user management service for a high-traffic SaaS platform. The system faces a classic bottleneck: administrative updates occur frequently, while public dashboards query user profiles millions of times per day. The goal is to prevent write operations from blocking read operations, which would degrade user experience. This article demonstrates a production-grade implementation of CQRS in Go to decouple these traffic patterns effectively.

Step 1 — Define Interfaces Separately

Establish a clear contract between the command layer and the query layer. This prevents accidental coupling between read and write logic at the design phase. By defining separate interfaces, the compiler guarantees that read code never executes write logic.

type UserCommand interface {
    Execute(ctx context.Context) error
}
type UserQuery interface {
    FindByID(ctx context.Context, id string) (*User, error)
}

This separation enforces that writers do not know about readers, reducing the risk of accidental data corruption during updates.

Step 2 — Implement Command Handlers

Commands modify state and persist the Aggregate Root in the primary database. We accept intent here and validate the input before touching the persistent store. The responsibility lies solely with the command handler to ensure integrity.

type UserHandler struct {
    repository UserRepository
}

// CreateUserCommand carries the caller's intent into the write side.
type CreateUserCommand struct {
    Name string
}

func (h *UserHandler) Execute(ctx context.Context, c CreateUserCommand) error {
    // Validate the intent before touching the persistent store.
    if c.Name == "" {
        return ErrMissingName
    }
    // Persist through the primary (command) database.
    return h.repository.Save(ctx, c)
}

Validation and business logic live here exclusively, ensuring that no read-side assumptions leak into the mutation path.

Step 3 — Build Read Models

Queries project data into an optimized structure designed specifically for retrieval. We do not read the command store directly for every dashboard view, because its normalized schema is not designed for efficient scanning. The read model is a simplified, denormalized representation built for fast retrieval.

type UserQueryHandler struct {
    readRepo ReadRepository
}

func (h *UserQueryHandler) FindByID(ctx context.Context, id string) (*User, error) {
    // Read from the optimized projection, never the command store
    projection, err := h.readRepo.Get(ctx, id)
    if err != nil {
        return nil, err
    }
    return &User{ID: id, Name: projection.Name}, nil
}

This ensures fast retrieval without database join costs, maintaining low latency even when the underlying write table is under heavy load.
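The shape of the projection record is left open above; a denormalized read model for this service might look like the following sketch (field names are assumptions, not from the source system):

```go
package main

// UserProjection is a flat, read-optimized record: everything a dashboard
// needs is precomputed at write time, so queries never join against the
// normalized command tables.
type UserProjection struct {
	ID         string
	Name       string
	TeamName   string // denormalized from a hypothetical teams table
	LoginCount int    // maintained incrementally by the projector
}
```

Every field a dashboard renders is paid for once, on the write path, instead of on every read.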

Step 4 — Synchronize Projections

Read models must stay consistent with the command model eventually, not immediately. We listen to domain events and update projections asynchronously. This decouples read latency from write speed and keeps read traffic from contending with write transactions.

+----------------+       +----------------+
|  Write Model   | ----> |   Read Model   |
|  (Command DB)  |       | (Projection DB)|
+----------------+       +----------------+
        |                        |
        v                        v
   [Save Event]         [Update Projection]
        |                        |
        +------ Async Queue -----+

The architecture diagram illustrates how events bridge the gap between state changes and data presentation.

Step 5 — Handle Concurrency

Use optimistic locking in commands to prevent lost updates when multiple requests try to modify the same aggregate. Reads are isolated to avoid dirty reads during updates, ensuring users always see a stable snapshot of data.

func (h *UserHandler) Execute(ctx context.Context, c UpdateUserCommand) error {
    user, err := h.repository.LoadForUpdate(ctx, c.UserID)
    if err != nil {
        return err
    }
    // Reject the write if another request bumped the version first
    if user.Version != c.Version {
        return ErrConflict
    }
    // ... proceed with update
    return h.repository.Save(ctx, user)
}

This avoids data races between concurrent operations without acquiring heavy database row locks, which would block other users.

Step 6 — Plan for Scale

Evaluate if the read model is shared or partitioned. Consider caching strategies like Redis for high-frequency queries. In a production setting, read paths should be designed to handle traffic bursts independently from write paths.

// Cache-aside read keyed by user ID
func (r *UserRepository) Get(ctx context.Context, id string) (*User, error) {
    cacheKey := "user:" + id
    // Serve hot entries straight from the cache
    if user, ok := r.cache.Get(cacheKey); ok {
        return user, nil
    }
    // Cache miss: fall back to the database and warm the cache
    user, err := r.db.Select(ctx, id)
    if err != nil {
        return nil, err
    }
    r.cache.Set(cacheKey, user)
    return user, nil
}

Caching reduces database load on the read path, ensuring the primary database focuses on transactional integrity.
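One wrinkle the cache-aside pattern leaves open is coherence: once a command mutates a user, the stale cached entry must be evicted. A simple delete-on-write sketch (a stand-in for a Redis `DEL` issued from the command path; all names are assumptions):

```go
package main

import "sync"

// Cache is a minimal stand-in for Redis: a concurrent map of keys to names.
type Cache struct {
	mu   sync.Mutex
	data map[string]string
}

func NewCache() *Cache { return &Cache{data: make(map[string]string)} }

func (c *Cache) Set(key, val string) {
	c.mu.Lock()
	c.data[key] = val
	c.mu.Unlock()
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.data[key]
	return v, ok
}

// Invalidate drops the entry so the next read falls through to the
// projection. Called from the command path right after a successful write.
func (c *Cache) Invalidate(key string) {
	c.mu.Lock()
	delete(c.data, key)
	c.mu.Unlock()
}
```

Deleting rather than updating the entry keeps the write path simple and lets the read path repopulate the cache with whatever the projection currently holds.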

Key Takeaways

  • Separation of Concerns isolates complex write logic from simple read queries, simplifying the mental model for developers.
  • Eventual Consistency is acceptable and often necessary for high performance: users may see updates a few seconds late, in exchange for consistently fast reads.
  • Type Safety in interfaces prevents runtime errors between layers, ensuring the system remains robust as features are added.
  • Asynchronous Sync decouples write throughput from read latency requirements, allowing the system to handle spikes in traffic independently.
  • Optimistic Locking handles concurrency without heavy database locks, significantly improving throughput for concurrent users.

What's Next?

You can explore Event Sourcing to make projections immutable and easier to replay. Implement a Saga pattern for distributed transactions involving multiple aggregates. Optimize your read projection database choice, such as choosing a column store for analytics. Finally, monitor latency metrics to validate consistency windows and ensure the system meets SLAs.

Further Reading

Part of the Architecture Patterns series.
