Voskan Voskanyan

Hybrid Cloud Stack: Balancing Aurora PostgreSQL and DynamoDB for Optimal Performance

At SolarGenix.ai, we are building an AI-driven platform that turns the slow, manual parts of solar proposals into a fast, reliable, and automated flow, from roof detection and shading analysis to financial modeling and polished customer-ready PDFs. We are a startup, and development is moving fast.

This article walks through how we split workloads between Amazon Aurora PostgreSQL and Amazon DynamoDB, what consistency/latency trade-offs we accept, and how a unified data-access layer in Go plus caching lets us keep developer ergonomics high without sacrificing performance or reliability.

Why we run both Aurora PostgreSQL and DynamoDB

Aurora PostgreSQL gives us strong consistency, relational integrity, and powerful SQL for reporting and joins: ideal for workflows that must be correct first and fast second (e.g., billing artifacts, subscription state, proposal audit trails, RBAC metadata). DynamoDB gives us predictably low latency, elastic throughput, and effortless horizontal scale: ideal for high-velocity, key-based access with simple access patterns (e.g., proposal snapshots, step autosaves, layout/planning intermediates, idempotency records).

On the product side, this split reflects how our engine works: the AI pipeline produces intermediate states and final artifacts that are naturally document-like and accessed by key, while financials, user/org relationships, and compliance data benefit from transactions, joins, and SQL ergonomics. The outcome is a platform that feels instant without compromising accuracy in the places where accuracy is non-negotiable.

Decision guide - which workloads go where?

  • Aurora PostgreSQL (strongly consistent, relational, transactional): billing artifacts, subscription state, proposal audit trails, RBAC metadata — anything needing cross-entity constraints or ad-hoc SQL.
  • DynamoDB (low latency, high throughput, partition-friendly): proposal snapshots, step autosaves, layout/planning intermediates, idempotency records — anything served by key lookups.

A simple rule we use internally: If the access pattern is "get by key, occasionally update, serve fast," it probably belongs in DynamoDB. If we need cross-entity constraints, transactions, or ad-hoc queries, it’s an Aurora problem.

Consistency, latency, and the "fast path"

  • User actions write to Aurora in a transactional way when correctness matters (e.g., plan upgrades), but we project a derived, read-optimized view into DynamoDB (or cache) for the UI fast path.
  • Background processors hydrate DynamoDB with the most-needed fields for the next interaction, turning expensive joins into a single-digit-millisecond key lookup (a minimal sketch follows this list).
  • For rare, cross-cutting queries, we go straight to Aurora and treat the added latency as acceptable (and cache results aggressively).
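
To make the second bullet concrete, here is a minimal projection sketch, not our exact production code: a background worker takes committed changes and writes a slim, read-optimized view into DynamoDB with the AWS SDK for Go v2. The ProposalView fields and the proposal_views table name are illustrative.

// projection.go - hypothetical background projector
package projection

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

// ProposalView carries only the fields the UI needs on its fast path.
type ProposalView struct {
    ID        string `dynamodbav:"pk"`
    AccountID string `dynamodbav:"account_id"`
    State     string `dynamodbav:"state"`
    UpdatedAt int64  `dynamodbav:"updated_at"`
}

// projector drains committed changes and hydrates the read-optimized table.
func projector(ctx context.Context, client *dynamodb.Client, changes <-chan ProposalView) {
    for view := range changes {
        item, err := attributevalue.MarshalMap(view)
        if err != nil {
            log.Printf("marshal view %s: %v", view.ID, err)
            continue
        }
        if _, err := client.PutItem(ctx, &dynamodb.PutItemInput{
            TableName: aws.String("proposal_views"), // illustrative table name
            Item:      item,
        }); err != nil {
            log.Printf("project view %s: %v", view.ID, err) // a real worker would retry or dead-letter
        }
    }
}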

This gives us strongly consistent writes where we need them and very low-latency reads where we want them, without confusing app developers: they talk to a single repository interface, and the implementation decides when/how to use each store.

Unified data-access layer in Go

We present storage behind a clean interface and keep policy (when to read/write which store) inside the repository, not the handlers. That means product teams ship features without learning two databases’ footguns.

Interface:

// pkg/proposals/repo.go
package proposals

import "context"

type ID string

type Proposal struct {
    ID          ID
    AccountID   string
    Version     int64
    State       string            // "draft", "ready" ...
    Snapshots   map[string]string // lightweight links to artifacts
    UpdatedAt   int64
}

type Repository interface {
    Get(ctx context.Context, id ID) (*Proposal, error)
    Save(ctx context.Context, p *Proposal) error
    Snapshot(ctx context.Context, id ID, label string, ref string) error
}

Implementation sketch (Aurora + DynamoDB + Cache)

// pkg/proposals/repo_hybrid.go
package proposals

import "context"

type hybridRepo struct {
    aurora  AuroraStore   // wraps pgx with ctx-aware tracing/retries
    ddb     DynamoStore   // wraps DynamoDB SDK with marshaling helpers
    cache   Cache         // Redis or in-memory with TTL + stampede control
    clock   Clock
    metrics Metrics
}

func (r *hybridRepo) Get(ctx context.Context, id ID) (*Proposal, error) {
    // Try cache
    if p, ok := r.cache.Get(string(id)); ok {
        return p, nil
    }

    // Hot path: DynamoDB by key
    if p, err := r.ddb.GetByID(ctx, string(id)); err == nil && p != nil {
        r.cache.Set(string(id), p, ttlFast())
        return p, nil
    }

    // Fallback: Aurora (authoritative), then project to DDB
    p, err := r.aurora.GetProposal(ctx, string(id))
    if err != nil {
        return nil, err
    }
    _ = r.ddb.Put(ctx, p)
    r.cache.Set(string(id), p, ttlSlow())
    return p, nil
}

func (r *hybridRepo) Save(ctx context.Context, p *Proposal) error {
    // Authoritative write to Aurora (transactional)
    if err := r.aurora.UpsertProposal(ctx, p); err != nil {
        return err
    }
    // Async or inline projection to DynamoDB for the fast path
    _ = r.ddb.Put(ctx, p)
    r.cache.Delete(string(p.ID))
    return nil
}

func (r *hybridRepo) Snapshot(ctx context.Context, id ID, label, ref string) error {
    // Snapshots are key-addressable, perfect for DynamoDB
    if err := r.ddb.AddSnapshot(ctx, string(id), label, ref, r.clock.Now()); err != nil {
        return err
    }
    r.cache.Delete(string(id))
    return nil
}
  • Aurora writes are the source of truth for mutable, relational entities.
  • DynamoDB holds read-optimized projections and append-only events (snapshots, idempotency, counters).
  • Cache shields both and absorbs spikes; cache TTLs reflect staleness tolerance per endpoint.
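
The ttlFast and ttlSlow helpers referenced in the repository above might look like this; the durations are placeholders and in practice come from each endpoint's staleness budget.

// ttl.go - illustrative TTL helpers for the cache layer
package proposals

import "time"

// ttlFast covers hot entries that DynamoDB can cheaply refresh.
func ttlFast() time.Duration { return 15 * time.Second }

// ttlSlow covers entries backfilled from Aurora, where a re-read is more expensive.
func ttlSlow() time.Duration { return 2 * time.Minute }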

Read/write patterns that actually matter in production

  • Autosave drafts: write to DynamoDB (cheap, frequent); periodically consolidate into Aurora.
  • Publishing a proposal: transactional write in Aurora; project final state to DynamoDB; bust cache.
  • Fetching the latest proposal for UI: cache → DynamoDB by key → fallback to Aurora (and re-project).
  • Audit/exports: run directly on Aurora with SQL; results cached by hash of the query params.
  • Idempotent APIs: store request hashes in DynamoDB with short TTL; reject duplicates fast (see the sketch after this list).
  • Rate limiting and quotas: DynamoDB counters (or Redis) with atomic increments and per-key TTL.
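
For the idempotency pattern, a conditional PutItem with attribute_not_exists plus a DynamoDB TTL attribute is enough to reject duplicates quickly. This is a hedged sketch rather than our exact handler: the table name, pk attribute, and 15-minute window are assumptions.

// idempotency.go - sketch of duplicate-request detection in DynamoDB
package idempotency

import (
    "context"
    "errors"
    "strconv"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// Reserve stores a request hash; it returns false when the hash already exists,
// i.e., the request is a duplicate within the TTL window.
func Reserve(ctx context.Context, client *dynamodb.Client, table, requestHash string) (bool, error) {
    expiresAt := time.Now().Add(15 * time.Minute).Unix() // TTL attribute, epoch seconds
    _, err := client.PutItem(ctx, &dynamodb.PutItemInput{
        TableName: aws.String(table),
        Item: map[string]types.AttributeValue{
            "pk":         &types.AttributeValueMemberS{Value: requestHash},
            "expires_at": &types.AttributeValueMemberN{Value: strconv.FormatInt(expiresAt, 10)},
        },
        ConditionExpression: aws.String("attribute_not_exists(pk)"),
    })
    var ccf *types.ConditionalCheckFailedException
    if errors.As(err, &ccf) {
        return false, nil // duplicate: an item with this hash is still live
    }
    if err != nil {
        return false, err
    }
    return true, nil
}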

Caching strategy (simple rules)

  • Per-entity caches keyed by ID with short TTLs (seconds to a minute).
  • Per-query caches keyed by normalized params with longer TTLs only for read-only analytics views.
  • Stampede protection (singleflight) around cold misses; negative caching for known-absent keys (sketched after this list).
  • Explicit cache busting on any state transition that affects the fast path.
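
Stampede protection can be as simple as wrapping the repository's Get in golang.org/x/sync/singleflight, so a cold key triggers exactly one load while concurrent callers share the result. A minimal sketch, assuming the hybridRepo from earlier:

// coalesce.go - collapse concurrent cold misses onto one loader per key
package proposals

import (
    "context"

    "golang.org/x/sync/singleflight"
)

var flight singleflight.Group

func (r *hybridRepo) getCoalesced(ctx context.Context, id ID) (*Proposal, error) {
    v, err, _ := flight.Do(string(id), func() (interface{}, error) {
        return r.Get(ctx, id) // one caller loads; the rest wait for this result
    })
    if err != nil {
        return nil, err
    }
    return v.(*Proposal), nil
}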

Observability and operations

  • Emit storage labels on every call: store=aurora|ddb|cache, op=get|put|tx, plus latencies and error classes (see the instrumentation sketch after this list).
  • Keep service-level SLOs: p95 read latency, error rates, projection lag.
  • Run regular consistency checks that diff a sample of Aurora rows against their DynamoDB projections; alert on shape drift.
  • Backfills and schema evolution run behind feature flags; repositories expose read-only mode if we need to pause writes during critical migrations.
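
One way to emit the store/op labels from the first bullet is a small Prometheus histogram wrapped around every storage call; the metric name and label set below are illustrative, not our exact dashboard contract.

// metrics.go - illustrative store/op instrumentation with Prometheus
package proposals

import (
    "time"

    "github.com/prometheus/client_golang/prometheus"
)

var storageLatency = prometheus.NewHistogramVec(
    prometheus.HistogramOpts{
        Name: "storage_op_duration_seconds",
        Help: "Latency of storage operations by store, op, and result.",
    },
    []string{"store", "op", "result"},
)

func init() { prometheus.MustRegister(storageLatency) }

// observe records one storage call, e.g. observe("ddb", "get", start, err).
func observe(store, op string, start time.Time, err error) {
    result := "ok"
    if err != nil {
        result = "error"
    }
    storageLatency.WithLabelValues(store, op, result).Observe(time.Since(start).Seconds())
}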

Using Aurora PostgreSQL and DynamoDB together isn't about hedging bets; it's about putting each workload where it performs best, then hiding that complexity behind a clean Go API and a disciplined caching layer. That's how we keep the product feeling instantaneous while preserving the correctness guarantees we need for money, compliance, and trust.
