Hector Ventura


Floci (LocalStack alternative) storage modes: pick the right tradeoff per service (and never pay for it)

State that survives a docker compose down is one of those things you don't think about, until your test suite needs it, your local dev needs it, and your CI pipeline absolutely doesn't.

LocalStack handles persistence with one switch (PERSISTENCE=1) and it's a Pro-only feature. Floci ships four storage modes, all free, all in core, with per-service overrides. Pick the right tradeoff for the job.


The four modes at a glance

| Mode | Survives restart | Write behavior | Best for |
|------|------------------|----------------|----------|
| memory | No | Pure RAM | CI, unit tests, ephemeral integration tests |
| hybrid | Yes | Async flush to disk | Local development (the sweet spot) |
| persistent | Yes | Sync write on every change | "Don't lose my last write" workflows |
| wal | Yes | Append-only log + compaction | High-throughput durable workloads |

You set a global default, and override per service when one needs different behavior. That's it.


When to use each

memory (default) for CI and ephemeral tests

```yaml
# docker-compose.yml
services:
  floci:
    image: floci/floci:latest
    ports:
      - "4566:4566"
    environment:
      FLOCI_STORAGE_MODE: memory
```

Everything stays in RAM. Container restarts wipe the slate. Fastest possible writes, smallest possible footprint.

This is the right answer for almost every CI pipeline. Each test run starts clean, you don't manage volumes, and you don't fight stale state from a previous job. Combined with Floci's 24ms cold start and 13 MiB idle footprint, you can spin up a fresh emulator per test class without thinking.
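One thing memory mode doesn't remove is startup racing: even a fast cold start can lose to the first request in a parallel CI job, so it's worth polling for readiness before tests fire. A minimal sketch; the /health path is my assumption, not a documented Floci route, so check the storage docs for the real endpoint:

```python
import time
import urllib.error
import urllib.request

def wait_for_floci(url="http://localhost:4566/health", timeout=5.0,
                   interval=0.1, probe=None):
    """Poll the emulator until it answers, so tests don't race startup.

    `probe` is injectable for testing; by default it does an HTTP GET
    and treats a 200 as ready.
    """
    if probe is None:
        probe = lambda u: urllib.request.urlopen(u, timeout=1).status == 200
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if probe(url):
                return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet, keep polling
        time.sleep(interval)
    return False
```

Call it once in your test session setup and fail fast if it returns False, rather than letting the first real request time out.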

hybrid for local development

```yaml
# docker-compose.yml
services:
  floci:
    image: floci/floci:latest
    ports:
      - "4566:4566"
    volumes:
      - ./data:/app/data
    environment:
      FLOCI_STORAGE_MODE: hybrid
```

Reads and writes hit memory. A background flush moves data to disk every ~5 seconds. Container restarts pick up where you left off.

This is the mode I run locally. It feels exactly like memory while you're working, no I/O latency on writes, but docker compose down doesn't nuke your seeded test data. You spend less time re-running setup scripts.
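If you're curious what hybrid mode amounts to, it's essentially a write-back cache: mutations land in memory, and a timer flushes a snapshot to disk every interval. This toy sketch shows the shape of the tradeoff (a crash loses at most one interval of writes); it illustrates the idea only and is not Floci's implementation:

```python
import json
import threading

class HybridStore:
    """Toy write-back store: reads/writes hit memory, a timer flushes to disk."""

    def __init__(self, path, flush_interval=5.0):
        self.path = path
        self.flush_interval = flush_interval
        self.data = {}
        self.lock = threading.Lock()
        self._schedule_flush()

    def put(self, key, value):
        with self.lock:
            self.data[key] = value  # no disk I/O on the write path

    def flush(self):
        with self.lock:
            snapshot = dict(self.data)
        with open(self.path, "w") as f:
            json.dump(snapshot, f)  # crash between flushes loses <= 1 interval

    def _schedule_flush(self):
        self.flush()
        self._timer = threading.Timer(self.flush_interval, self._schedule_flush)
        self._timer.daemon = True
        self._timer.start()

    def close(self):
        self._timer.cancel()
        self.flush()  # final flush on clean shutdown
```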

persistent when losing the last write hurts

```yaml
# docker-compose.yml
services:
  floci:
    image: floci/floci:latest
    ports:
      - "4566:4566"
    volumes:
      - ./data:/app/data
    environment:
      FLOCI_STORAGE_MODE: persistent
```

Every change syncs to disk before responding. Slower writes, but if Docker hard-kills the container at the wrong moment, the data that was acknowledged is genuinely on disk.

Reach for this when you're building something where "did this actually save?" matters, like reproducing a production data corruption issue, or running a long-running local script you don't want to redo.
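The guarantee behind this mode is the classic write-then-fsync pattern: flush to the OS, fsync to the device, then rename into place, all before acknowledging. A minimal stdlib illustration of the pattern, not Floci's actual code:

```python
import os

def persistent_put(path, data: bytes):
    """Durably write `data` to `path` before the caller is acknowledged."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()              # push Python's buffer to the OS
        os.fsync(f.fileno())   # push the OS cache to the device
    os.replace(tmp, path)      # atomic rename: readers never see a half-write
```

The atomic rename is why a hard kill mid-write leaves either the old value or the new one, never a torn file.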

wal for high-write workloads that still need durability

```yaml
# docker-compose.yml
services:
  floci:
    image: floci/floci:latest
    ports:
      - "4566:4566"
    volumes:
      - ./data:/app/data
    environment:
      FLOCI_STORAGE_MODE: wal
      FLOCI_STORAGE_WAL_COMPACTION_INTERVAL_MS: 30000
```

Every mutation goes to an append-only log first, with periodic compaction. You get durability without paying the random-write cost of persistent mode.

Useful when you're stress-testing a service that writes a lot: high-volume DynamoDB tables, Kinesis-heavy pipelines, anything where persistent becomes the bottleneck but you still want crash safety.
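The WAL idea itself is simple: appends are sequential (cheap), replaying the log rebuilds state, and compaction keeps the log from growing forever. A toy sketch of the mechanism, again as illustration rather than Floci's implementation:

```python
import json
import os

class WalStore:
    """Toy write-ahead log: every mutation is appended, compaction rewrites it."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.state = {}
        self._replay()

    def _replay(self):
        """Rebuild in-memory state from the log on startup."""
        if not os.path.exists(self.log_path):
            return
        with open(self.log_path) as f:
            for line in f:
                entry = json.loads(line)
                self.state[entry["key"]] = entry["value"]

    def put(self, key, value):
        # Sequential append + fsync: durable, but cheaper than random writes.
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.state[key] = value

    def compact(self):
        # Collapse the log to one entry per key, bounding replay time.
        tmp = self.log_path + ".compact"
        with open(tmp, "w") as f:
            for key, value in self.state.items():
                f.write(json.dumps({"key": key, "value": value}) + "\n")
        os.replace(tmp, self.log_path)
```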


Per-service overrides

This is the part that doesn't exist anywhere else: set the global mode, then override only the services that need different behavior.

```yaml
# docker-compose.yml
services:
  floci:
    image: floci/floci:latest
    ports:
      - "4566:4566"
    volumes:
      - ./data:/app/data
    environment:
      FLOCI_STORAGE_MODE: memory                          # everything is ephemeral by default
      FLOCI_STORAGE_SERVICES_DYNAMODB_MODE: persistent    # except DynamoDB — keep that on disk
      FLOCI_STORAGE_SERVICES_S3_MODE: hybrid              # and S3 — keep buckets across restarts
```

Want fast tests but keep your seeded DynamoDB tables across restarts? Two env vars.
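Since the override variables follow a mechanical pattern, you can generate them instead of hand-writing each one. This helper assumes the FLOCI_STORAGE_SERVICES_&lt;SERVICE&gt;_MODE convention shown above holds for every service, which is worth confirming against the storage docs:

```python
VALID_MODES = {"memory", "hybrid", "persistent", "wal"}

def floci_storage_env(default, overrides=None):
    """Build the environment mapping for a compose file or `docker run -e` flags.

    The FLOCI_STORAGE_SERVICES_<SERVICE>_MODE naming is inferred from the
    examples above, not from a spec; verify it for your Floci version.
    """
    overrides = overrides or {}
    for mode in [default, *overrides.values()]:
        if mode not in VALID_MODES:
            raise ValueError(f"unknown storage mode: {mode!r}")
    env = {"FLOCI_STORAGE_MODE": default}
    for service, mode in overrides.items():
        env[f"FLOCI_STORAGE_SERVICES_{service.upper()}_MODE"] = mode
    return env
```

Validating modes up front turns a typo into an error at config-build time instead of a silently ignored env var.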


How this compares to LocalStack

LocalStack persistence is a single boolean (PERSISTENCE=1) that's locked behind the Pro tier. Beyond pricing, the design itself is different:

| | LocalStack (Pro) | Floci |
|---|---|---|
| Cost | Paid tier | Free, MIT-licensed |
| Granularity | Global on/off | Four modes, per-service overrides |
| Write strategy | Snapshots (point-in-time) | Sync, async, or WAL — your choice |
| During snapshot | Service is locked, requests block | No locking, writes never pause |
| Cross-version | Snapshots may break across versions | Plain on-disk format |
| Implementation | Python pickle serialization | Native format per service |

The locking part is the one most people don't see coming. LocalStack's snapshot mechanism blocks requests to a service while it's being saved, which is fine for shutdown, but surprising mid-test. Floci never pauses writes; durability comes from the storage mode you picked, not from a freeze-and-dump.


A pattern that just works

For most teams, two configs cover 90% of what you need. In CI:

```yaml
services:
  floci:
    image: floci/floci:latest
    ports:
      - "4566:4566"
    environment:
      FLOCI_STORAGE_MODE: memory
```

Local development:

```yaml
services:
  floci:
    image: floci/floci:latest
    ports:
      - "4566:4566"
    volumes:
      - ./data:/app/data
    environment:
      FLOCI_STORAGE_MODE: hybrid
```

No auth token, no Pro tier, no surprise locking.

That's the point. Persistence isn't an enterprise feature, it's a basic developer ergonomic. Floci ships it that way.


🔗 github.com/floci-io/floci · 📚 Storage docs · 💬 floci.slack.com
