We benchmarked 30 API operations, sustained throughput, and service coverage against real Docker containers. No cherry-picking. No marketing. Just numbers.
TL;DR
| Metric | MiniStack | Floci | LocalStack Free |
|---|---|---|---|
| Services supported | 31 | 20 | ~15 (rest paywalled) |
| Image size | 211 MB | 276 MB | ~1 GB |
| Memory after load | 39 MB | 56 MB | ~500 MB |
| Startup time | <2s | ~3s | ~15-30s |
| Median API latency | 4.6 ms | 5.2 ms | varies |
| License | MIT | MIT | BSL (restricted) |
Methodology
- Fresh `docker system prune -af --volumes` before each run
- Both images pulled from Docker Hub (`nahuelnucera/ministack:latest`, `hectorvent/floci:latest`)
- Each operation run 5 times, median taken
- All tests use boto3 with `endpoint_url=http://localhost:{port}`
- Machine: Apple Silicon M4, 24 GB RAM, Docker Desktop
- No warm-up runs — cold container, first request measured
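The median-of-5 timing described above can be sketched as a small harness. This is an illustrative reconstruction, not the actual benchmark script; the boto3 call shown in the comment is an assumed example operation.

```python
import time
from statistics import median

def time_op_ms(op, runs=5):
    """Run `op` `runs` times and return the median wall-clock latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000)
    return median(samples)

# In the real benchmark, `op` would be a boto3 call such as
#   lambda: s3.put_object(Bucket="bench", Key="k", Body=b"x" * 1024)
# against endpoint_url="http://localhost:4568". A no-op stands in here:
print(round(time_op_ms(lambda: None), 3), "ms")
```

Using `time.perf_counter()` rather than `time.time()` avoids clock-adjustment jitter at millisecond resolution.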
Image Size
| Image | Size |
|---|---|
| nahuelnucera/ministack:latest | 211 MB |
| hectorvent/floci:latest | 276 MB |
| localstack/localstack:latest | ~1.0 GB |
MiniStack is 24% smaller than Floci and 5x smaller than LocalStack. MiniStack uses Alpine + Python + Node.js. Floci uses a JVM-based stack.
Startup Time
| Tool | First Response |
|---|---|
| MiniStack | 1 ms |
| Floci | 15 ms |
| LocalStack | 15-30 seconds |
MiniStack starts instantly. No JVM warm-up, no class loading. The ASGI server is ready before the health check even fires.
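Time-to-first-response can be measured by polling the endpoint until it answers. A minimal sketch, with a stub probe standing in for the real HTTP check (the `/` endpoint and probe function are assumptions, not the benchmark's actual code):

```python
import time

def seconds_to_first_response(probe, timeout=60.0, interval=0.05):
    """Poll `probe` until it returns True; return elapsed seconds."""
    start = time.perf_counter()
    while time.perf_counter() - start < timeout:
        if probe():
            return time.perf_counter() - start
        time.sleep(interval)
    raise TimeoutError("no response within timeout")

# Against a real container, `probe` would be an HTTP GET against
# http://localhost:4568/ that returns True on any response. A stub
# that succeeds on the third attempt stands in here:
attempts = {"n": 0}
def stub_probe():
    attempts["n"] += 1
    return attempts["n"] >= 3

elapsed = seconds_to_first_response(stub_probe)
print(f"ready after {attempts['n']} probes, {elapsed:.3f}s")
```

The polling interval bounds the measurement error, so a tight interval (50 ms here) matters when the service is ready in single-digit milliseconds.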
API Latency (median of 5 runs, single operation)
S3
| Operation | Floci | MiniStack | Difference |
|---|---|---|---|
| CreateBucket | 5.9 ms | 5.6 ms | -5% |
| PutObject (1 KB) | 6.3 ms | 6.4 ms | +2% |
| PutObject (100 KB) | 10.6 ms | 7.3 ms | -31% |
| GetObject | 4.8 ms | 5.4 ms | +13% |
| ListObjectsV2 | 5.5 ms | 6.2 ms | +13% |
S3 is competitive. Floci edges ahead on small reads. MiniStack is significantly faster on larger writes (100 KB: 31% faster).
SQS
| Operation | Floci | MiniStack | Difference |
|---|---|---|---|
| CreateQueue | 4.5 ms | 4.4 ms | -2% |
| SendMessage | 9.8 ms | 8.3 ms | -15% |
| ReceiveMessage | 7.8 ms | 6.5 ms | -17% |
MiniStack is consistently faster on SQS operations.
DynamoDB
| Operation | Floci | MiniStack | Difference |
|---|---|---|---|
| CreateTable | 3.7 ms | 3.4 ms | -8% |
| PutItem | 3.7 ms | 4.2 ms | +14% |
| GetItem | 3.8 ms | 4.4 ms | +16% |
| Query | 4.3 ms | 5.0 ms | +16% |
| Scan | 4.2 ms | 4.6 ms | +10% |
Floci wins on DynamoDB read/write operations. This is likely due to Java's optimized JSON parsing for the DynamoDB wire format.
Other Services
| Operation | Floci | MiniStack | Difference |
|---|---|---|---|
| SNS CreateTopic | 3.9 ms | 3.8 ms | -3% |
| SNS Publish | 8.5 ms | 8.8 ms | +4% |
| IAM CreateRole | 5.0 ms | 5.9 ms | +18% |
| STS GetCallerIdentity | 5.1 ms | 4.5 ms | -12% |
| SSM PutParameter | 6.6 ms | 4.7 ms | -29% |
| SSM GetParameter | 4.7 ms | 5.2 ms | +11% |
| SecretsManager Create | 4.8 ms | 4.4 ms | -8% |
| SecretsManager Get | 4.7 ms | 4.4 ms | -6% |
| EventBridge PutRule | 5.3 ms | 4.7 ms | -11% |
| EventBridge PutEvents | 4.8 ms | 5.5 ms | +15% |
| Kinesis CreateStream | 5.6 ms | 5.1 ms | -9% |
| CW PutMetricData | 4.9 ms | 4.4 ms | -10% |
| Logs CreateLogGroup | 6.5 ms | 4.6 ms | -29% |
| Route53 CreateHostedZone | ERR | 4.3 ms | Floci doesn't support Route53 |
MiniStack is faster on SSM, SecretsManager, CloudWatch, and Logs. Floci is faster on IAM and EventBridge PutEvents. Route53 only works on MiniStack.
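The Difference column in the tables above is the relative change of MiniStack versus Floci, where negative means MiniStack is faster. A hypothetical helper (not from the benchmark code) makes the arithmetic explicit, checked against rows above:

```python
def pct_diff(floci_ms, ministack_ms):
    """Relative difference of MiniStack vs Floci, rounded to a whole
    percent. Negative means MiniStack is faster."""
    return round((ministack_ms - floci_ms) / floci_ms * 100)

# Reproduce the Difference column for a few rows:
print(pct_diff(5.9, 5.6))    # S3 CreateBucket       → -5
print(pct_diff(10.6, 7.3))   # S3 PutObject 100 KB   → -31
print(pct_diff(6.5, 4.6))    # Logs CreateLogGroup   → -29
```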
Throughput
| Test | Floci | MiniStack |
|---|---|---|
| SQS SendMessage x500 | 221 ops/s | 233 ops/s |
On sustained SQS throughput, MiniStack is 5% faster. Earlier cold-start benchmarks showed Floci ahead, but with warm containers the gap disappears.
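The sustained-throughput number (ops/s over 500 back-to-back sends) can be sketched as below. Again an illustrative reconstruction: the boto3 call in the comment is an assumed stand-in for the real test body.

```python
import time

def ops_per_second(op, n=500):
    """Run `op` n times back-to-back and return sustained throughput."""
    start = time.perf_counter()
    for _ in range(n):
        op()
    return n / (time.perf_counter() - start)

# In the real test, `op` would be a boto3 SQS call such as
#   lambda: sqs.send_message(QueueUrl=url, MessageBody="x")
# A no-op stands in here:
rate = ops_per_second(lambda: None)
print(f"{rate:.0f} ops/s")
```

Note this measures serial, single-connection throughput; a concurrent client would show different numbers.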
Memory Usage
| State | Floci | MiniStack |
|---|---|---|
| At idle | 26 MB | 38 MB |
| After 500+ operations | 56 MB | 39 MB |
Interesting: Floci uses less memory at idle (JVM lazy class loading) but grows to 56 MB after load. MiniStack starts at 38 MB and barely grows. Over time, MiniStack's memory profile is more predictable.
Service Coverage
| Service | Floci | MiniStack |
|---|---|---|
| S3 | YES | YES |
| SQS | YES | YES |
| SNS | YES | YES |
| DynamoDB | YES | YES |
| Lambda | YES | YES |
| IAM | YES | YES |
| STS | YES | YES |
| SecretsManager | YES | YES |
| CloudWatch Logs | YES | YES |
| SSM | YES | YES |
| EventBridge | YES | YES |
| Kinesis | YES | YES |
| CloudWatch Metrics | YES | YES |
| SES | YES | YES |
| Step Functions | YES | YES |
| Cognito | YES | YES |
| RDS | YES | YES |
| CloudFormation | YES | YES |
| ACM | YES | YES |
| KMS | YES | YES |
| ECS | NO | YES |
| ElastiCache | NO | YES |
| Glue | NO | YES |
| Athena | NO | YES |
| Firehose | NO | YES |
| Route53 | NO | YES |
| EC2/VPC | NO | YES |
| EMR | NO | YES |
| ELBv2/ALB | NO | YES |
| WAF v2 | NO | YES |
| ECR | NO | YES |
| Total | 20 | 31 |
MiniStack supports 55% more services. The gap is particularly significant for infrastructure-heavy workloads (ECS, RDS with real Docker, EC2/VPC, Route53, ALB).
Feature Comparison
| Feature | MiniStack | Floci | LocalStack Free |
|---|---|---|---|
| Lambda Python execution | YES | YES | YES |
| Lambda Node.js execution | YES | NO | YES |
| Lambda warm workers | YES | NO | YES |
| RDS real Postgres/MySQL | YES | YES | NO (Pro) |
| ECS real Docker containers | YES | NO | NO (Pro) |
| ElastiCache real Redis | YES | NO | NO (Pro) |
| Athena real SQL (DuckDB) | YES | NO | NO (Pro) |
| CloudFormation | YES (12 types) | YES | YES |
| Step Functions TestState API | YES | NO | NO |
| SFN Mock Config (SFN Local compat) | YES | NO | YES |
| State persistence | YES (20 services) | NO | Partial |
| S3 disk persistence | YES | YES | YES |
| Detached mode (`-d` / `--stop`) | YES | NO | NO |
| Terraform v6 compatible | YES | Partial | YES |
| AWS SDK v2 chunked encoding | YES | NO | YES |
| Testcontainers examples | Java, Go, Python | NO | Java |
| `docker run` one-liner | YES | YES | YES |
| PyPI installable | YES | NO | YES |
What Floci Does Better
Let's be honest about where Floci wins:
- DynamoDB read latency — 15-16% faster on GetItem/Query/Scan. Java's JSON processing is well-optimized for DynamoDB's wire format.
- Idle memory — 26 MB vs 38 MB at cold start. JVM defers class loading.
What MiniStack Does Better
- 11 more services — ECS, ElastiCache, Glue, Athena, Route53, EC2, EMR, ALB, WAF, Firehose, ECR.
- Real infrastructure — RDS spins up actual Postgres/MySQL. ECS runs real containers. Athena runs real SQL via DuckDB.
- Lambda Node.js — warm worker pool for both Python and Node.js.
- State persistence — 20 services survive restarts.
- Faster on most operations — SSM, SecretsManager, SQS, CloudWatch, Logs are 15-30% faster.
- Terraform v6 ready — EC2 stubs, S3 Control routing, DynamoDB WarmThroughput.
- Smaller image — 211 MB vs 276 MB (24% smaller).
When to Use What
Use MiniStack if:
- You need ECS, Route53, EC2, Glue, Athena, ALB, or any of the 11 extra services
- You're migrating from LocalStack and need maximum service coverage
- You want state persistence across container restarts
- You use Terraform v6
- You want Lambda Node.js support
Use Floci if:
- You only need the core 20 services
- DynamoDB read performance is critical for your test suite
- You want the smallest possible idle memory footprint
Use LocalStack Pro if:
- You need IAM policy enforcement
- You need Lambda container image support
- Budget isn't a constraint
Benchmark Reproducibility
All benchmarks can be reproduced with:
```bash
docker system prune -af --volumes
docker pull nahuelnucera/ministack:latest
docker pull hectorvent/floci:latest
docker run --rm -d --name ms -p 4568:4566 nahuelnucera/ministack:latest
docker run --rm -d --name fl -p 4567:4566 hectorvent/floci:latest
# Run your own boto3 tests against both ports
```
Versions Tested
- MiniStack: v1.1.27 (April 2026)
- Floci: latest (April 2026)
- LocalStack: comparison based on published documentation (not benchmarked directly)
- boto3: 1.34+
- Docker Desktop: latest
- Hardware: Apple M4, 24 GB RAM
This benchmark was created by the MiniStack team. We tried to be as fair as possible — if you find any methodology issues, please open an issue on GitHub.