Nahuel Nucera

MiniStack, a free LocalStack alternative: v1.0.7 released

MiniStack is a free, MIT-licensed local AWS emulator — a drop-in replacement for LocalStack that runs 23 services on a single port with no account required. Today we're shipping v1.0.7.

What's new

Amazon Data Firehose

MiniStack now emulates Amazon Data Firehose (all 12 API operations):

import boto3, json

fh = boto3.client("firehose", endpoint_url="http://localhost:4566",
                  region_name="us-east-1",
                  aws_access_key_id="test", aws_secret_access_key="test")

# Create a delivery stream with an S3 destination
fh.create_delivery_stream(
    DeliveryStreamName="my-stream",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "BucketARN": "arn:aws:s3:::my-bucket",
        "RoleARN":   "arn:aws:iam::000000000000:role/firehose-role",
        "BufferingHints": {"SizeInMBs": 1, "IntervalInSeconds": 60},
        "Prefix": "events/",
    },
)

# Put a record — pass raw bytes; boto3 base64-encodes the blob on the wire.
# The S3 destination writes synchronously to the local S3 emulator.
fh.put_record(
    DeliveryStreamName="my-stream",
    Record={"Data": json.dumps({"event": "click"}).encode()},
)

# Batch ingestion
records = [{"Data": f"record-{i}".encode()} for i in range(100)]
resp = fh.put_record_batch(DeliveryStreamName="my-stream", Records=records)
assert resp["FailedPutCount"] == 0

Full operation coverage:

  • CreateDeliveryStream / DeleteDeliveryStream / DescribeDeliveryStream / ListDeliveryStreams
  • PutRecord / PutRecordBatch
  • UpdateDestination — concurrency-safe via CurrentDeliveryStreamVersionId
  • TagDeliveryStream / UntagDeliveryStream / ListTagsForDeliveryStream
  • StartDeliveryStreamEncryption / StopDeliveryStreamEncryption
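
The CurrentDeliveryStreamVersionId check in UpdateDestination is classic optimistic concurrency: an update succeeds only if the caller's version id still matches the stream's, and every successful update bumps the id. A toy sketch of the pattern (hypothetical class and names, not MiniStack's actual internals):

```python
import itertools

class VersionConflictError(Exception):
    """Raised when the caller's version id no longer matches the stream's."""

class DeliveryStream:
    """Toy delivery stream guarded by an optimistic-concurrency version id."""
    _ids = itertools.count(1)

    def __init__(self, destination):
        self.destination = destination
        self.version_id = str(next(self._ids))

    def update_destination(self, current_version_id, new_destination):
        # Reject stale writers: the supplied id must match the live one.
        if current_version_id != self.version_id:
            raise VersionConflictError(
                f"expected {self.version_id}, got {current_version_id}")
        self.destination = new_destination
        # Bump the version so a concurrent caller holding the old id fails
        # instead of silently overwriting this update.
        self.version_id = str(next(self._ids))
        return self.version_id

stream = DeliveryStream({"BucketARN": "arn:aws:s3:::my-bucket"})
v1 = stream.version_id
stream.update_destination(v1, {"BucketARN": "arn:aws:s3:::other-bucket"})
# A second caller still holding v1 now gets a conflict, not a lost update.
```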

Destination types accepted: ExtendedS3, S3 (deprecated), HttpEndpoint, Redshift, OpenSearch, Splunk, Snowflake, Iceberg. S3 destinations write records to the local S3 emulator synchronously. All other destination types buffer records in-memory (great for testing your producer code without a real backend).

MiniStack's Firehose also sidesteps four known LocalStack Community bugs:

  • ExtendedS3DestinationConfiguration + PutRecord crashes in LocalStack (issue #5936) — works fine here
  • KinesisStreamAsSource stream creation sometimes fails in LocalStack (issue #1758) — never fails here
  • DescribeDeliveryStream doesn't return HttpEndpointDestinationDescription in LocalStack (issue #3384) — returned correctly here
  • Elasticsearch KeyError: 'IndexName' in LocalStack (issue #5047) — no crash here

Virtual-hosted style S3

AWS SDKs can address S3 buckets via the host header (bucket.s3.amazonaws.com). MiniStack now supports the local equivalent:

from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    config=Config(s3={"addressing_style": "virtual"}),
    ...
)
# Requests are sent as: http://my-bucket.localhost:4566/key
# MiniStack rewrites them to path-style internally — transparent to your code
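
That rewrite is essentially host-header parsing. A minimal sketch of how such a rewrite can work (an illustration with assumed names, not MiniStack's actual source):

```python
def rewrite_virtual_hosted(host: str, path: str, base_host: str = "localhost") -> str:
    """Turn a virtual-hosted request (bucket.localhost/key) into path-style (/bucket/key).

    Returns the path unchanged when the Host header carries no bucket prefix.
    """
    hostname = host.split(":", 1)[0]       # drop the port, if any
    suffix = "." + base_host
    if hostname.endswith(suffix):
        bucket = hostname[: -len(suffix)]  # everything before ".localhost"
        return f"/{bucket}{path}"
    return path

print(rewrite_virtual_hosted("my-bucket.localhost:4566", "/key"))        # /my-bucket/key
print(rewrite_virtual_hosted("localhost:4566", "/my-bucket/key"))        # /my-bucket/key
```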

DynamoDB OR/AND expression fix

A subtle parser bug caused ConditionExpression and FilterExpression to crash with Invalid expression: Expected RPAREN, got NAME_REF when using numeric ExpressionAttributeNames keys (#0, #1) — which PynamoDB generates automatically on composite key tables.

# This now works correctly
dynamodb.put_item(
    TableName="my-table",
    Item={"pk": {"S": "row#1"}, "sk": {"S": "row#1"}, "updated_at": {"S": "2026-01-01T00:00:00Z"}},
    ConditionExpression="(attribute_not_exists (#0) OR #1 <= :0)",
    ExpressionAttributeNames={"#0": "pk", "#1": "updated_at"},
    ExpressionAttributeValues={":0": {"S": "2026-01-01T00:00:00Z"}},
)

Root cause: The recursive-descent expression evaluator used Python's or/and operators directly (left = left or self._and_expr()). When left was truthy, Python short-circuited and never consumed the right-hand tokens from the stream. The outer expect('RPAREN') then found the wrong token. Fixed by always eagerly evaluating both sides before applying the logical operator.
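The short-circuit trap is easy to reproduce in a few lines. A stripped-down illustration, with a toy list of pre-evaluated boolean tokens standing in for the real lexer:

```python
class Parser:
    """Tiny recursive-descent OR evaluator over a flat token list."""

    def __init__(self, tokens):
        self.tokens = tokens   # e.g. [True, "OR", False]
        self.pos = 0

    def _term(self):
        value = self.tokens[self.pos]
        self.pos += 1          # consuming a term advances the stream
        return value

    def or_expr_buggy(self):
        left = self._term()
        while self.pos < len(self.tokens) and self.tokens[self.pos] == "OR":
            self.pos += 1
            # BUG: when `left` is truthy, Python short-circuits and never
            # calls _term(), so the right-hand token is never consumed.
            left = left or self._term()
        return left

    def or_expr_fixed(self):
        left = self._term()
        while self.pos < len(self.tokens) and self.tokens[self.pos] == "OR":
            self.pos += 1
            right = self._term()   # always consume the right-hand side
            left = left or right
        return left

buggy = Parser([True, "OR", False])
buggy.or_expr_buggy()
print(buggy.pos)   # 2: one token left dangling, so a later expect('RPAREN') misfires

fixed = Parser([True, "OR", False])
fixed.or_expr_fixed()
print(fixed.pos)   # 3: the stream is fully consumed
```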


By the numbers

  • 23 AWS services on a single port
  • 450 integration tests, all passing against the Docker image
  • ~150 MB Docker image, ~30 MB memory at idle, ~2s startup

Get it

# Docker
docker run -p 4566:4566 nahuelnucera/ministack:v1.0.7

# Or docker-compose
curl -O https://raw.githubusercontent.com/Nahuel990/ministack/main/docker-compose.yml
docker compose up

Then point any AWS SDK at http://localhost:4566:

aws --endpoint-url http://localhost:4566 firehose list-delivery-streams

Docker Hub: hub.docker.com/r/nahuelnucera/ministack

Full changelog and source: github.com/Nahuel990/ministack
