DEV Community

Mayank Singh

LocalStack: The Complete Guide to Running AWS Locally — From Zero to Production-Like Pipelines

Why I Wrote This

I spent months fighting with AWS bills, slow feedback loops, and "works on my machine but not on AWS" bugs. Then I discovered LocalStack — a tool that emulates AWS services on your laptop using Docker. It transformed our team's local development: no more shared dev accounts, no more waiting for CloudFormation stacks to deploy just to test a Lambda trigger.

This guide is everything I wish existed when I started. Whether you're setting up LocalStack for the first time or wiring up a multi-service pipeline with Kinesis, Lambda, SQS, and S3, it's all here.


Table of Contents

  1. What is LocalStack?
  2. Quick Docker Primer
  3. Installing LocalStack (Mac / Windows / Linux)
  4. Your First LocalStack Container
  5. Core Concepts: Endpoints, Credentials, and Regions
  6. Setting Up Individual AWS Services
  7. Building a Real Pipeline: Kinesis → Lambda → S3
  8. Adding SQS for Decoupled Processing
  9. Monitoring Everything with CloudWatch
  10. AWS SDK Integration (Node.js / TypeScript)
  11. Docker Compose for the Full Stack
  12. Initialization Scripts — Automating Resource Creation
  13. Advanced Patterns
  14. Troubleshooting Common Issues
  15. Free vs Pro — What You Actually Need
  16. Conclusion

1. What is LocalStack?

LocalStack is a cloud service emulator that runs entirely on your local machine inside a Docker container. It provides the same API endpoints as AWS, so your code, CLI commands, and IaC tools (Terraform, CloudFormation, Serverless Framework) work against it without modification.

Why use it?

  • Zero AWS costs for development and testing
  • Instant feedback — no waiting for cloud deployments
  • Offline development — works without internet
  • Isolated environments — every developer gets their own "AWS"
  • CI/CD integration — run integration tests against AWS-compatible APIs, no cloud account required

What it supports (Community / Free tier):

S3, SQS, SNS, Lambda, Kinesis, DynamoDB, CloudWatch, IAM, SSM, STS, CloudFormation, API Gateway, EventBridge, and many more.


2. Quick Docker Primer

LocalStack runs inside a Docker container. If you're already comfortable with Docker, skip ahead. Otherwise, here's the 60-second version:

Docker packages applications into isolated containers that include everything they need to run. Think of it as a lightweight virtual machine (containers share the host kernel, which is why they start in seconds rather than minutes).

# Install Docker Desktop (Mac/Windows) from https://docker.com
# Verify installation
docker --version
docker info

Key concepts you'll need:

  • Container: A running instance of an image (like a process)
  • Image: A blueprint for a container (like localstack/localstack:3.8)
  • Volume: Persistent storage that survives container restarts
  • Port mapping: 14566:4566 means "host port 14566 maps to container port 4566"
  • Docker Compose: A YAML file that defines multi-container setups

That's it. Any other Docker details are explained as they come up in this guide.


3. Installing LocalStack (Mac / Windows / Linux)

Prerequisites

  • Docker Desktop installed and running (download here)
  • Verify: docker info should show no errors

macOS (Homebrew — Recommended)

brew install localstack/tap/localstack-cli

macOS / Linux (Python pip)

python3 -m pip install --upgrade localstack

macOS / Linux (Binary Download)

# macOS (Intel)
curl -Lo localstack-cli.tar.gz \
  https://github.com/localstack/localstack-cli/releases/download/v4.14.0/localstack-cli-4.14.0-darwin-amd64-onefile.tar.gz

# Linux (x86-64)
curl -Lo localstack-cli.tar.gz \
  https://github.com/localstack/localstack-cli/releases/download/v4.14.0/localstack-cli-4.14.0-linux-amd64-onefile.tar.gz

sudo tar xvzf localstack-cli.tar.gz -C /usr/local/bin

Windows

  1. Download the binary from the GitHub releases page
  2. Extract the .zip file
  3. Add the extracted folder to your system PATH
  4. Or use Python: python3 -m pip install --upgrade localstack

Install the AWS CLI (Required)

You also need the AWS CLI to interact with LocalStack:

# macOS
brew install awscli

# Or via pip (all platforms)
pip install awscli

# Verify
aws --version

Install awslocal (Optional but Recommended)

awslocal is a thin wrapper around aws that automatically sets the endpoint to LocalStack:

pip install awscli-local

# Instead of: aws --endpoint-url=http://localhost:4566 s3 ls
# You can just: awslocal s3 ls

Verify Installation

localstack --version
# Output: 4.14.0 (or similar)

4. Your First LocalStack Container

Quick Start

localstack start -d

The -d flag runs it in the background. Check status:

localstack status services

Or Use Docker Directly

docker run -d \
  --name localstack \
  -p 4566:4566 \
  -e SERVICES=s3,sqs,kinesis,lambda,iam,logs,ssm,events \
  -e AWS_DEFAULT_REGION=us-east-1 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  localstack/localstack:3.8

Health Check

curl http://localhost:4566/_localstack/health | python3 -m json.tool

Expected output:

{
  "services": {
    "s3": "running",
    "sqs": "running",
    "kinesis": "running",
    "lambda": "running",
    ...
  }
}
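In CI it's handy to gate test jobs on this payload. A minimal sketch in Node — the checkHealth helper is illustrative, not part of LocalStack; newer versions report "available" for lazily-loaded services, so both states are accepted:

```javascript
// check-health.js — fail fast if required services aren't up.
function checkHealth(health, required) {
  const missing = required.filter((name) => {
    const state = health.services?.[name];
    return state !== 'running' && state !== 'available';
  });
  return { ok: missing.length === 0, missing };
}

// Example against a canned payload:
const payload = { services: { s3: 'running', sqs: 'available', lambda: 'running' } };
console.log(checkHealth(payload, ['s3', 'sqs', 'lambda']));
// → { ok: true, missing: [] }
```

Pipe the curl output into a script like this and exit non-zero when `ok` is false.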

5. Core Concepts: Endpoints, Credentials, and Regions

The Endpoint

All AWS services are available through a single endpoint: http://localhost:4566

Unlike real AWS where S3 is at s3.amazonaws.com and SQS is at sqs.us-east-1.amazonaws.com, LocalStack serves everything from one port.

Credentials

LocalStack doesn't validate credentials, but the AWS CLI and SDKs still require them. Use dummy values:

export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1

Account ID

LocalStack uses 000000000000 as the default account ID. You'll see this in ARNs:

arn:aws:kinesis:us-east-1:000000000000:stream/my-stream
arn:aws:sqs:us-east-1:000000000000:my-queue
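These ARNs follow the standard six-part format, which is easy to pick apart when debugging. A small illustrative parser (not from any AWS library):

```javascript
// parse-arn.js — split an ARN into its components.
// ARN format: arn:partition:service:region:account-id:resource
function parseArn(arn) {
  const [prefix, partition, service, region, accountId, ...rest] = arn.split(':');
  if (prefix !== 'arn') throw new Error(`Not an ARN: ${arn}`);
  // The resource part may itself contain colons, so re-join the remainder.
  return { partition, service, region, accountId, resource: rest.join(':') };
}

const arn = parseArn('arn:aws:kinesis:us-east-1:000000000000:stream/my-stream');
console.log(arn.accountId); // → 000000000000
console.log(arn.resource);  // → stream/my-stream
```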

The --endpoint-url Flag

Every aws CLI command needs to point to LocalStack:

aws --endpoint-url=http://localhost:4566 s3 ls

Or use awslocal to skip it:

awslocal s3 ls

6. Setting Up Individual AWS Services

Let's create each service one by one. Later, we'll wire them together.

S3 — Object Storage

# Create a bucket
awslocal s3 mb s3://my-data-bucket

# Upload a file
echo '{"hello": "world"}' > test.json
awslocal s3 cp test.json s3://my-data-bucket/data/test.json

# List bucket contents
awslocal s3 ls s3://my-data-bucket/data/

# Download a file
awslocal s3 cp s3://my-data-bucket/data/test.json downloaded.json

# Add CORS (needed if a frontend will access S3 directly)
awslocal s3api put-bucket-cors --bucket my-data-bucket --cors-configuration '{
  "CORSRules": [{
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": ["ETag"]
  }]
}'

Tip: Always set forcePathStyle: true in your S3 SDK clients when using LocalStack. Without it, the SDK tries virtual-hosted-style URLs (bucket.localhost:4566) which don't resolve.
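To see why that matters, compare the two URL styles the SDK can generate. The s3ObjectUrl helper below is hypothetical, for illustration only:

```javascript
// Path-style vs virtual-hosted-style S3 URLs. LocalStack needs path-style
// because "my-data-bucket.localhost" isn't a resolvable hostname by default.
function s3ObjectUrl(endpoint, bucket, key, { pathStyle = false } = {}) {
  const url = new URL(endpoint);
  if (pathStyle) return `${url.origin}/${bucket}/${key}`;
  return `${url.protocol}//${bucket}.${url.host}/${key}`;
}

console.log(s3ObjectUrl('http://localhost:4566', 'my-data-bucket', 'data/test.json', { pathStyle: true }));
// → http://localhost:4566/my-data-bucket/data/test.json
console.log(s3ObjectUrl('http://localhost:4566', 'my-data-bucket', 'data/test.json'));
// → http://my-data-bucket.localhost:4566/data/test.json
```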

SQS — Message Queues

# Standard queue
awslocal sqs create-queue --queue-name processing-queue

# FIFO queue (guaranteed ordering + deduplication)
awslocal sqs create-queue \
  --queue-name processing-queue.fifo \
  --attributes '{
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true"
  }'

# Create a Dead Letter Queue (DLQ)
awslocal sqs create-queue --queue-name processing-dlq

# Get DLQ ARN
DLQ_ARN=$(awslocal sqs get-queue-attributes \
  --queue-url http://localhost:4566/000000000000/processing-dlq \
  --attribute-names QueueArn \
  --query 'Attributes.QueueArn' --output text)

# Set DLQ policy on main queue
awslocal sqs set-queue-attributes \
  --queue-url http://localhost:4566/000000000000/processing-queue \
  --attributes "{
    \"RedrivePolicy\": \"{\\\"deadLetterTargetArn\\\":\\\"${DLQ_ARN}\\\",\\\"maxReceiveCount\\\":\\\"3\\\"}\"
  }"

# Send a message
awslocal sqs send-message \
  --queue-url http://localhost:4566/000000000000/processing-queue \
  --message-body '{"orderId": "12345", "action": "process"}'

# Receive messages
awslocal sqs receive-message \
  --queue-url http://localhost:4566/000000000000/processing-queue
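The backslash-heavy RedrivePolicy above is just nested JSON serialized by hand: the attribute value is itself a JSON string. The same thing built programmatically (redrivePolicyAttributes is an illustrative helper, not an SDK function):

```javascript
// Build the SQS attributes map for a redrive policy. Note that
// RedrivePolicy must be a JSON *string*, not a nested object, and
// maxReceiveCount is conventionally passed as a string.
function redrivePolicyAttributes(dlqArn, maxReceiveCount = 3) {
  return {
    RedrivePolicy: JSON.stringify({
      deadLetterTargetArn: dlqArn,
      maxReceiveCount: String(maxReceiveCount),
    }),
  };
}

const attrs = redrivePolicyAttributes('arn:aws:sqs:us-east-1:000000000000:processing-dlq');
console.log(attrs.RedrivePolicy);
// → {"deadLetterTargetArn":"arn:aws:sqs:us-east-1:000000000000:processing-dlq","maxReceiveCount":"3"}
```

When you use the SDK's SetQueueAttributesCommand instead of the CLI, this is the object you pass as Attributes, with no shell escaping needed.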

Kinesis — Event Streaming

# Create a stream
awslocal kinesis create-stream \
  --stream-name event-stream \
  --shard-count 1

# Verify
awslocal kinesis describe-stream --stream-name event-stream

# Put a record
awslocal kinesis put-record \
  --stream-name event-stream \
  --partition-key user-123 \
  --data '{"event": "page_view", "userId": "user-123", "page": "/products"}'

# Read records (get shard iterator first)
SHARD_ITERATOR=$(awslocal kinesis get-shard-iterator \
  --stream-name event-stream \
  --shard-id shardId-000000000000 \
  --shard-iterator-type TRIM_HORIZON \
  --query 'ShardIterator' --output text)

awslocal kinesis get-records --shard-iterator "$SHARD_ITERATOR"

Note: Kinesis data is base64-encoded. Decode it with: echo "<data>" | base64 -d
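The same decoding in Node, as your Lambda handlers will do it (decodeKinesisData is an illustrative helper):

```javascript
// Kinesis delivers record data base64-encoded; decode then parse.
function decodeKinesisData(b64) {
  return JSON.parse(Buffer.from(b64, 'base64').toString('utf-8'));
}

// Round-trip example: encode a payload the way Kinesis stores it, then decode.
const encoded = Buffer.from(JSON.stringify({ event: 'page_view', userId: 'user-123' })).toString('base64');
console.log(decodeKinesisData(encoded));
// → { event: 'page_view', userId: 'user-123' }
```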

Lambda — Serverless Functions

First, create a simple handler:

// handler.js
exports.handler = async (event) => {
  console.log('Received event:', JSON.stringify(event, null, 2));

  const records = event.Records || [];
  const processed = records.map(record => {
    // Kinesis records have base64-encoded data
    if (record.kinesis) {
      const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf-8');
      console.log('Kinesis payload:', payload);
      return JSON.parse(payload);
    }
    // SQS records have a body string
    if (record.body) {
      console.log('SQS message:', record.body);
      return JSON.parse(record.body);
    }
    return record;
  });

  return {
    statusCode: 200,
    body: JSON.stringify({ processed: processed.length }),
  };
};

Deploy it:

# Zip the handler
zip function.zip handler.js

# Create the function
awslocal lambda create-function \
  --function-name event-processor \
  --runtime nodejs20.x \
  --zip-file fileb://function.zip \
  --handler handler.handler \
  --role arn:aws:iam::000000000000:role/lambda-execution-role \
  --timeout 30 \
  --memory-size 256

# Wait for it to be active
awslocal lambda wait function-active-v2 \
  --function-name event-processor

# Test invoke
awslocal lambda invoke \
  --function-name event-processor \
  --cli-binary-format raw-in-base64-out \
  --payload '{"test": true}' \
  output.json

cat output.json

IAM — Roles and Policies

Lambda needs an execution role:

# Create the role
awslocal iam create-role \
  --role-name lambda-execution-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach basic Lambda execution policy
awslocal iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

# Add Kinesis read access
awslocal iam put-role-policy \
  --role-name lambda-execution-role \
  --policy-name KinesisReadPolicy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:DescribeStream",
        "kinesis:ListShards"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:000000000000:stream/*"
    }]
  }'

# Add S3 write access
awslocal iam put-role-policy \
  --role-name lambda-execution-role \
  --policy-name S3WritePolicy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::*/*"
    }]
  }'

Note: LocalStack Community edition doesn't enforce IAM policies. Creating roles and policies is still useful for parity with production and to avoid surprises when deploying to real AWS.

CloudWatch Logs — Monitoring

Lambda functions automatically create log groups in LocalStack:

# List log groups
awslocal logs describe-log-groups

# View log streams for a Lambda function
awslocal logs describe-log-streams \
  --log-group-name /aws/lambda/event-processor

# Fetch recent log events
awslocal logs get-log-events \
  --log-group-name /aws/lambda/event-processor \
  --log-stream-name '<stream-name-from-above>'

# Create a custom log group
awslocal logs create-log-group \
  --log-group-name /custom/my-application

# Tail logs (requires aws logs tail — AWS CLI v2)
aws --endpoint-url=http://localhost:4566 logs tail \
  /aws/lambda/event-processor --follow

SSM Parameter Store — Configuration

Store configuration that your services read at runtime:

# Store parameters
awslocal ssm put-parameter \
  --name "/myapp/local/database/url" \
  --value "postgresql://user:pass@db:5432/mydb" \
  --type String

awslocal ssm put-parameter \
  --name "/myapp/local/redis/url" \
  --value "redis://redis:6379" \
  --type String

awslocal ssm put-parameter \
  --name "/myapp/local/api-key" \
  --value "sk-local-test-key" \
  --type SecureString

# Read a parameter
awslocal ssm get-parameter \
  --name "/myapp/local/database/url" \
  --query 'Parameter.Value' --output text

# List parameters by path
awslocal ssm get-parameters-by-path \
  --path "/myapp/local/" \
  --recursive
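get-parameters-by-path returns a flat list of Name/Value pairs; in application code it's often folded into a single config object. A sketch, assuming the parameter names used above (foldParameters is an illustrative helper, not an SDK function):

```javascript
// Fold SSM parameters under a path prefix into one config object,
// turning the trailing path segments into dotted keys.
function foldParameters(parameters, prefix) {
  const config = {};
  for (const { Name, Value } of parameters) {
    const key = Name.startsWith(prefix) ? Name.slice(prefix.length) : Name;
    config[key.replace(/\//g, '.')] = Value;
  }
  return config;
}

const params = [
  { Name: '/myapp/local/database/url', Value: 'postgresql://user:pass@db:5432/mydb' },
  { Name: '/myapp/local/redis/url', Value: 'redis://redis:6379' },
];
console.log(foldParameters(params, '/myapp/local/'));
// → { 'database.url': 'postgresql://user:pass@db:5432/mydb', 'redis.url': 'redis://redis:6379' }
```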

7. Building a Real Pipeline: Kinesis → Lambda → S3

Now let's wire everything together into a realistic data pipeline.

Architecture

┌──────────────┐     ┌─────────────┐     ┌──────────────────┐     ┌──────────────┐
│  Application  │────▶│   Kinesis    │────▶│  Lambda Function │────▶│   S3 Bucket   │
│  (Producer)   │     │   Stream     │     │  (Processor)     │     │  (Storage)    │
└──────────────┘     └─────────────┘     └──────────────────┘     └──────────────┘
                                                  │
                                                  ▼
                                          ┌──────────────┐
                                          │  CloudWatch   │
                                          │  Logs         │
                                          └──────────────┘

Step 1: Create the Infrastructure

# 1. Create the Kinesis stream
awslocal kinesis create-stream \
  --stream-name event-pipeline \
  --shard-count 1

# 2. Create the S3 bucket for processed data
awslocal s3 mb s3://processed-events

# 3. Create the IAM role (see Section 6)
awslocal iam create-role \
  --role-name pipeline-lambda-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

Step 2: Write the Lambda Handler

This Lambda reads from Kinesis, processes the records, and writes results to S3:

// pipeline-handler.js
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3Client = new S3Client({
  region: process.env.AWS_DEFAULT_REGION || 'us-east-1',
  endpoint: process.env.AWS_ENDPOINT || undefined,
  forcePathStyle: true, // Required for LocalStack
});

const BUCKET_NAME = process.env.S3_BUCKET || 'processed-events';

exports.handler = async (event) => {
  console.log(`Processing ${event.Records.length} Kinesis records`);

  const results = [];

  for (const record of event.Records) {
    // Decode Kinesis record (base64 encoded)
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf-8');
    const data = JSON.parse(payload);

    console.log('Processing event:', data);

    // --- Your business logic here ---
    const enrichedData = {
      ...data,
      processedAt: new Date().toISOString(),
      partitionKey: record.kinesis.partitionKey,
      sequenceNumber: record.kinesis.sequenceNumber,
      shardId: record.eventSourceARN.split('/').pop(),
    };

    // Write each record to S3
    const key = `events/${data.eventType}/${Date.now()}-${record.kinesis.sequenceNumber}.json`;

    await s3Client.send(new PutObjectCommand({
      Bucket: BUCKET_NAME,
      Key: key,
      Body: JSON.stringify(enrichedData, null, 2),
      ContentType: 'application/json',
    }));

    console.log(`Saved to s3://${BUCKET_NAME}/${key}`);
    results.push(key);
  }

  console.log(`Successfully processed ${results.length} records`);
  return { statusCode: 200, processedKeys: results };
};
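One detail worth pulling out of the handler: the S3 key scheme. Keeping it in a pure function makes it trivial to unit-test without touching S3 (objectKey is an illustrative refactor, not code from the handler above):

```javascript
// Derive the S3 object key from event type, Kinesis sequence number,
// and a timestamp — the same shape the handler builds inline.
function objectKey(eventType, sequenceNumber, now = Date.now()) {
  return `events/${eventType}/${now}-${sequenceNumber}.json`;
}

console.log(objectKey('page_view', '49590338271490256608', 1700000000000));
// → events/page_view/1700000000000-49590338271490256608.json
```

Prefixing keys by event type also makes the recursive `s3 ls` and `s3 cp` commands in Step 4 naturally scoped to one event kind.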

Step 3: Deploy and Connect

# Package the Lambda (with dependencies)
# For a real project, you'd use esbuild or webpack. For this demo:
npm init -y
npm install @aws-sdk/client-s3
zip -r function.zip pipeline-handler.js node_modules/

# Create the Lambda
awslocal lambda create-function \
  --function-name kinesis-to-s3-processor \
  --runtime nodejs20.x \
  --zip-file fileb://function.zip \
  --handler pipeline-handler.handler \
  --role arn:aws:iam::000000000000:role/pipeline-lambda-role \
  --timeout 60 \
  --memory-size 256 \
  --environment "Variables={
    AWS_ENDPOINT=http://host.docker.internal:4566,
    S3_BUCKET=processed-events
  }"
  # Note: inside the Lambda container, "localhost" is the Lambda container
  # itself. host.docker.internal works on Docker Desktop; on Linux, point at
  # the LocalStack container's address instead.

# Wait for function to be active
awslocal lambda wait function-active-v2 \
  --function-name kinesis-to-s3-processor

# Create the Event Source Mapping (Kinesis → Lambda)
awslocal lambda create-event-source-mapping \
  --function-name kinesis-to-s3-processor \
  --event-source-arn arn:aws:kinesis:us-east-1:000000000000:stream/event-pipeline \
  --batch-size 10 \
  --starting-position LATEST \
  --maximum-retry-attempts 3

Step 4: Test the Pipeline

# Publish some events to Kinesis
for i in {1..5}; do
  awslocal kinesis put-record \
    --stream-name event-pipeline \
    --partition-key "user-$((RANDOM % 100))" \
    --cli-binary-format raw-in-base64-out \
    --data "{
      \"eventType\": \"page_view\",
      \"userId\": \"user-$((RANDOM % 100))\",
      \"page\": \"/products/$i\",
      \"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"
    }"
  echo "Published event $i"
done

# Wait a few seconds for Lambda to process
sleep 5

# Check S3 for processed files
awslocal s3 ls s3://processed-events/events/ --recursive

# Download and inspect a processed file
awslocal s3 cp s3://processed-events/events/page_view/ ./output/ --recursive
cat output/*.json | python3 -m json.tool

Step 5: Check CloudWatch Logs

# List log groups (Lambda auto-creates one)
awslocal logs describe-log-groups

# Get log streams
awslocal logs describe-log-streams \
  --log-group-name /aws/lambda/kinesis-to-s3-processor

# Read the logs
LOG_STREAM=$(awslocal logs describe-log-streams \
  --log-group-name /aws/lambda/kinesis-to-s3-processor \
  --query 'logStreams[0].logStreamName' --output text)

awslocal logs get-log-events \
  --log-group-name /aws/lambda/kinesis-to-s3-processor \
  --log-stream-name "$LOG_STREAM"

8. Adding SQS for Decoupled Processing

Let's extend the pipeline. After saving to S3, the Lambda also pushes a message to SQS for a downstream notification service to pick up.

Extended Architecture

                                              ┌─────────────┐
                                         ┌───▶│  S3 Bucket   │
                                         │    │  (Storage)   │
┌──────────┐    ┌─────────┐    ┌────────┐│    └─────────────┘
│ Producer  │───▶│ Kinesis  │───▶│ Lambda ││
└──────────┘    └─────────┘    └────────┘│    ┌─────────────┐    ┌───────────────┐
                                         └───▶│  SQS Queue   │───▶│ Notification  │
                                              │  (Buffer)    │    │  Service      │
                                              └─────────────┘    └───────────────┘
                                                    │
                                              ┌─────────────┐
                                              │  SQS DLQ     │
                                              │  (Failures)  │
                                              └─────────────┘

Create the Queues

# Notification queue
awslocal sqs create-queue --queue-name notification-queue

# Dead letter queue for failed notifications
awslocal sqs create-queue --queue-name notification-dlq

# Link DLQ to main queue
DLQ_ARN=$(awslocal sqs get-queue-attributes \
  --queue-url http://localhost:4566/000000000000/notification-dlq \
  --attribute-names QueueArn \
  --query 'Attributes.QueueArn' --output text)

awslocal sqs set-queue-attributes \
  --queue-url http://localhost:4566/000000000000/notification-queue \
  --attributes "{
    \"RedrivePolicy\": \"{\\\"deadLetterTargetArn\\\":\\\"${DLQ_ARN}\\\",\\\"maxReceiveCount\\\":\\\"3\\\"}\"
  }"

Updated Lambda — Write to S3 + SQS

// extended-handler.js
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { SQSClient, SendMessageBatchCommand } = require('@aws-sdk/client-sqs');

const REGION = process.env.AWS_DEFAULT_REGION || 'us-east-1';
const ENDPOINT = process.env.AWS_ENDPOINT || undefined;

const s3 = new S3Client({
  region: REGION,
  endpoint: ENDPOINT,
  forcePathStyle: true,
});

const sqs = new SQSClient({
  region: REGION,
  endpoint: ENDPOINT,
});

const BUCKET = process.env.S3_BUCKET || 'processed-events';
const QUEUE_URL = process.env.SQS_QUEUE_URL
  || 'http://localhost:4566/000000000000/notification-queue';

exports.handler = async (event) => {
  console.log(`Processing ${event.Records.length} records`);

  const sqsEntries = [];

  for (const [index, record] of event.Records.entries()) {
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf-8');
    const data = JSON.parse(payload);

    // 1. Save to S3
    const key = `events/${data.eventType}/${Date.now()}-${index}.json`;
    await s3.send(new PutObjectCommand({
      Bucket: BUCKET,
      Key: key,
      Body: JSON.stringify({ ...data, processedAt: new Date().toISOString() }),
      ContentType: 'application/json',
    }));

    // 2. Queue notification
    sqsEntries.push({
      Id: String(index),
      MessageBody: JSON.stringify({
        type: 'EVENT_PROCESSED',
        s3Key: key,
        eventType: data.eventType,
        userId: data.userId,
        timestamp: new Date().toISOString(),
      }),
    });
  }

  // Send SQS messages in batches of 10 (SQS limit)
  for (let i = 0; i < sqsEntries.length; i += 10) {
    const batch = sqsEntries.slice(i, i + 10);
    await sqs.send(new SendMessageBatchCommand({
      QueueUrl: QUEUE_URL,
      Entries: batch,
    }));
    console.log(`Queued ${batch.length} notification messages`);
  }

  return { statusCode: 200, processed: event.Records.length };
};

Create a Notification Consumer Lambda

// notification-handler.js
exports.handler = async (event) => {
  for (const record of event.Records) {
    const message = JSON.parse(record.body);
    console.log(`Notification: Event "${message.eventType}" processed for user ${message.userId}`);
    console.log(`  S3 location: s3://processed-events/${message.s3Key}`);
    // In production: send email, push notification, webhook, etc.
  }

  return { statusCode: 200, processed: event.Records.length };
};

Deploy and wire it:

# Deploy notification consumer
zip notification.zip notification-handler.js

awslocal lambda create-function \
  --function-name notification-consumer \
  --runtime nodejs20.x \
  --zip-file fileb://notification.zip \
  --handler notification-handler.handler \
  --role arn:aws:iam::000000000000:role/pipeline-lambda-role \
  --timeout 30

# Wire SQS → Lambda
awslocal lambda create-event-source-mapping \
  --function-name notification-consumer \
  --event-source-arn arn:aws:sqs:us-east-1:000000000000:notification-queue \
  --batch-size 10

9. Monitoring Everything with CloudWatch

Viewing Lambda Logs

Every Lambda function automatically logs to CloudWatch:

# List all log groups
awslocal logs describe-log-groups --query 'logGroups[].logGroupName'

# Output:
# [
#   "/aws/lambda/kinesis-to-s3-processor",
#   "/aws/lambda/notification-consumer"
# ]

Create a Simple Log Viewer Script

#!/bin/bash
# view-logs.sh — View logs for any Lambda function

FUNCTION_NAME=${1:-"kinesis-to-s3-processor"}
LOG_GROUP="/aws/lambda/$FUNCTION_NAME"
ENDPOINT="http://localhost:4566"

echo "=== Logs for $FUNCTION_NAME ==="

# Get latest log stream
STREAM=$(aws --endpoint-url=$ENDPOINT logs describe-log-streams \
  --log-group-name "$LOG_GROUP" \
  --order-by LastEventTime \
  --descending \
  --query 'logStreams[0].logStreamName' \
  --output text 2>/dev/null)

if [ "$STREAM" = "None" ] || [ -z "$STREAM" ]; then
  echo "No log streams found."
  exit 0
fi

# Fetch events
aws --endpoint-url=$ENDPOINT logs get-log-events \
  --log-group-name "$LOG_GROUP" \
  --log-stream-name "$STREAM" \
  --query 'events[].message' \
  --output text

Usage:

chmod +x view-logs.sh
./view-logs.sh kinesis-to-s3-processor
./view-logs.sh notification-consumer

Custom CloudWatch Metrics (Advanced)

# Push a custom metric
awslocal cloudwatch put-metric-data \
  --namespace "MyApp/Pipeline" \
  --metric-name "EventsProcessed" \
  --value 42 \
  --unit Count

# Query metrics (-v-1H is BSD/macOS date syntax;
# on GNU/Linux use: date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
awslocal cloudwatch get-metric-statistics \
  --namespace "MyApp/Pipeline" \
  --metric-name "EventsProcessed" \
  --start-time "$(date -u -v-1H +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Sum

10. AWS SDK Integration (Node.js / TypeScript)

In a real application, you interact with LocalStack through the AWS SDK — not the CLI. Here's how to set up clients that work against both LocalStack and production AWS.

Pattern: Environment-Aware Client Factory

// aws-clients.ts
import { S3Client, S3ClientConfig } from '@aws-sdk/client-s3';
import { SQSClient } from '@aws-sdk/client-sqs';
import { KinesisClient } from '@aws-sdk/client-kinesis';
import { LambdaClient } from '@aws-sdk/client-lambda';
import { CloudWatchLogsClient } from '@aws-sdk/client-cloudwatch-logs';

const REGION = process.env.AWS_DEFAULT_REGION || 'us-east-1';
const ENDPOINT = process.env.AWS_ENDPOINT; // Set only for LocalStack

// S3 — Note forcePathStyle is critical for LocalStack
export function createS3Client(): S3Client {
  const config: S3ClientConfig = { region: REGION };
  if (ENDPOINT) {
    config.endpoint = ENDPOINT;
    config.forcePathStyle = true; // ← Required for LocalStack!
  }
  return new S3Client(config);
}

// SQS
export function createSQSClient(): SQSClient {
  return new SQSClient({
    region: REGION,
    ...(ENDPOINT && { endpoint: ENDPOINT }),
  });
}

// Kinesis
export function createKinesisClient(): KinesisClient {
  return new KinesisClient({
    region: REGION,
    ...(ENDPOINT && { endpoint: ENDPOINT }),
    retryMode: 'adaptive', // Better throttle handling
  });
}

// Lambda
export function createLambdaClient(): LambdaClient {
  return new LambdaClient({
    region: REGION,
    ...(ENDPOINT && { endpoint: ENDPOINT }),
  });
}

// CloudWatch Logs
export function createCloudWatchLogsClient(): CloudWatchLogsClient {
  return new CloudWatchLogsClient({
    region: REGION,
    ...(ENDPOINT && { endpoint: ENDPOINT }),
  });
}

Pattern: Singleton Client (Performance — Lambda Best Practice)

// In Lambda, reuse clients across invocations (connection pooling)
let s3Client: S3Client | null = null;

export function getS3Client(): S3Client {
  if (!s3Client) {
    s3Client = createS3Client();
  }
  return s3Client;
}
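The same idea generalizes to every client type with a tiny lazy-init wrapper (lazy is an illustrative helper, not an SDK utility):

```javascript
// lazy(factory) returns a getter that constructs the value once and
// caches it — the generic form of getS3Client above.
function lazy(factory) {
  let instance = null;
  return () => (instance ??= factory());
}

// Demonstrate that the factory runs exactly once across calls.
let constructed = 0;
const getClient = lazy(() => { constructed += 1; return { id: constructed }; });
getClient();
getClient();
console.log(constructed); // → 1
```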

Pattern: S3 ETag-Based Caching

Avoid unnecessary S3 downloads by checking if the object changed:

import { HeadObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';

class ConfigLoader {
  private etag: string | null = null;
  private cachedConfig: any = null;

  async loadConfig(bucket: string, key: string): Promise<any> {
    const s3 = getS3Client();

    // Check ETag (lightweight HEAD request)
    const head = await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));

    if (head.ETag === this.etag && this.cachedConfig) {
      console.log('Config unchanged, using cache');
      return this.cachedConfig;
    }

    // Config changed — download it
    const response = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    const body = await response.Body?.transformToString();
    this.cachedConfig = JSON.parse(body!);
    this.etag = head.ETag!;

    return this.cachedConfig;
  }
}

Pattern: SQS Batch Publisher with Chunking

import { SendMessageBatchCommand, SendMessageBatchRequestEntry } from '@aws-sdk/client-sqs';

async function publishBatch(
  messages: Array<{ id: string; body: object; groupId?: string }>,
  queueUrl: string
): Promise<void> {
  const sqs = createSQSClient();

  // SQS allows max 10 messages per batch
  const chunks = chunkArray(messages, 10);

  for (const chunk of chunks) {
    const entries: SendMessageBatchRequestEntry[] = chunk.map((msg) => ({
      Id: msg.id,
      MessageBody: JSON.stringify(msg.body),
      ...(msg.groupId && {
        MessageGroupId: msg.groupId,
        MessageDeduplicationId: `${msg.id}-${Date.now()}`,
      }),
    }));

    const result = await sqs.send(new SendMessageBatchCommand({
      QueueUrl: queueUrl,
      Entries: entries,
    }));

    if (result.Failed?.length) {
      console.error('Failed to send:', result.Failed);
    }
  }
}

function chunkArray<T>(arr: T[], size: number): T[][] {
  return Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
    arr.slice(i * size, i * size + size)
  );
}

11. Docker Compose for the Full Stack

Here's a production-grade docker-compose.yml that sets up LocalStack alongside your application:

version: "3.8"

# Reusable AWS configuration
x-common-aws-config: &common-aws-config
  AWS_ACCESS_KEY_ID: test
  AWS_SECRET_ACCESS_KEY: test
  AWS_DEFAULT_REGION: us-east-1
  AWS_ENDPOINT: http://localstack:4566

services:
  # ──────────────────────────────────────────
  # LocalStack — AWS emulator
  # ──────────────────────────────────────────
  localstack:
    image: localstack/localstack:3.8
    container_name: localstack
    ports:
      - "4566:4566"         # Gateway (all services)
    environment:
      <<: *common-aws-config
      SERVICES: kinesis,s3,sqs,iam,lambda,logs,ssm,cloudwatch,events
      PERSISTENCE: "1"                           # Survive restarts
      LAMBDA_KEEPALIVE_MS: 60000                 # Reuse Lambda containers
    volumes:
      - localstack-data:/var/lib/localstack
      - /var/run/docker.sock:/var/run/docker.sock  # Required for Lambda
    healthcheck:
      test: >
        curl -sf http://localhost:4566/_localstack/health || exit 1
      interval: 3s
      timeout: 5s
      retries: 15
      start_period: 5s

  # ──────────────────────────────────────────
  # Init container — creates all AWS resources
  # ──────────────────────────────────────────
  localstack-init:
    image: amazon/aws-cli:latest
    container_name: localstack-init
    depends_on:
      localstack:
        condition: service_healthy
    environment:
      <<: *common-aws-config
      AWS_ENDPOINT_URL: http://localstack:4566
    entrypoint: ["/bin/bash", "/scripts/init.sh"]
    volumes:
      - ./scripts/init-localstack.sh:/scripts/init.sh:ro

  # ──────────────────────────────────────────
  # Your application
  # ──────────────────────────────────────────
  app:
    build: .
    depends_on:
      localstack-init:
        condition: service_completed_successfully
    environment:
      <<: *common-aws-config
      S3_BUCKET: processed-events
      KINESIS_STREAM: event-pipeline
      SQS_QUEUE_URL: http://localstack:4566/000000000000/notification-queue

volumes:
  localstack-data:

Key Notes

| Setting | Purpose |
| --- | --- |
| `PERSISTENCE: "1"` | Resources survive container restarts |
| `LAMBDA_KEEPALIVE_MS: 60000` | Reuses Lambda containers (faster warm starts) |
| `docker.sock` volume | Required for Lambda execution (spawns sibling containers) |
| `x-common-aws-config` | YAML anchor avoids repeating credentials everywhere |
| `service_healthy` | Init script waits for LocalStack to be fully ready |
| `service_completed_successfully` | App waits for all resources to be created |

12. Initialization Scripts — Automating Resource Creation

Create a script that runs once when LocalStack starts, setting up all your AWS resources:

#!/bin/bash
# scripts/init-localstack.sh
set -euo pipefail

ENDPOINT="http://localstack:4566"

echo "⏳ Waiting for LocalStack..."
until curl -sf "$ENDPOINT/_localstack/health" > /dev/null 2>&1; do
  sleep 2
done
echo "✅ LocalStack is ready"

# ── Kinesis Streams ──────────────────────────
echo "Creating Kinesis streams..."
aws --endpoint-url "$ENDPOINT" kinesis create-stream \
  --stream-name event-pipeline \
  --shard-count 1

aws --endpoint-url "$ENDPOINT" kinesis wait stream-exists \
  --stream-name event-pipeline

echo "✅ Kinesis: event-pipeline"

# ── S3 Buckets ───────────────────────────────
echo "Creating S3 buckets..."
for BUCKET in processed-events app-config deployment-artifacts; do
  aws --endpoint-url "$ENDPOINT" s3 mb "s3://$BUCKET" 2>/dev/null || true
  echo "  ✅ s3://$BUCKET"
done

# CORS for buckets accessed by frontend
aws --endpoint-url "$ENDPOINT" s3api put-bucket-cors \
  --bucket processed-events \
  --cors-configuration '{
    "CORSRules": [{
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
      "AllowedOrigins": ["*"],
      "ExposeHeaders": ["ETag"]
    }]
  }'

# ── SQS Queues ───────────────────────────────
echo "Creating SQS queues..."

# Dead Letter Queue first
aws --endpoint-url "$ENDPOINT" sqs create-queue \
  --queue-name notification-dlq

# Main queue with DLQ
DLQ_ARN="arn:aws:sqs:us-east-1:000000000000:notification-dlq"
aws --endpoint-url "$ENDPOINT" sqs create-queue \
  --queue-name notification-queue \
  --attributes "{
    \"RedrivePolicy\": \"{\\\"deadLetterTargetArn\\\":\\\"${DLQ_ARN}\\\",\\\"maxReceiveCount\\\":\\\"3\\\"}\"
  }"

# FIFO queue for ordered processing
aws --endpoint-url "$ENDPOINT" sqs create-queue \
  --queue-name processing-queue.fifo \
  --attributes '{
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true"
  }'

echo "✅ SQS queues created"

# ── IAM Roles ────────────────────────────────
echo "Creating IAM roles..."
aws --endpoint-url "$ENDPOINT" iam create-role \
  --role-name lambda-execution-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }' 2>/dev/null || true

aws --endpoint-url "$ENDPOINT" iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole \
  2>/dev/null || true

echo "✅ IAM roles created"

# ── SSM Parameters ───────────────────────────
echo "Storing SSM parameters..."
declare -A PARAMS=(
  ["/myapp/local/s3/bucket"]="processed-events"
  ["/myapp/local/kinesis/stream-arn"]="arn:aws:kinesis:us-east-1:000000000000:stream/event-pipeline"
  ["/myapp/local/sqs/notification-queue-url"]="http://localstack:4566/000000000000/notification-queue"
)

for KEY in "${!PARAMS[@]}"; do
  aws --endpoint-url "$ENDPOINT" ssm put-parameter \
    --name "$KEY" \
    --value "${PARAMS[$KEY]}" \
    --type String \
    --overwrite 2>/dev/null || true
  echo "  ✅ $KEY"
done

# ── Upload seed config to S3 ────────────────
echo '{"version":"1.0","features":{"darkMode":true}}' | \
  aws --endpoint-url "$ENDPOINT" s3 cp - s3://app-config/config.json

echo ""
echo "🎉 All LocalStack resources initialized!"
echo "   Kinesis: event-pipeline (1 shard)"
echo "   S3:      processed-events, app-config, deployment-artifacts"
echo "   SQS:     notification-queue (+ DLQ), processing-queue.fifo"
echo "   IAM:     lambda-execution-role"
echo "   SSM:     3 parameters under /myapp/local/"

Pro tip: Use 2>/dev/null || true on creation commands so the script is idempotent — re-running it won't fail if resources already exist.
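An alternative to the separate init container: LocalStack's built-in init hooks. Scripts mounted into `/etc/localstack/init/ready.d` run inside the LocalStack container once it reports ready — note that the script then talks to `http://localhost:4566` (or just uses the pre-installed `awslocal`) instead of `http://localstack:4566`. A sketch:

```yaml
services:
  localstack:
    image: localstack/localstack:3.8
    volumes:
      # ready.d scripts execute once LocalStack is up — no init container needed
      - ./scripts/init-localstack.sh:/etc/localstack/init/ready.d/init.sh:ro
```

The init-container approach shown above keeps the script's environment closer to your app's; the hook approach is less YAML. Either works.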


13. Advanced Patterns

Lambda Aliases for Zero-Downtime Deployment

In production, you'd use aliases to switch between Lambda versions atomically:

# Publish a version
VERSION=$(awslocal lambda publish-version \
  --function-name event-processor \
  --query 'Version' --output text)

# Create the alias on the first deploy, or repoint it on later deploys
awslocal lambda create-alias \
  --function-name event-processor \
  --name live \
  --function-version "$VERSION" 2>/dev/null \
|| awslocal lambda update-alias \
  --function-name event-processor \
  --name live \
  --function-version "$VERSION"

# Point event source mapping to the alias
awslocal lambda create-event-source-mapping \
  --function-name "event-processor:live" \
  --event-source-arn arn:aws:kinesis:us-east-1:000000000000:stream/event-pipeline \
  --batch-size 10 \
  --starting-position LATEST

When deploying a new version, update the alias — the event source mapping automatically picks it up.

EventBridge Scheduled Rules

Trigger Lambda on a cron schedule:

# Create a rule that fires every minute
awslocal events put-rule \
  --name every-minute-rule \
  --schedule-expression "rate(1 minute)"

# Add Lambda as target
awslocal events put-targets \
  --rule every-minute-rule \
  --targets '[{
    "Id": "1",
    "Arn": "arn:aws:lambda:us-east-1:000000000000:function:event-processor"
  }]'

# Grant EventBridge permission to invoke Lambda
awslocal lambda add-permission \
  --function-name event-processor \
  --statement-id eventbridge-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:000000000000:rule/every-minute-rule

Lambda Failure Destinations (S3)

Capture failed Lambda invocations for debugging:

awslocal s3 mb s3://lambda-failures

awslocal lambda put-function-event-invoke-config \
  --function-name event-processor \
  --maximum-retry-attempts 2 \
  --destination-config '{
    "OnFailure": {
      "Destination": "arn:aws:s3:::lambda-failures"
    }
  }'

Docker Networking Gotcha

Code running inside a Lambda container can't reach LocalStack via localhost — that resolves to the Lambda container itself. The init script may need to rewrite the URLs LocalStack hands out:

# LocalStack returns URLs like:
#   http://sqs.localhost.localstack.cloud:4566/000000000000/my-queue
#
# But Lambda containers need:
#   http://localstack:4566/000000000000/my-queue

QUEUE_URL_RAW=$(aws --endpoint-url "$ENDPOINT" sqs get-queue-url \
  --queue-name my-queue --query 'QueueUrl' --output text)

QUEUE_URL=$(echo "$QUEUE_URL_RAW" | \
  sed 's|http://sqs\.[^/]*localhost\.localstack\.cloud:[0-9]*/|http://localstack:4566/|')
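If the rewrite needs to happen in application code instead — say, a queue URL fetched at runtime — the same substitution in TypeScript (a hypothetical helper mirroring the sed above, not part of LocalStack):

```typescript
// Rewrite LocalStack's advertised SQS hostname to the Docker-network
// alias that app/Lambda containers can actually resolve. URLs that
// don't match the LocalStack pattern pass through unchanged.
function rewriteQueueUrl(url: string, containerHost = 'localstack'): string {
  return url.replace(
    /^https?:\/\/sqs\.[^/]*localhost\.localstack\.cloud:(\d+)\//,
    `http://${containerHost}:$1/`
  );
}
```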

Serverless Framework + LocalStack

If you use the Serverless Framework, the serverless-localstack plugin makes deployment seamless:

# serverless.yml
plugins:
  - serverless-esbuild
  - serverless-localstack

custom:
  localstack:
    stages:
      - local
    host: http://localstack
    edgePort: 4566
    autostart: false
    endpoints:
      S3: http://localstack:4566
      Lambda: http://localstack:4566
      Kinesis: http://localstack:4566
      SQS: http://localstack:4566
      IAM: http://localstack:4566
      CloudFormation: http://localstack:4566
      CloudWatchLogs: http://localstack:4566

Deploy to LocalStack:

npx serverless deploy --stage local

14. Troubleshooting Common Issues

"Could not connect to the endpoint URL"

# Check if LocalStack is running
docker ps | grep localstack

# Check health
curl http://localhost:4566/_localstack/health

# Check logs
docker logs localstack

S3 "bucket does not exist" with valid bucket name

You're probably missing forcePathStyle:

// WRONG — SDK uses virtual-hosted style: http://bucket.localhost:4566
const s3 = new S3Client({ endpoint: 'http://localhost:4566' });

// CORRECT — SDK uses path style: http://localhost:4566/bucket
const s3 = new S3Client({
  endpoint: 'http://localhost:4566',
  forcePathStyle: true,  // ← THIS
});

Lambda "Docker not available" error

Mount the Docker socket in your docker-compose.yml:

volumes:
  - /var/run/docker.sock:/var/run/docker.sock

Lambda can't reach other Docker services

LocalStack spawns Lambda containers on Docker's default bridge network unless told otherwise. If your Lambda needs to reach a database or API in another container, put the services on a shared network and point LAMBDA_DOCKER_NETWORK at it:

# docker-compose.yml
services:
  localstack:
    environment:
      # Attach spawned Lambda containers to this network.
      # Note: Compose prefixes network names with the project name,
      # so the actual name may be something like <project>_backend.
      LAMBDA_DOCKER_NETWORK: backend
    networks:
      - backend

  my-database:
    networks:
      - backend

networks:
  backend:
    driver: bridge

SQS URL format issues

LocalStack SQS URLs can vary. Always use the format:

http://localstack:4566/000000000000/queue-name  # Inside Docker
http://localhost:4566/000000000000/queue-name    # From host machine
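If you build queue URLs in code, a tiny helper keeps the two contexts straight (the helper name is illustrative; `000000000000` is LocalStack's default account ID):

```typescript
// Build a LocalStack SQS queue URL for a given network context:
// 'localhost' from the host machine, 'localstack' from inside Docker.
function localQueueUrl(queueName: string, host = 'localhost'): string {
  return `http://${host}:4566/000000000000/${queueName}`;
}
```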

Data doesn't persist after restart

Set PERSISTENCE=1 and use a named volume:

environment:
  PERSISTENCE: "1"
volumes:
  - localstack-data:/var/lib/localstack

Slow Kinesis performance

Enable the Scala engine:

environment:
  KINESIS_MOCK_PROVIDER_ENGINE: scala

15. Free vs Pro — What You Actually Need

| Feature | Community (Free) | Pro |
| --- | --- | --- |
| S3, SQS, SNS, Kinesis | Yes | Yes |
| Lambda, IAM, CloudWatch | Yes | Yes |
| DynamoDB, SSM, EventBridge | Yes | Yes |
| Persistence | Yes | Yes |
| IAM policy enforcement | No | Yes |
| Cognito, RDS, ECS, EKS | No | Yes |
| Cloud Pods (state snapshots) | No | Yes |
| CI analytics dashboard | No | Yes |

My recommendation: Start with Community. It covers 90% of typical use cases. Upgrade to Pro only if you need a specific service (like RDS or Cognito) or IAM enforcement.


16. Conclusion

What We Built

┌─────────────┐     ┌─────────────┐     ┌──────────────┐     ┌─────────────┐
│ Application │────▶│   Kinesis   │────▶│    Lambda    │────▶│  S3 Bucket  │
│  (Events)   │     │   Stream    │     │ (Processor)  │     │  (Storage)  │
└─────────────┘     └─────────────┘     └──────┬───────┘     └──────┬──────┘
                                               │                    │
                                               ▼                    ▼
                                       ┌───────────────┐    ┌────────────────┐
                                       │   SQS Queue   │    │ Frontend / API │
                                       │ + Dead Letter │    │ (reads from S3)│
                                       └───────┬───────┘    └────────────────┘
                                               │
                                               ▼
                                       ┌────────────────┐
                                       │     Lambda     │
                                       │ (Notification) │
                                       └───────┬────────┘
                                               │
                                               ▼
                                       ┌────────────────┐
                                       │   CloudWatch   │
                                       │   (All Logs)   │
                                       └────────────────┘

We covered:

  • Installation on Mac, Windows, and Linux
  • Individual service setup — S3, SQS, Kinesis, Lambda, IAM, CloudWatch, SSM
  • A complete pipeline — events flow from Kinesis through Lambda into S3 and SQS
  • SDK integration — TypeScript patterns that work against both LocalStack and real AWS
  • Docker Compose — a production-grade setup with health checks, init scripts, and persistence
  • Advanced patterns — aliases, EventBridge cron, failure destinations, Serverless Framework
  • Troubleshooting — the issues you will actually hit

The key mindset shift: treat LocalStack as your local AWS account. Same APIs, same SDKs, same CLI — just no bills and instant feedback.


Have questions? Drop them in the comments — I'll do my best to answer. If this saved you time or money, a like helps others find it too.


All code examples in this article are available as standalone scripts you can copy-paste and run. No AWS account required.
