In the last post, we broke the news: AWS App Runner is sunsetting. No new customers after April 30, 2026. If you were one of the developers who loved App Runner's simplicity, you probably felt a pang of dread. Where do you go now?
The answer is ECS Express Mode, and after using it, we think it's actually better than App Runner ever was.
This guide walks you through deploying your first containerized web application with ECS Express Mode from scratch. By the end, you'll have a running service with an auto-provisioned load balancer, auto-scaling, HTTPS, and monitoring — all in under 10 minutes.
What Is ECS Express Mode?
ECS Express Mode is a new capability within Amazon ECS (Elastic Container Service) that dramatically simplifies container deployment. Think of it as the spiritual successor to AWS App Runner — but built directly into ECS, giving you both simplicity and the full power of the ECS ecosystem.
Here's what Express Mode does automatically when you deploy:
| What It Provisions | What You'd Otherwise Need to Do |
|---|---|
| ECS Cluster | Create a cluster with Fargate capacity providers |
| ECS Service | Define service configuration, deployment strategy |
| Application Load Balancer | Create ALB, listener, target group, health check |
| Security Groups | Configure inbound/outbound rules for ALB and tasks |
| Auto-Scaling | Set up Application Auto Scaling targets and policies |
| CloudWatch Monitoring | Configure log groups, metrics, alarms |
| Networking | Set up VPC subnets, route tables, internet gateway |
That's 6–8 AWS console pages reduced to a single form. This is not a wrapper with limited features — Express Mode creates real, standard ECS resources that you can inspect, modify, and extend at any time.
The Key Differentiator: Shared ALBs
One of the most impactful features is shared Application Load Balancers. In standard ECS, every service typically needs its own ALB — that's ~$16/month per service, just for the load balancer.
ECS Express Mode can share a single ALB across up to 25 services using path-based or host-based routing. For a startup running 5 services, that's a savings of ~$64/month just in ALB costs.
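The arithmetic behind that figure, as a quick shell sketch (using the ~$16/mo base ALB rate from above; LCU charges excluded):

```shell
# Back-of-envelope ALB savings for 5 services sharing one load balancer.
# ~$16/mo per ALB comes from the $0.0225/hr base rate; LCU charges excluded.
SERVICES=5
ALB_MONTHLY=16
dedicated=$((SERVICES * ALB_MONTHLY))   # one ALB per service (standard ECS pattern)
shared=$ALB_MONTHLY                     # one shared ALB (Express Mode)
echo "Dedicated ALBs: \$${dedicated}/mo"
echo "Shared ALB:     \$${shared}/mo"
echo "Savings:        \$$((dedicated - shared))/mo"
```

The savings grow linearly with each service you add behind the shared ALB, up to the 25-service limit.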
ECS Express Mode vs Standard ECS: When to Use What
| Feature | Express Mode | Standard ECS |
|---|---|---|
| Setup time | 5–10 minutes | 30–60 minutes |
| ALB provisioning | Automatic (shared) | Manual (dedicated) |
| Security groups | Auto-configured | Manual configuration |
| Auto-scaling | Pre-configured policies | Manual setup |
| CloudWatch | Auto-enabled | Manual log groups/metrics |
| VPC networking | Uses defaults or existing | Full manual control |
| Customizability | Moderate (can modify after creation) | Full |
| Pricing | Same Fargate + ALB pricing | Same Fargate + ALB pricing |
| Best for | Most web services, APIs, backends | Custom networking, multi-AZ, advanced configs |
Our recommendation: Start with Express Mode for everything. If you hit a limitation, you can always "eject" to standard ECS by modifying the underlying resources directly. You're not locked in.
Prerequisites
Before you start, make sure you have:
1. An AWS Account
If you don't have one, create a free account. New accounts come with free-tier credits, which should be enough to run this tutorial at little or no cost.
2. AWS CLI Installed (Optional, for CLI Path)
# macOS
brew install awscli
# Verify
aws --version
# aws-cli/2.x.x ...
3. Docker Installed (for Building Images)
# macOS
brew install --cask docker
# Verify
docker --version
# Docker version 27.x.x ...
4. A Container Image
You need a Docker image to deploy. You can either:
- Use a public image for testing (we'll use `public.ecr.aws/docker/library/nginx:latest`)
- Build your own from your app's Dockerfile
For this guide, we'll show both paths.
The Deployment Flow
Here's the complete path from code to running service:
Step 1: Prepare Your Container Image
Option A: Use a Public Image (Quickest)
Skip ahead to Step 2 and use public.ecr.aws/docker/library/nginx:latest as your image URI. This is the fastest way to test Express Mode.
Option B: Build and Push Your Own Image
Let's say you have a simple Node.js API. First, create a Dockerfile:
# Use official Node.js LTS image
FROM node:20-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies (production only; --omit=dev replaces the deprecated --only=production)
RUN npm ci --omit=dev
# Copy application code
COPY . .
# Expose the port your app runs on
EXPOSE 3000
# Health check (Express Mode uses this)
HEALTHCHECK --interval=30s --timeout=3s \
CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
# Start the application
CMD ["node", "server.js"]
Now build and push to Amazon ECR:
# Set your variables
AWS_REGION=us-east-1
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REPO_NAME=my-web-app
# Create ECR repository (if it doesn't exist)
aws ecr create-repository \
--repository-name $REPO_NAME \
--region $AWS_REGION \
--image-scanning-configuration scanOnPush=true
# Authenticate Docker with ECR
aws ecr get-login-password --region $AWS_REGION | \
docker login --username AWS --password-stdin \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
# Build the image
docker build -t $REPO_NAME .
# Tag the image
docker tag $REPO_NAME:latest \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:latest
# Push to ECR
docker push \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:latest
💡 Tip: If you're on an Apple Silicon Mac (M1/M2/M3), add `--platform linux/amd64` to your `docker build` command to ensure compatibility with Fargate's x86 environment. Or build for `linux/arm64` and select ARM/Graviton in Express Mode for ~20% cost savings.
Save the full image URI — you'll need it in the next step:
<your-account-id>.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
Step 2: Open the ECS Console and Select Express Mode
- Go to the Amazon ECS Console
- In the left navigation panel, click Express mode
- Click Create
You'll see a simplified form — much less intimidating than the standard ECS service creation page. This is by design.
Step 3: Configure Your Service
Fill in the Express Mode creation form:
Basic Configuration:
| Field | Value | Notes |
|---|---|---|
| Service name | `my-web-app` | Lowercase, hyphens allowed |
| Container image URI | Your ECR URI or a public image | e.g., `public.ecr.aws/docker/library/nginx:latest` |
| Port | `3000` (or `80` for nginx) | The port your app listens on |
| CPU | 0.25 vCPU | Start small; you can scale later |
| Memory | 0.5 GB | Start with 512 MB for most web apps |
Environment Variables (optional):
Click "Add environment variable" to add any your app needs:
NODE_ENV=production
DATABASE_URL=your-connection-string
API_KEY=your-key
⚠️ Security note: For sensitive values like database credentials and API keys, use AWS Secrets Manager or SSM Parameter Store instead of plain-text environment variables. Express Mode supports both through the "valueFrom" syntax.
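As a sketch of the Secrets Manager route (the secret name and connection string below are placeholders), you would create the secret once and reference its ARN from the service:

```shell
# Store a secret once; reference its ARN from the service instead of a plain
# env var. Name and value here are placeholders for illustration.
aws secretsmanager create-secret \
  --name my-web-app/database-url \
  --secret-string 'postgres://user:pass@host:5432/db' \
  --query ARN --output text
# In Express Mode, supply the returned ARN as the "valueFrom" for the
# DATABASE_URL variable. The task execution role also needs
# secretsmanager:GetSecretValue permission on that ARN.
```

The value never appears in your task definition; ECS injects it at container start.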
Scaling Configuration:
| Field | Value | Notes |
|---|---|---|
| Desired count | 1 | Start with 1 task for testing |
| Min tasks | 1 | Minimum for auto-scaling |
| Max tasks | 4 | Maximum for auto-scaling |
Step 4: Create and Wait
Click Create. Express Mode will now:
- ✅ Create an ECS cluster (if one doesn't exist)
- ✅ Register a task definition with your image and configuration
- ✅ Provision (or reuse) an Application Load Balancer
- ✅ Create a target group and listener rules
- ✅ Configure security groups for both ALB and tasks
- ✅ Set up CloudWatch log group and Container Insights
- ✅ Launch your Fargate task(s)
- ✅ Configure auto-scaling policies
This takes about 5–6 minutes. You'll see the status go from "Provisioning" → "Running."
Step 5: Access Your Application
Once the status shows Running:
- In the ECS console, click on your service
- Go to the Networking tab
- Find the Load balancer DNS name — it'll look like:
my-web-app-alb-123456789.us-east-1.elb.amazonaws.com
- Open that URL in your browser — your app is live! 🎉
To add a custom domain: Create a CNAME record in your DNS provider pointing your domain to the ALB DNS name. For HTTPS, attach an AWS Certificate Manager (ACM) certificate to the ALB listener.
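A sketch of the HTTPS half using the ACM CLI (the domain is a placeholder; the exact listener steps depend on your ALB's current configuration):

```shell
# Request a public certificate for your domain with DNS validation.
# "app.example.com" is a placeholder; use your real domain.
aws acm request-certificate \
  --domain-name app.example.com \
  --validation-method DNS \
  --query CertificateArn --output text
# Add the DNS validation CNAME that ACM gives you, wait for ISSUED status,
# then attach the certificate to the ALB's HTTPS (443) listener via the
# EC2 console or `aws elbv2 create-listener` / `modify-listener`.
```

DNS validation renews automatically, so this is a one-time setup per domain.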
What Just Happened Under the Hood?
Express Mode created real, standard AWS resources. Let's peek behind the curtain:
Your AWS Account
├── ECS Cluster: "default" (or new)
│ └── Service: "my-web-app"
│ └── Task: running on Fargate
│ └── Container: your image
├── Application Load Balancer (shared)
│ ├── Listener (port 80/443)
│ └── Target Group → your tasks
├── Security Groups
│ ├── ALB SG (allows 80/443 inbound)
│ └── Task SG (allows traffic from ALB SG only)
├── CloudWatch
│ ├── Log Group: /ecs/my-web-app
│ └── Container Insights: enabled
├── IAM Roles
│ ├── Task Execution Role (pulls images, writes logs)
│ └── Task Role (your app's AWS permissions)
└── Auto Scaling
├── Target: ECS service
└── Policy: target tracking (CPU/memory-based)
Every one of these resources is visible in your AWS console. You can modify, extend, or even detach them from Express Mode if you need full control. This is the fundamental difference from App Runner — you're not using a black box.
Deploying Updates
After your initial deployment, updating your app is straightforward:
Via Console
- Build and push a new image to ECR with a new tag (or `:latest`)
- Go to your ECS service in the console
- Click Update
- Select the new task definition revision (or force a new deployment if using `:latest`)
- ECS performs a rolling update with no downtime
Via CLI (Faster)
# Force a new deployment (re-pulls :latest image)
aws ecs update-service \
--cluster default \
--service my-web-app \
--force-new-deployment
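To watch a deployment finish from the CLI, something like this works (assuming the default cluster and service name used in this guide):

```shell
# Block until the rolling update settles, then confirm task counts
aws ecs wait services-stable --cluster default --services my-web-app
aws ecs describe-services --cluster default --services my-web-app \
  --query 'services[0].{status:status,running:runningCount,desired:desiredCount}'
```

`wait services-stable` exits non-zero on timeout, which makes it handy as a deployment gate in CI.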
Via GitHub Actions (Best for Production)
Here's a minimal GitHub Actions workflow that deploys on every push to main:
name: Deploy to ECS Express
on:
push:
branches: [main]
permissions:
id-token: write
contents: read
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-deploy
aws-region: us-east-1
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v2
- name: Build, tag, and push image
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
IMAGE_TAG: ${{ github.sha }}
run: |
docker build -t $ECR_REGISTRY/my-web-app:$IMAGE_TAG .
docker push $ECR_REGISTRY/my-web-app:$IMAGE_TAG
- name: Update ECS service
run: |
aws ecs update-service \
--cluster default \
--service my-web-app \
--force-new-deployment
💡 Pro tip: Use OIDC (OpenID Connect) instead of storing AWS access keys in GitHub Secrets. It's more secure and is the recommended approach in 2026. The `role-to-assume` parameter above uses OIDC.
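If you haven't set up OIDC in your account yet, the one-time provider registration looks roughly like this (the role name and trust-policy details are up to you, and older CLI versions may also require `--thumbprint-list`):

```shell
# One-time setup: register GitHub's OIDC provider so workflows can assume
# a role without long-lived access keys
aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com
# Then create the github-actions-deploy role with a trust policy that limits
# sts:AssumeRoleWithWebIdentity to your repository, e.g. a condition on
# token.actions.githubusercontent.com:sub = "repo:your-org/your-repo:*"
```

Scoping the trust policy to a single repository (and ideally a single branch) keeps a compromised fork from assuming your deploy role.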
Cost Breakdown: What You'll Actually Pay
ECS Express Mode doesn't add any surcharge — you pay standard AWS pricing:
For the Setup in This Tutorial (0.25 vCPU, 0.5 GB, 1 Task)
Fargate Compute:
vCPU: 0.25 × $0.000011244/sec × 2,592,000 sec/month = $7.29/mo
Memory: 0.5 × $0.000001235/sec × 2,592,000 sec/month = $1.60/mo
Subtotal: $8.89/mo
Application Load Balancer:
Hourly: $0.0225/hr × 720 hrs = $16.20/mo
LCU: ~$1–3/mo (minimal traffic)
Subtotal: ~$17–19/mo
CloudWatch Logs: ~$0.50–1/mo
Total: ~$27–29/month
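You can sanity-check the Fargate line items yourself (per-second rates as quoted above, 30-day month):

```shell
# Reproduce the Fargate compute line items: per-second rate x seconds/month
awk 'BEGIN {
  secs = 2592000                         # 30-day month in seconds
  vcpu = 0.25 * 0.000011244 * secs       # 0.25 vCPU
  mem  = 0.5  * 0.000001235 * secs       # 0.5 GB memory
  printf "vCPU:    $%.2f/mo\n", vcpu
  printf "Memory:  $%.2f/mo\n", mem
  printf "Compute: $%.2f/mo\n", vcpu + mem
}'
```

Add the ALB (~$17–19) and CloudWatch (~$0.50–1) lines to land at the ~$27–29/month total.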
Compare That To...
| Platform | Same Workload (0.25 vCPU, 0.5 GB) | Notes |
|---|---|---|
| ECS Express Mode | ~$27–29/mo | Full AWS ownership |
| Railway | ~$16–21/mo | Usage-based, no ALB cost |
| Render | ~$7–15/mo | Hobby tier, limited |
| Vercel | ~$25–35/mo | Pro plan + usage |
At this scale, PaaS platforms are competitive. But as we showed in the last post, the economics flip dramatically at growth stage. Express Mode's shared ALB feature (splitting that $16/mo across 5+ services) is what makes it cost-competitive even at small scale.
Common Issues and Troubleshooting
❌ "Service is in DRAINING state"
Your container is failing health checks. Check:
- CloudWatch Logs → Look for crash errors in `/ecs/my-web-app`
- Health check path → Ensure your app responds on the configured health check endpoint
- Port mapping → The port in Express Mode must match what your app actually listens on
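A couple of CLI commands that are useful here (the target group ARN below is a placeholder; find yours on the service's Networking tab):

```shell
# Stream recent application logs to see why health checks fail (AWS CLI v2)
aws logs tail /ecs/my-web-app --since 15m --follow
# Check what the ALB sees for each task; the ARN is a placeholder
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-web-app/abc123
```

`describe-target-health` tells you whether targets are failing the health check itself (unhealthy) or never registering at all, which points to a port or security-group problem instead.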
❌ "Unable to pull image"
Usually an ECR permissions issue:
# Verify the image exists
aws ecr describe-images \
--repository-name my-web-app \
--region us-east-1
# Verify the task execution role has ECR pull permissions
aws iam get-role --role-name ecsTaskExecutionRole
❌ "Service stuck at 0 running tasks"
Check:
- Subnet configuration — Fargate tasks need subnets with internet access (public subnet with IGW, or private subnet with NAT Gateway)
- Security group — Ensure the task security group allows outbound traffic to pull the container image
- Resource limits — You might have hit Fargate limits in your region. Request a limit increase.
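To check where you stand against those limits (quota names can vary slightly by region and CLI version):

```shell
# List your Fargate service quotas for the current region;
# "Fargate On-Demand vCPU resource count" is the one new accounts hit most
aws service-quotas list-service-quotas \
  --service-code fargate \
  --query 'Quotas[].{name:QuotaName,value:Value}' \
  --output table
```

If you're at the ceiling, request an increase from the same Service Quotas console; small increases are usually approved automatically.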
Advanced: Migrating from Express Mode to Standard ECS
If you outgrow Express Mode's defaults, you can "eject" to full control:
- Note all the resources Express Mode created (cluster, service, task def, ALB)
- Modify them individually through the standard ECS console or CLI
- Express Mode doesn't prevent you from customizing — it just provides defaults
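A quick way to enumerate what Express Mode created, assuming the cluster and service names from this guide:

```shell
# Inspect the resources behind the service before managing them directly
aws ecs describe-services --cluster default --services my-web-app \
  --query 'services[0].{taskDef:taskDefinition,loadBalancers:loadBalancers,network:networkConfiguration}'
# Map the shared ALB(s) to their DNS names
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].{name:LoadBalancerName,dns:DNSName}' --output table
```

From there, every change you'd make in standard ECS (new task definition revisions, listener rules, scaling policies) applies to these same resources.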
Common reasons to eject:
- Need multi-AZ load balancer configuration
- Need VPC peering or PrivateLink
- Need EFS volumes or GPU instances
- Need blue/green deployments with CodeDeploy
- Need custom placement strategies (e.g., spread across AZs)
What's Next?
You've successfully deployed a containerized app with ECS Express Mode. Here's what to tackle next:
- Add a custom domain and HTTPS — Use Route 53 + ACM to add SSL
- Set up CI/CD — We'll cover this in detail in How to Set Up a CI/CD Pipeline with GitHub Actions and AWS ECS
- Add a database — Connect to RDS or DynamoDB
- Configure auto-scaling — Fine-tune your scaling policies based on actual traffic
TL;DR
| What | Details |
|---|---|
| What it is | Simplified ECS deployment that auto-provisions ALB, security groups, scaling, and monitoring |
| Who it's for | Developers who want App Runner simplicity with ECS power |
| Setup time | 5–10 minutes |
| Cost | Standard Fargate + ALB pricing (~$27/mo for smallest config) |
| Key advantage | Shared ALBs (save ~$16/mo per additional service) |
| Lock-in risk | Zero — creates standard AWS resources you fully own |
| Production-ready? | Yes, with CI/CD and proper monitoring |
Even simpler? If you want to skip the ECR push, CLI commands, and console clicking entirely, TurboDeploy handles all of this in a single `git push`. Same ECS Fargate infrastructure, same AWS pricing, zero AWS console required.