DEV Community

Haowen Huang

Deploying OpenClaw on AWS EC2 - A Developer's Perspective

What is OpenClaw?

OpenClaw is a personal AI assistant that you can deploy on your own infrastructure. It can respond to you through channels you already use — WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, WeChat, Lark, and 20+ other platforms. It can have voice conversations on macOS/iOS/Android, and render a real-time Canvas you can control. The Gateway is just the control plane — the product itself is the assistant.

If you want a self-hosted, fast, always-on single-user personal assistant, OpenClaw is it.

Why Run Your Own AI Assistant on AWS?

Most people interact with AI through hosted services. That works — until you care about:

  • Privacy: Your conversations, code, and data stay in your AWS account
  • Cost control: Pay only for Bedrock API usage, no per-seat SaaS fees
  • Customization: Full control over models, plugins, and integrations
  • Security: No open ports, no public endpoints, SSM-only access

By deploying OpenClaw on AWS with Amazon Bedrock, you can use the latest Claude models (Opus 4.6, Sonnet 4.6) as your personal AI assistant — not just for coding, but for automation, messaging, device control, and more.

I decided to try it out myself. Here's how it went.

Architecture: What You're Deploying

🚀 Quick Start: If you're an experienced developer, you can use the deployment template directly:

curl -O https://raw.githubusercontent.com/hanyun2019/openclaw-on-aws/main/openclaw-deployment.yaml

aws cloudformation create-stack \
  --stack-name openclaw \
  --template-body file://openclaw-deployment.yaml \
  --capabilities CAPABILITY_IAM \
  --region us-east-1

📦 Template URL: github.com/hanyun2019/openclaw-on-aws

⚠️ Non-US Region Users: Additional configuration is required after deployment. See the Non-US Region Deployment Guide below.

Before we dive in, here's what the CloudFormation template sets up:

OpenClaw on AWS Architecture

User Access: Connect via SSM Session Manager to establish a secure tunnel that forwards localhost:18789 on your machine to the EC2 instance; no inbound ports are required.

Key design decisions:

  • EC2 in the public subnet — for initial setup (package downloads via Internet Gateway)
  • VPC Endpoints in the private subnet — ensures Bedrock API calls never leave the AWS network
  • No inbound ports open — all access through SSM port forwarding
  • Graviton (ARM) instances — better price-performance ratio

Prerequisites

Before you start, make sure you have:

  1. An AWS account with billing enabled
  2. Bedrock model access: Go to the Bedrock console → Model access → Request access to Claude models in your target region
  3. IAM permissions: Ability to create CloudFormation, EC2, IAM, and VPC resources
  4. Local tools installed: the AWS CLI and the SSM Session Manager plugin (the plugin is required for the `aws ssm start-session` port forwarding used below)
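Before starting, a quick preflight sketch can confirm your local tooling is in place. The tool list here is an assumption based on the commands this guide uses (the Session Manager plugin is what `aws ssm start-session` needs):

```shell
# Preflight check (a sketch): report whether each local tool is installed.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

check_tools aws session-manager-plugin curl
```

Any `missing:` line needs fixing before Step 5, or the SSM tunnel won't start.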

US Region Deployment Guide

If you're deploying in us-east-1 or us-west-2, it's genuinely a one-click experience.

Step 1: Download the Template

curl -O https://raw.githubusercontent.com/hanyun2019/openclaw-on-aws/main/openclaw-deployment.yaml

Step 2: Deploy the Stack

aws cloudformation create-stack \
  --stack-name openclaw \
  --template-body file://openclaw-deployment.yaml \
  --capabilities CAPABILITY_IAM \
  --region us-east-1

Optional parameters:

| Parameter | Default | Description |
|---|---|---|
| InstanceType | t4g.large | Graviton instance type (2 vCPU, 8GB RAM) |
| KeyPairName | (none) | EC2 key pair for SSH access (optional) |
| CreateVPCEndpoints | true | Create VPC endpoints for private Bedrock access |
| AllowedSSHCIDR | (empty) | CIDR block for SSH access (optional) |

For example, to use a larger instance:

aws cloudformation create-stack \
  --stack-name openclaw \
  --template-body file://openclaw-deployment.yaml \
  --capabilities CAPABILITY_IAM \
  --region us-east-1 \
  --parameters ParameterKey=InstanceType,ParameterValue=t4g.xlarge

Step 3: Wait (~10-15 minutes)

aws cloudformation wait stack-create-complete \
  --stack-name openclaw --region us-east-1

Go grab a coffee. ☕

Step 4: Get Your Access Info

aws cloudformation describe-stacks \
  --stack-name openclaw \
  --query 'Stacks[0].Outputs' \
  --region us-east-1

This gives you the instance ID and access URL.

Step 5: Connect via SSM Port Forwarding

# Get Instance ID
INSTANCE_ID=$(aws cloudformation describe-stacks \
  --stack-name openclaw \
  --query 'Stacks[0].Outputs[?OutputKey==`InstanceId`].OutputValue' \
  --output text --region us-east-1)

# Start the tunnel
aws ssm start-session \
  --target $INSTANCE_ID \
  --region us-east-1 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["18789"],"localPortNumber":["18789"]}'

Step 6: Open OpenClaw

Navigate to the Step3AccessURL from the stack outputs:

http://localhost:18789/?token=<your-token>

That's it. You're in. 🎉

OpenClaw Admin Page

OpenClaw Chat Interface


Non-US Region Deployment Guide

If you're deploying outside the US (e.g., Sydney ap-southeast-2), you need to modify the model IDs.

Note: If you're deploying in us-east-1 or us-west-2, you can skip this section. The template's default us. prefixed model IDs work out of the box in US regions.

The Issue

The template defaults to us.anthropic.claude-opus-4-6-v1, which is the US cross-region inference profile. In other regions, you need to use the corresponding regional prefix.

Solution Steps

1. Connect to the Instance via SSM

aws ssm start-session --target <instance-id> --region ap-southeast-2

2. Query Available Inference Profiles (Optional)

aws bedrock list-inference-profiles --region ap-southeast-2 \
  --query 'inferenceProfileSummaries[*].inferenceProfileId' \
  --output table

3. Update OpenClaw Configuration

nano /home/ubuntu/.openclaw/openclaw.json

Replace the us. prefix in model IDs with your region's prefix:

{
  "models": {
    "providers": {
      "amazon-bedrock": {
        "models": [
          {
            "id": "au.anthropic.claude-opus-4-6-v1",
            "name": "Claude Opus 4.6"
          },
          {
            "id": "au.anthropic.claude-sonnet-4-6",
            "name": "Claude Sonnet 4.6"
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "amazon-bedrock/au.anthropic.claude-opus-4-6-v1"
      }
    }
  }
}

nano save: Ctrl+O → Enter to confirm → Ctrl+X to exit
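If you prefer a non-interactive edit over nano, here's a small sketch. The config path comes from this guide; the `au.` target prefix is just the Sydney example, so adjust both for your setup:

```shell
# Swap the region prefix in every Bedrock model ID, keeping a backup.
swap_prefix() {
  # $1 = config file, $2 = old prefix, $3 = new prefix
  [ -f "$1" ] || { echo "no config found at $1"; return 0; }
  cp "$1" "$1.bak"
  sed -i "s/$2\\.anthropic/$3.anthropic/g" "$1"
}

swap_prefix /home/ubuntu/.openclaw/openclaw.json us au
```

This rewrites both the `models` list and the `agents.defaults.model.primary` entry in one pass, since all of them embed the same prefix.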

4. Restart the Gateway

systemctl --user restart openclaw-gateway.service

Done! 🎉


Understanding Inference Profile Prefixes

Here's a quick reference:

| Prefix | Scope | Example Regions |
|---|---|---|
| us. | US cross-region | us-east-1, us-east-2, us-west-2, ca-central-1 |
| eu. | Europe cross-region | eu-central-1, eu-west-1, eu-west-3, eu-north-1 |
| apac. | Asia-Pacific cross-region | ap-northeast-1, ap-southeast-1, ap-southeast-2, ap-south-1 |
| au. | Australia | ap-southeast-2, ap-southeast-4 |
| jp. | Japan | ap-northeast-1, ap-northeast-3 |
| global. | Global cross-region (routes to the optimal commercial region) | All commercial regions |

Rule of thumb: Use the most specific prefix for your deployment region.

  • Deploying in us-east-1 or us-west-2? The default us.anthropic.claude-opus-4-6-v1 just works. No changes needed.
  • Deploying in ap-southeast-2 (Sydney)? Use au.anthropic.claude-opus-4-6-v1.
  • Deploying in ap-northeast-1 (Tokyo)? Use jp.anthropic.claude-sonnet-4-6 or apac. prefix.
  • Deploying in eu-west-1 (Ireland)? Use eu.anthropic.claude-opus-4-6-v1.
  • Not sure? Query aws bedrock list-inference-profiles in your region to see what's available.

Note: Model availability varies by region. For example, Claude Opus 4.6 in Japan may only be available with the global. prefix, while Sonnet 4.6 has a jp. prefix. Use aws bedrock list-inference-profiles to check actual available profiles in your region.
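The rule of thumb above can be sketched as a tiny helper. The mapping mirrors the table in this post and is only a default; always confirm against `aws bedrock list-inference-profiles` in your region:

```shell
# Map a deployment region to a reasonable inference-profile prefix,
# preferring the most specific prefix available for that region.
prefix_for_region() {
  case "$1" in
    us-*|ca-central-1)             echo "us." ;;
    eu-*)                          echo "eu." ;;
    ap-southeast-2|ap-southeast-4) echo "au." ;;
    ap-*)                          echo "apac." ;;
    *)                             echo "global." ;;
  esac
}

prefix_for_region ap-southeast-2   # prints: au.
```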


Cost Estimate

| Resource | Spec | Est. Monthly Cost | Calculation |
|---|---|---|---|
| EC2 (t4g.large) | 2 vCPU, 8GB RAM | ~$49 | $0.0672/hr × 730 hrs |
| EBS (gp3) | 50GB | ~$4 | $0.08/GB-mo × 50GB |
| VPC Endpoints (×4) | Interface type | ~$29 | $0.01/hr × 730 hrs × 4 endpoints |
| Bedrock API | Usage-based | Variable | |
| Total (excl. API) | | ~$82 | |

Pricing is based on published us-east-1 rates (2024). Actual prices vary by region and over time; refer to the official AWS pricing pages for current rates.

Cost-saving tip: Set CreateVPCEndpoints=false to save ~$29/month. In this case, Bedrock API calls will route through the Internet Gateway to Bedrock's public endpoints. Traffic is still encrypted via TLS, but traverses the public internet rather than staying within the AWS backbone. For scenarios with strict security or compliance requirements, keep VPC endpoints enabled to ensure traffic never leaves the AWS network.


FAQ

How do I access EC2 via SSM and use OpenClaw TUI?

Option 1: Via AWS Console

  1. Open the EC2 Console
  2. Select your OpenClaw instance
  3. Click Connect → Select Session Manager → Click Connect

Connect to EC2 via AWS Console SSM

Option 2: Via AWS CLI

aws ssm start-session --target <instance-id> --region <your-region>

After connecting, switch to the ubuntu user and launch the TUI:

sudo su - ubuntu
openclaw tui

How do I configure OpenClaw and switch Bedrock models?

Use the openclaw config command for configuration management, including switching models on Amazon Bedrock:

openclaw config

OpenClaw Config - Switch Bedrock Models

Stack deployment timed out (WaitCondition failure)?

Connect to the instance via SSM and check the setup log:

cat /var/log/openclaw-setup.log

Common causes:

  • Network issues during package downloads
  • Bedrock model access not yet approved (it can take a few minutes after requesting)
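A quick triage sketch for that log. The grep patterns are assumptions about typical failure markers, not strings OpenClaw is guaranteed to emit:

```shell
# Scan the setup log for common failure markers and show the last few hits.
scan_setup_log() {
  local log="${1:-/var/log/openclaw-setup.log}"
  [ -f "$log" ] || { echo "log not found: $log"; return 0; }
  grep -inE 'error|denied|timeout' "$log" | tail -20
}

scan_setup_log
```

An `AccessDenied` hit usually points at the Bedrock model-access approval; download errors point at networking.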

How do I retrieve my Gateway token?

# Replace <stack-name> with your stack name
aws ssm get-parameter \
  --name "/openclaw/<stack-name>/gateway-token" \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text

How do I switch models?

Edit agents.defaults.model.primary in /home/ubuntu/.openclaw/openclaw.json, then restart:

systemctl --user restart openclaw-gateway.service

How do I update OpenClaw?

SSH or SSM into the instance and run:

npm update -g openclaw
systemctl --user restart openclaw-gateway.service

How do I tear it all down?

aws cloudformation delete-stack --stack-name openclaw --region us-east-1

This removes everything — EC2 instance, VPC, IAM roles, all of it. Clean.


Wrapping Up

Deploying OpenClaw on AWS is straightforward — with one note about region selection:

  • US regions (us-east-1, us-west-2): Truly one-click. The template works out of the box.
  • Other regions: You need to replace the model IDs with the corresponding regional inference profiles.

Either way, the CloudFormation template handles 90% of the work. What you get at the end is a private, secure AI assistant running on your own infrastructure — no data leaving your AWS account, no third-party services in the loop, and the full power of Claude at your fingertips.



Series Preview: Four Ways to Deploy OpenClaw on AWS

This post is the first in the OpenClaw on AWS series. There are four ways to deploy OpenClaw on AWS, each suited for different scenarios:

| Deployment Method | Use Case | Status |
|---|---|---|
| EC2 Deployment | Full control, custom configuration, persistent runtime | ✅ This post |
| Lightsail Deployment | Simple and fast, fixed monthly cost, beginner-friendly | 📝 Coming soon |
| AgentCore Deployment | Managed service, no infrastructure management | 📝 Coming soon |
| EKS Deployment | Containerized, scalable, enterprise-grade | 📝 Coming soon |

Stay tuned for upcoming posts!


Industry Trends: The Rise of AI Agents

Beyond OpenClaw, the AI Agent space is evolving rapidly. More developers are exploring how to build autonomous, controllable AI agent systems.

One project worth watching is Hermes Agent — an open-source AI Agent framework developed by Nous Research that has been gaining increasing attention in the developer community. It offers a different approach to building and deploying AI agents.

I'll share hands-on experience with Hermes Agent in upcoming blog posts — stay tuned.

Happy Deploying! 🦞

