<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sachin Shah</title>
    <description>The latest articles on DEV Community by Sachin Shah (@imsachinshah).</description>
    <link>https://dev.to/imsachinshah</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F456185%2F2cd4fa31-c02a-4b6b-a490-0240a39877d7.png</url>
      <title>DEV Community: Sachin Shah</title>
      <link>https://dev.to/imsachinshah</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/imsachinshah"/>
    <language>en</language>
    <item>
      <title>Integrating OpenClaw with Google Cloud: A Solo Developer's Guide</title>
      <dc:creator>Sachin Shah</dc:creator>
      <pubDate>Fri, 17 Apr 2026 03:39:00 +0000</pubDate>
      <link>https://dev.to/imsachinshah/integrating-openclaw-with-google-cloud-a-solo-developers-guide-477c</link>
      <guid>https://dev.to/imsachinshah/integrating-openclaw-with-google-cloud-a-solo-developers-guide-477c</guid>
      <description>&lt;p&gt;Running an AI personal assistant from your laptop sounds great until the lid closes, the Wi-Fi drops, or your machine restarts. If you want a gateway that stays online 24/7, reachable from any device, you need to move it off your local hardware and onto something persistent.&lt;/p&gt;

&lt;p&gt;This guide walks you through deploying OpenClaw on a GCP Compute Engine VM using Docker, from zero to a running gateway you can access over an SSH tunnel.&lt;/p&gt;

&lt;p&gt;The target reader is a solo developer who wants a private, always-on AI assistant without managing Kubernetes or paying for a managed platform. The total infrastructure cost is roughly &lt;strong&gt;$12-25/month&lt;/strong&gt; for the VM, plus whatever you spend on model API calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why It Matters
&lt;/h2&gt;

&lt;p&gt;A personal AI assistant is only useful if it is always reachable. Running the OpenClaw gateway on your laptop means it goes down when you sleep your machine, switch networks, or reboot for updates. You also lose access from your phone, tablet, or any other device that is not your development machine.&lt;/p&gt;

&lt;p&gt;The economics favor self-hosting. Commercial AI agent platforms charge per resolution or per seat. OpenClaw separates infrastructure cost from model cost: you pay for compute (the VM) and API calls (the model provider), with zero per-message platform fees.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93jsgzymawx1j3mlryx4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93jsgzymawx1j3mlryx4.png" alt="estimated annual cost by hosting approach" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a solo developer handling personal tasks, model costs at $0.01-$0.05 per message put annual spending well under $1,000, often under $300. The VM adds $12-25/month depending on the machine type you choose. That is the full cost of an always-on, private AI assistant.&lt;/p&gt;
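
&lt;p&gt;To sanity-check that estimate with your own numbers (the 50 messages/day and $0.03/message figures below are hypothetical placeholders, not from any source), a one-line calculation:&lt;/p&gt;

```shell
# Hypothetical usage: 50 messages/day at $0.03 each, plus $12/month for an e2-small VM
awk 'BEGIN { printf "estimated annual cost: $%.2f\n", 50 * 0.03 * 365 + 12 * 12 }'
# prints: estimated annual cost: $691.50
```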

&lt;p&gt;The Compute Engine and Docker pattern solves the core problem: your assistant runs on a persistent VM in &lt;a href="https://docs.openclaw.ai/platforms/gcp" rel="noopener noreferrer"&gt;Google's data center&lt;/a&gt;, survives reboots via Docker's restart policy, and keeps state on host-mounted volumes that outlive container rebuilds.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the System Does
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://openclawlab.com/en/docs/start/openclaw/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; is a self-hosted AI gateway. It sits between your messaging channels (WhatsApp, Telegram, Discord, iMessage, and others) and model providers (Anthropic, OpenAI, OpenRouter, Groq). You configure it once, and it routes conversations from any connected channel through a single agent backed by the model of your choice.&lt;/p&gt;

&lt;p&gt;For the personal assistant use case, the setup looks like this: one dedicated phone number or bot account acts as your always-on agent. You message it from your real phone, and the OpenClaw gateway processes the message, calls the model API, and sends back a response. The gateway also exposes a Control UI (a web dashboard) where you manage settings, approve devices, and monitor conversations.&lt;/p&gt;

&lt;p&gt;The gateway runs as a single Node.js process inside a Docker container. It reads configuration from &lt;code&gt;~/.openclaw/openclaw.json&lt;/code&gt;, persists workspace data to &lt;code&gt;~/.openclaw/workspace&lt;/code&gt;, and binds a WebSocket server on port 18789. When deployed on a VM, this process stays up indefinitely thanks to Docker Compose's &lt;code&gt;restart: unless-stopped&lt;/code&gt; policy.&lt;/p&gt;

&lt;p&gt;Beyond the core gateway, OpenClaw has an extensible plugin system. ClawdHub offers 200+ skills covering common workflow tasks. Persistent memory via Mem0, WhatsApp/Telegram channel pairing, and Gmail OAuth are all available as configuration layers on top of the base gateway. This guide focuses strictly on getting the gateway running on GCP; those extensions are covered briefly in the "Where to Go Next" section.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Architecture Overview&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The deployment has five layers, from your laptop to the model providers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your laptop runs a browser pointed at &lt;code&gt;localhost:18789&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gcloud compute ssh&lt;/code&gt; with port forwarding creates an encrypted tunnel from your laptop to the VM&lt;/li&gt;
&lt;li&gt;GCP Compute Engine VM (Debian 12, e2-small) hosts Docker&lt;/li&gt;
&lt;li&gt;Docker Compose runs the openclaw-gateway container, bound to &lt;code&gt;127.0.0.1:18789&lt;/code&gt; on the VM&lt;/li&gt;
&lt;li&gt;The gateway makes outbound HTTPS calls to model provider APIs (OpenRouter, Anthropic, Groq, etc.) and optionally to ClawdHub for plugin management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The gateway port is bound to loopback only on the VM. No firewall rules need to be opened for inbound traffic beyond SSH (port 22). All access to the Control UI goes through the SSH tunnel.&lt;/p&gt;
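
&lt;p&gt;Once the stack is up (Step 10), you can verify the loopback-only binding from the VM itself; &lt;code&gt;ss&lt;/code&gt; ships with Debian 12:&lt;/p&gt;

```shell
# On the VM: list listening TCP sockets and filter for the gateway port.
# A Local Address of 127.0.0.1:18789 means loopback-only;
# 0.0.0.0:18789 would mean the port is exposed on all interfaces.
ss -tln | grep 18789
```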

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmo52ltdxre3v26bh4ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmo52ltdxre3v26bh4ty.png" alt="architechture for openclaw" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Runtime Request Flow&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The key security property of this architecture: the gateway never listens on a public interface. The &lt;code&gt;127.0.0.1:&lt;/code&gt; prefix in the Docker Compose ports directive ensures the gateway is only reachable from the VM's loopback. Your SSH tunnel bridges the gap between your laptop and that loopback port. Google recommends this port-forwarding-over-SSH pattern as a primary method for securing services on VMs.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Setup: Prerequisites&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before starting, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GCP account (free tier is eligible for e2-micro, but you will want e2-small or e2-medium for this deployment)&lt;/li&gt;
&lt;li&gt;gcloud CLI installed on your local machine (&lt;a href="https://cloud.google.com/sdk/docs/install" rel="noopener noreferrer"&gt;install guide&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;SSH access from your laptop to GCP VMs (handled automatically by &lt;code&gt;gcloud compute ssh&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Basic familiarity with Docker and Docker Compose&lt;/li&gt;
&lt;li&gt;An API key for at least one model provider (OpenRouter, Anthropic, Groq, etc.)&lt;/li&gt;
&lt;li&gt;Roughly 20-30 minutes end to end&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Choosing a VM Machine Type&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This decision matters more than you might expect. The Docker image build (&lt;code&gt;pnpm install --frozen-lockfile&lt;/code&gt;) is memory-intensive, and undersized VMs will OOM-kill the build process with exit code 137.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Specs&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;e2-medium&lt;/td&gt;
&lt;td&gt;2 vCPU, 4GB RAM&lt;/td&gt;
&lt;td&gt;~$25/mo&lt;/td&gt;
&lt;td&gt;Most reliable for local Docker builds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;e2-small&lt;/td&gt;
&lt;td&gt;2 vCPU, 2GB RAM&lt;/td&gt;
&lt;td&gt;~$12/mo&lt;/td&gt;
&lt;td&gt;Minimum recommended for Docker build&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;e2-micro&lt;/td&gt;
&lt;td&gt;2 vCPU (shared), 1GB RAM&lt;/td&gt;
&lt;td&gt;Free tier eligible&lt;/td&gt;
&lt;td&gt;Often fails with Docker build OOM (exit 137)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Source: &lt;a href="https://openclaws.io/docs/install/gcp" rel="noopener noreferrer"&gt;https://openclaws.io/docs/install/gcp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The official OpenClaw Dockerfile now includes &lt;code&gt;NODE_OPTIONS=--max-old-space-size=2048&lt;/code&gt; to cap the Node.js heap during dependency installation, which helps on e2-small instances. But e2-micro remains unreliable for the initial build. If you are budget-sensitive, one strategy is to create an e2-medium for the initial build, then downgrade to e2-small for steady-state operation.&lt;/p&gt;
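
&lt;p&gt;The build-big-then-downgrade strategy can be scripted with gcloud; a machine type can only be changed while the instance is stopped. A sketch, assuming the VM name and zone used later in this guide:&lt;/p&gt;

```shell
# After the initial Docker build completes on e2-medium:
gcloud compute instances stop openclaw-gateway --zone=us-central1-a
gcloud compute instances set-machine-type openclaw-gateway \
  --zone=us-central1-a --machine-type=e2-small
gcloud compute instances start openclaw-gateway --zone=us-central1-a
```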

&lt;h2&gt;
  
  
  &lt;strong&gt;Step-by-Step Implementation&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Install and initialize the gcloud CLI&lt;/strong&gt;
&lt;/h3&gt;


&lt;p&gt;Install the gcloud CLI on your local machine and authenticate so you can script all GCP operations from your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud init

gcloud auth login

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets your default project, region, and credentials. All subsequent gcloud commands in this guide use this authenticated session.&lt;/p&gt;
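
&lt;p&gt;Optionally, pin a default region and zone now. The commands in this guide pass &lt;code&gt;--zone&lt;/code&gt; explicitly, so this is a convenience, not a requirement:&lt;/p&gt;

```shell
# Set defaults that later gcloud compute commands fall back to
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
gcloud config list   # review active project, region, zone, and account
```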

&lt;h3&gt;
  
  
  Step 2: Create and configure your GCP project
&lt;/h3&gt;

&lt;p&gt;Create a dedicated project to keep billing, IAM, and logs isolated for your OpenClaw gateway. Then enable the Compute Engine API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud projects create my-openclaw-project &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"OpenClaw Gateway"&lt;/span&gt;

gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project my-openclaw-project

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable billing for the project at &lt;code&gt;https://console.cloud.google.com/billing&lt;/code&gt; (required before you can create VMs). Then enable the Compute Engine API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud services &lt;span class="nb"&gt;enable&lt;/span&gt; compute.googleapis.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Create the Compute Engine VM
&lt;/h3&gt;

&lt;p&gt;Provision a Debian 12 VM with enough resources to build and run the OpenClaw Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud compute instances create openclaw-gateway \
&lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a \
&lt;span class="nt"&gt;--machine-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;e2-small \
&lt;span class="nt"&gt;--boot-disk-size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;20GB \
&lt;span class="nt"&gt;--image-family&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;debian-12 \
&lt;span class="nt"&gt;--image-project&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;debian-cloud

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;us-central1-a&lt;/code&gt; zone sits in &lt;code&gt;us-central1&lt;/code&gt;, one of the most cost-effective regions. The 20GB boot disk is enough for the OS, the OpenClaw repo, the built Docker image, and persistent state directories.&lt;/p&gt;
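
&lt;p&gt;Before moving on, you can confirm the VM reached the running state:&lt;/p&gt;

```shell
# Print the instance status; RUNNING means it is ready for SSH
gcloud compute instances describe openclaw-gateway \
  --zone=us-central1-a --format='value(status)'
```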

&lt;h3&gt;
  
  
  Step 4: SSH into the VM
&lt;/h3&gt;

&lt;p&gt;Once the VM is running, SSH in. If the connection is refused immediately after creation, wait 1-2 minutes for SSH key propagation and retry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud compute ssh openclaw-gateway &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Install Docker and basic tooling
&lt;/h3&gt;

&lt;p&gt;Inside the VM, install git, curl, and Docker. Add your user to the docker group so you can run Docker commands without sudo, then log out and back in for the group change to take effect.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; git curl ca-certificates

curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://get.docker.com | &lt;span class="nb"&gt;sudo &lt;/span&gt;sh

&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker &lt;span class="nv"&gt;$USER&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log out and SSH back in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;exit&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From your local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud compute ssh openclaw-gateway &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify Docker is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
docker &lt;span class="nt"&gt;--version&lt;/span&gt;

docker compose version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Clone the OpenClaw repository
&lt;/h3&gt;

&lt;p&gt;Pull the OpenClaw source so you can build a custom Docker image with all required binaries baked in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git clone https://github.com/openclaw/openclaw.git

&lt;span class="nb"&gt;cd &lt;/span&gt;openclaw

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 7: Create persistent host directories
&lt;/h3&gt;

&lt;p&gt;Docker containers are ephemeral. All long-lived state must live on the host. These directories are mounted into the container via Docker Compose and survive container rebuilds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/.openclaw

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/.openclaw/workspace

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 8: Configure the &lt;code&gt;.env&lt;/code&gt; file
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;.env&lt;/code&gt; file in the repository root. This defines the image name, gateway binding, port, and where to mount configuration and workspace directories.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;OPENCLAW_IMAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;openclaw:latest

&lt;span class="nv"&gt;OPENCLAW_GATEWAY_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;change-me-now

&lt;span class="nv"&gt;OPENCLAW_GATEWAY_BIND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;lan

&lt;span class="nv"&gt;OPENCLAW_GATEWAY_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;18789

&lt;span class="nv"&gt;OPENCLAW_CONFIG_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/&lt;span class="nv"&gt;$USER&lt;/span&gt;/.openclaw

&lt;span class="nv"&gt;OPENCLAW_WORKSPACE_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/&lt;span class="nv"&gt;$USER&lt;/span&gt;/.openclaw/workspace

&lt;span class="nv"&gt;GOG_KEYRING_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;change-me-now

&lt;span class="nv"&gt;XDG_CONFIG_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/node/.openclaw

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate strong secrets to replace the change-me-now placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
openssl rand &lt;span class="nt"&gt;-hex&lt;/span&gt; 32

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run this command twice: once for &lt;code&gt;OPENCLAW_GATEWAY_TOKEN&lt;/code&gt; and once for &lt;code&gt;GOG_KEYRING_PASSWORD&lt;/code&gt;. Paste the generated values into your &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;
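
&lt;p&gt;If you prefer to script the substitution instead of pasting by hand, a sketch (run from the repository root after the &lt;code&gt;.env&lt;/code&gt; from this step exists; uses GNU &lt;code&gt;sed -i&lt;/code&gt; as found on Debian):&lt;/p&gt;

```shell
# Generate both secrets and write them into .env in place
token="$(openssl rand -hex 32)"
keyring="$(openssl rand -hex 32)"
sed -i "s|^OPENCLAW_GATEWAY_TOKEN=.*|OPENCLAW_GATEWAY_TOKEN=${token}|" .env
sed -i "s|^GOG_KEYRING_PASSWORD=.*|GOG_KEYRING_PASSWORD=${keyring}|" .env
# Confirm no placeholders remain (grep exits non-zero if any are left)
! grep -q 'change-me-now' .env
```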

&lt;p&gt;A note on bind semantics: &lt;code&gt;OPENCLAW_GATEWAY_BIND=lan&lt;/code&gt; tells the gateway inside the container to listen on &lt;code&gt;0.0.0.0&lt;/code&gt; (all interfaces). But the Docker Compose ports directive pins the host-side binding to &lt;code&gt;127.0.0.1&lt;/code&gt;, keeping the gateway loopback-only on the VM. The &lt;code&gt;lan&lt;/code&gt; bind is necessary so that Docker's network bridge can route traffic from the host port into the container.&lt;/p&gt;

&lt;p&gt;Lock down the file permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;chmod &lt;/span&gt;600 .env

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 9: Create the Docker Compose configuration
&lt;/h3&gt;

&lt;p&gt;Create &lt;code&gt;docker-compose.yml&lt;/code&gt; in the repository root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
services:

openclaw-gateway:

image: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OPENCLAW_IMAGE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

build: &lt;span class="nb"&gt;.&lt;/span&gt;

restart: unless-stopped

env_file:

- .env

environment:

- &lt;span class="nv"&gt;HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/node

- &lt;span class="nv"&gt;NODE_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production

- &lt;span class="nv"&gt;TERM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;xterm-256color

- &lt;span class="nv"&gt;OPENCLAW_GATEWAY_BIND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OPENCLAW_GATEWAY_BIND&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

- &lt;span class="nv"&gt;OPENCLAW_GATEWAY_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OPENCLAW_GATEWAY_PORT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

- &lt;span class="nv"&gt;OPENCLAW_GATEWAY_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OPENCLAW_GATEWAY_TOKEN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

- &lt;span class="nv"&gt;GOG_KEYRING_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GOG_KEYRING_PASSWORD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

- &lt;span class="nv"&gt;XDG_CONFIG_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;XDG_CONFIG_HOME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

- &lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/linuxbrew/.linuxbrew/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

volumes:

- &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OPENCLAW_CONFIG_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/home/node/.openclaw

- &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OPENCLAW_WORKSPACE_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:/home/node/.openclaw/workspace

ports:

&lt;span class="c"&gt;# Loopback-only (recommended for VPS)&lt;/span&gt;

- &lt;span class="s2"&gt;"127.0.0.1:&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OPENCLAW_GATEWAY_PORT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:18789"&lt;/span&gt;

&lt;span class="c"&gt;# Optional: Canvas host for iOS/Android nodes&lt;/span&gt;

&lt;span class="c"&gt;# - "18790:18790"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three things to understand about this configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;"127.0.0.1:${OPENCLAW_GATEWAY_PORT}:18789"&lt;/code&gt;: The &lt;code&gt;127.0.0.1:&lt;/code&gt; prefix is the primary security control. It means the port is only accessible on the VM's loopback interface, not from the public internet.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;restart: unless-stopped&lt;/code&gt;: The gateway automatically restarts after crashes or VM reboots. It only stays down if you explicitly &lt;code&gt;docker compose stop&lt;/code&gt; it.&lt;/li&gt;
&lt;li&gt;  Volume mounts: &lt;code&gt;${OPENCLAW_CONFIG_DIR}&lt;/code&gt; and &lt;code&gt;${OPENCLAW_WORKSPACE_DIR}&lt;/code&gt; map host directories into the container. Configuration, auth profiles, and workspace data persist across container rebuilds.&lt;/li&gt;
&lt;/ul&gt;
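
&lt;p&gt;Before building, it is worth letting Compose validate the file and show the fully interpolated result:&lt;/p&gt;

```shell
# Exits non-zero on YAML or interpolation errors, silent on success
docker compose config --quiet && echo "compose file OK"
# Or print the resolved configuration with .env values substituted:
docker compose config
```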

&lt;h3&gt;
  
  
  Step 10: Build and launch
&lt;/h3&gt;

&lt;p&gt;Build the Docker image and start the gateway in detached mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Build image&lt;/span&gt;

docker compose build

&lt;span class="c"&gt;# Start gateway&lt;/span&gt;

docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt; openclaw-gateway

&lt;span class="c"&gt;# View logs&lt;/span&gt;

docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt; openclaw-gateway

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The build step takes several minutes on an e2-small instance. If it fails with &lt;code&gt;Killed&lt;/code&gt; or exit code 137, your VM is out of memory. See the &lt;a href="https://technical-content-agent.vercel.app/#faqs" rel="noopener noreferrer"&gt;FAQ section&lt;/a&gt; for the fix.&lt;/p&gt;
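
&lt;p&gt;A common generic mitigation for build-time OOM on small Linux VMs (a general workaround, not necessarily the fix the linked FAQ prescribes) is a temporary swap file:&lt;/p&gt;

```shell
# Create and enable a temporary 2GB swap file so the build can spill to disk
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h    # verify the Swap row now shows 2.0Gi
# After the build succeeds, you can remove it again:
# sudo swapoff /swapfile && sudo rm /swapfile
```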

&lt;p&gt;When the gateway is ready, you will see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="o"&gt;[&lt;/span&gt;gateway] listening on ws://0.0.0.0:18789

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the required binaries are baked into the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Check binary availability&lt;/span&gt;

docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;openclaw-gateway which gog

docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;openclaw-gateway which goplaces

docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;openclaw-gateway which wacli

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
/usr/local/bin/gog

/usr/local/bin/goplaces

/usr/local/bin/wacli

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 11: Set allowed origins for the Control UI
&lt;/h3&gt;

&lt;p&gt;When binding to &lt;code&gt;lan&lt;/code&gt;, you must explicitly whitelist the Control UI origin, or the browser will be blocked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
docker compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; openclaw-cli config &lt;span class="nb"&gt;set &lt;/span&gt;gateway.controlUi.allowedOrigins &lt;span class="s1"&gt;'["http://127.0.0.1:18789"]'&lt;/span&gt; &lt;span class="nt"&gt;--strict-json&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 12: Open the SSH tunnel and access the Control UI
&lt;/h3&gt;

&lt;p&gt;From your local machine (not the VM), open an SSH tunnel that forwards your laptop's port 18789 to the VM's loopback port 18789:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud compute ssh openclaw-gateway &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;-NL&lt;/span&gt; 18789:localhost:18789

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The flags break down as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-N&lt;/code&gt;: Do not execute a remote command (tunnel only)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-L 18789:localhost:18789&lt;/code&gt;: Forward local port 18789 to &lt;code&gt;localhost:18789&lt;/code&gt; from the VM's perspective&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this tunnel active, open &lt;code&gt;http://127.0.0.1:18789&lt;/code&gt; in your browser. You will see the OpenClaw Control UI.&lt;/p&gt;
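
&lt;p&gt;You can also confirm the tunnel from a second terminal on your laptop before opening the browser:&lt;/p&gt;

```shell
# Any HTTP status line proves end-to-end connectivity through the tunnel;
# "Connection refused" means the tunnel or the gateway is not up.
curl -sI -m 5 http://127.0.0.1:18789 | head -n 1
```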

&lt;h3&gt;
  
  
  Step 13: Complete onboarding
&lt;/h3&gt;

&lt;p&gt;In the Control UI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Navigate to Settings and then Token&lt;/li&gt;
&lt;li&gt; Paste the &lt;code&gt;OPENCLAW_GATEWAY_TOKEN&lt;/code&gt; value from your &lt;code&gt;.env&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt; If the UI shows your browser as unauthorized, approve it:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
docker compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; openclaw-cli devices list

docker compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; openclaw-cli devices approve &amp;lt;requestId&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The onboarding wizard will walk you through model provider configuration and any channel setup you want to add. This is interactive and depends on your specific provider keys and channel preferences.&lt;/p&gt;

&lt;p&gt;At this point, you have a running OpenClaw gateway on GCP, accessible from your laptop over an encrypted SSH tunnel.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Worked Well
&lt;/h2&gt;

&lt;p&gt;These observations are derived from the reference documentation and deployment architecture, not from personal production experience.&lt;/p&gt;

&lt;p&gt;Single automated setup script. The &lt;code&gt;docker-setup.sh&lt;/code&gt; script handles token generation, directory creation, and idempotent &lt;code&gt;.env&lt;/code&gt; writing via its &lt;code&gt;upsert_env()&lt;/code&gt; function. If you prefer automation over manual steps, you can run &lt;code&gt;./docker-setup.sh&lt;/code&gt; instead of steps 8-10 above. It generates tokens automatically using &lt;code&gt;openssl rand -hex 32&lt;/code&gt; with a Python fallback:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;[[&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${OPENCLAW_GATEWAY_TOKEN:-}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="p"&gt;]];&lt;/span&gt; &lt;span class="n"&gt;then&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;command&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="n"&gt;openssl&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;/&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;null&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;amp;&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;then&lt;/span&gt;

    &lt;span class="n"&gt;OPENCLAW_GATEWAY_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$(openssl rand -hex 32)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

  &lt;span class="k"&gt;else&lt;/span&gt;

    &lt;span class="n"&gt;OPENCLAW_GATEWAY_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$(python3 - &amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PY&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;

      import secrets

      print(secrets.token_hex(32))

      PY

    )&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Persistent state survives everything. The volume mount pattern (&lt;code&gt;${OPENCLAW_CONFIG_DIR}:/home/node/.openclaw&lt;/code&gt;) means configuration, agent auth profiles, and workspace data live on the host disk, not in the ephemeral container layer. Combined with &lt;code&gt;restart: unless-stopped&lt;/code&gt;, the gateway comes back online with all its state intact after VM reboots, Docker daemon restarts, or container crashes.&lt;/p&gt;
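
&lt;p&gt;You can verify the persistence claim directly: destroy and recreate the container, then check that the host-side state is untouched. A sketch, run on the VM from the repository root:&lt;/p&gt;

```shell
# Remove the container entirely, then recreate it from the same image
docker compose down
docker compose up -d openclaw-gateway
# Config and workspace still exist on the host, outside the container lifecycle
ls ~/.openclaw/openclaw.json ~/.openclaw/workspace
```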

&lt;p&gt;SSH tunnel security is zero-config. No firewall rules need to be opened. No TLS certificates need to be managed. No reverse proxy needs to be configured. The gateway listens only on loopback; all access flows through the encrypted &lt;code&gt;gcloud compute ssh&lt;/code&gt; tunnel. This is the GCP-recommended pattern for accessing services on VMs without exposing them to the public internet.&lt;/p&gt;

&lt;p&gt;Docker Compose is the right tool for this workload. A single-host orchestrator with one service, predictable state, and no need for horizontal scaling. You can understand the entire deployment by reading one YAML file. There is no distributed state, no control plane, and no etcd to manage. For a solo developer running a personal assistant on a single VM, this is the correct level of complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations and Trade-offs
&lt;/h2&gt;

&lt;p&gt;OOM risk during the initial build. The &lt;code&gt;docker compose build&lt;/code&gt; step runs &lt;code&gt;pnpm install --frozen-lockfile&lt;/code&gt;, which is memory-intensive. On e2-micro instances (1GB RAM), the kernel reliably OOM-kills this process with exit code 137. Even e2-small (2GB RAM) can be tight. The Dockerfile now includes &lt;code&gt;NODE_OPTIONS=--max-old-space-size=2048&lt;/code&gt; as a mitigation, but the safest path is to use e2-medium for the first build, then downgrade.&lt;/p&gt;
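
&lt;p&gt;If resizing the VM isn't an option, temporary swap can also keep the kernel from killing the install step. A minimal sketch (requires root; the 2G size is an assumption, adjust to taste):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Create and enable a 2G swap file, then retry the build&lt;/span&gt;

sudo fallocate &lt;span class="nt"&gt;-l&lt;/span&gt; 2G /swapfile

sudo &lt;span class="nb"&gt;chmod&lt;/span&gt; 600 /swapfile

sudo mkswap /swapfile

sudo swapon /swapfile

docker compose build
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;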

&lt;p&gt;Single point of failure. One VM, one container, one process. If the VM goes down for maintenance or the zone has an outage, your assistant is offline. Docker Compose can restart a crashed container, but it cannot move workloads to another host. For a personal assistant this is usually acceptable. For anything with uptime requirements, it is not.&lt;/p&gt;

&lt;p&gt;SSH tunnel is a manual step. Every time you want to access the Control UI, you must open the SSH tunnel from your terminal. The tunnel also drops when your terminal session closes or your laptop sleeps. There is no built-in mechanism in this deployment pattern to keep the tunnel persistent. Workarounds exist (e.g., &lt;code&gt;~/.ssh/config&lt;/code&gt; with LocalForward, autossh, or a launchd/systemd unit for the tunnel), but the reference guide does not cover them.&lt;/p&gt;

&lt;p&gt;Setup complexity is real. OpenClaw earns a 3.5/5 for "Ease of Setup" in independent reviews, and a 4.2/5 overall rating qualified with "not recommended as a first AI agent tool for non-technical users." The YAML configuration, environment variable layering, and Docker build process require comfort with the terminal.&lt;/p&gt;

&lt;p&gt;Cloud Run is not an option for this workload. A common question: "Why not Cloud Run instead of a VM?" OpenClaw's gateway is a long-running WebSocket daemon that requires persistent local storage. Cloud Run's file system is ephemeral (writes consume instance memory, data is lost on shutdown), and instances are disposable by design. Both criteria disqualify it from hosting the OpenClaw gateway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to Go Next
&lt;/h2&gt;

&lt;p&gt;Once your gateway is running, there are several directions to extend the setup.&lt;/p&gt;

&lt;p&gt;Add messaging channels. The gateway supports WhatsApp (via QR pairing), Telegram (via bot token), Discord, iMessage, and more. The personal assistant setup guide covers the "two-phone setup" for WhatsApp, where a second phone acts as the assistant's dedicated number. Configuration goes in &lt;code&gt;~/.openclaw/openclaw.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="nl"&gt;"channels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"whatsapp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"allowFrom"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"+15555550123"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add &lt;a href="https://app.mem0.ai/?utm_source=mem0_blog&amp;amp;utm_medium=blog_ad&amp;amp;utm_campaign=openclaw_gcp&amp;amp;utm_content=openclaw_gcp" rel="noopener noreferrer"&gt;persistent memory with Mem0&lt;/a&gt;. By default, the agent forgets everything between sessions. The &lt;code&gt;@mem0/openclaw-mem0&lt;/code&gt; plugin watches conversations, extracts relevant context, and stores it for retrieval in future sessions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
openclaw plugins &lt;span class="nb"&gt;install&lt;/span&gt; @mem0/openclaw-mem0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push your image to Artifact Registry. Instead of building on the VM every time, build locally (or in CI), push to GCP Artifact Registry, and pull from the VM. Compute Engine can pull containers directly from Artifact Registry repositories with the right IAM permissions.&lt;/p&gt;
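
&lt;p&gt;The push/pull flow looks roughly like this (the repository name &lt;code&gt;openclaw&lt;/code&gt;, the region, and &lt;code&gt;PROJECT_ID&lt;/code&gt; are placeholders you'd substitute for your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# One-time: let Docker authenticate to the regional registry&lt;/span&gt;

gcloud auth configure-docker us-central1-docker.pkg.dev

&lt;span class="c"&gt;# Build locally (or in CI), tag for Artifact Registry, push&lt;/span&gt;

docker build &lt;span class="nt"&gt;-t&lt;/span&gt; us-central1-docker.pkg.dev/PROJECT_ID/openclaw/openclaw:latest .

docker push us-central1-docker.pkg.dev/PROJECT_ID/openclaw/openclaw:latest

&lt;span class="c"&gt;# On the VM: pull instead of rebuilding&lt;/span&gt;

docker pull us-central1-docker.pkg.dev/PROJECT_ID/openclaw/openclaw:latest
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;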

&lt;p&gt;Ship logs to &lt;a href="https://docs.docker.com/engine/logging/drivers/gcplogs" rel="noopener noreferrer"&gt;Cloud Logging&lt;/a&gt;. Docker's gcplogs logging driver sends container logs directly to Google Cloud Logging without SSH. When running on a GCP VM, it auto-discovers project metadata from the instance metadata service.&lt;/p&gt;
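
&lt;p&gt;Enabling the driver is a small addition to the service definition (a sketch; &lt;code&gt;gcplogs&lt;/code&gt; is Docker's driver name, the rest of the Compose file is unchanged):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
services:
  openclaw-gateway:
    logging:
      driver: gcplogs   # container stdout/stderr flows to Cloud Logging
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;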

&lt;p&gt;Remove the public IP entirely with IAP. For a more hardened setup, create the VM without a public IP and use Identity-Aware Proxy (IAP) TCP forwarding for SSH. IAP verifies your Google identity before allowing the connection, eliminating the attack surface of a public IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
gcloud compute ssh openclaw-gateway &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a &lt;span class="nt"&gt;--tunnel-through-iap&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Consider GKE if you outgrow a single VM. If you need multi-instance availability, automated rollouts, or are running multiple services alongside the gateway, Google Kubernetes Engine is the natural step up from Compute Engine. But for a personal assistant, a single VM is typically sufficient indefinitely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Compute Engine and Docker pattern gives solo developers an always-on AI assistant for $12-25/month in infrastructure costs plus model API spending. The architecture is simple enough to fit in one YAML file, secure by default (loopback binding and SSH tunnel), and resilient to container crashes (Docker restart policy and host-mounted volumes).&lt;/p&gt;

&lt;p&gt;The main trade-offs are the manual SSH tunnel step, the single point of failure inherent to a single VM, and the OOM risk during the initial Docker build on small instances. These are acceptable trade-offs for a personal assistant workload.&lt;/p&gt;

&lt;p&gt;If your needs grow beyond a single VM, GCP offers a clear upgrade path: Artifact Registry for image management, Cloud Logging for observability, IAP for zero-public-IP access, and GKE for multi-node orchestration. But start here. A single e2-small instance running Docker Compose is the simplest path to a private, persistent AI gateway.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why does my Docker build fail with "Killed" and exit code 137?
&lt;/h3&gt;

&lt;p&gt;Exit code 137 means the Linux kernel OOM-killed the build process. This happens most frequently during &lt;code&gt;pnpm install --frozen-lockfile&lt;/code&gt; on VMs with less than 2GB RAM. The fix is to upgrade your VM to at least e2-small (2GB) or e2-medium (4GB) for the initial build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Stop the VM first&lt;/span&gt;

gcloud compute instances stop openclaw-gateway &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a

&lt;span class="c"&gt;# Change machine type&lt;/span&gt;

gcloud compute instances set-machine-type openclaw-gateway &lt;span class="se"&gt;\&lt;/span&gt;

  &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a &lt;span class="se"&gt;\&lt;/span&gt;

  &lt;span class="nt"&gt;--machine-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;e2-small

&lt;span class="c"&gt;# Start the VM&lt;/span&gt;

gcloud compute instances start openclaw-gateway &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-central1-a

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The official OpenClaw Dockerfile now includes &lt;code&gt;NODE_OPTIONS=--max-old-space-size=2048&lt;/code&gt; to cap heap usage during dependency installation, which helps on 2GB machines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is it safe to expose port 18789 publicly?
&lt;/h3&gt;

&lt;p&gt;The recommended configuration keeps the gateway loopback-only (127.0.0.1:18789 in the Docker Compose ports directive) and accesses it via SSH tunnel. If you need public exposure, remove the &lt;code&gt;127.0.0.1:&lt;/code&gt; prefix and configure GCP firewall rules accordingly. The gateway supports token-based authentication (the 64-character hex token in &lt;code&gt;.env&lt;/code&gt;), but the official documentation advises reading the security docs before exposing any port publicly.&lt;/p&gt;
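
&lt;p&gt;If you generate the token yourself, a quick length check before writing it to &lt;code&gt;.env&lt;/code&gt; catches truncated copy-paste (&lt;code&gt;openssl&lt;/code&gt; assumed available):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# 32 random bytes, hex-encoded = 64 characters&lt;/span&gt;

&lt;span class="n"&gt;TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"$(openssl rand -hex 32)"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s"&gt;"${#TOKEN}"&lt;/span&gt;   &lt;span class="c"&gt;# prints 64&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;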

&lt;h3&gt;
  
  
  How do I access the gateway UI without keeping a terminal open?
&lt;/h3&gt;

&lt;p&gt;The reference guide does not cover persistent tunnel configurations. Practical options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Adding a &lt;code&gt;LocalForward 18789 localhost:18789&lt;/code&gt; entry to &lt;code&gt;~/.ssh/config&lt;/code&gt; for the VM&lt;/li&gt;
&lt;li&gt;  Using autossh to maintain a persistent tunnel with automatic reconnection&lt;/li&gt;
&lt;li&gt;  Running the &lt;code&gt;gcloud compute ssh&lt;/code&gt; tunnel command inside a &lt;code&gt;tmux&lt;/code&gt; or &lt;code&gt;screen&lt;/code&gt; session&lt;/li&gt;
&lt;/ul&gt;
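
&lt;p&gt;The &lt;code&gt;~/.ssh/config&lt;/code&gt; route looks like this (the host alias, external IP, and user are placeholders; with this entry, a plain &lt;code&gt;ssh openclaw&lt;/code&gt; brings the forward up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Host openclaw
    HostName VM_EXTERNAL_IP
    User YOUR_SSH_USER
    LocalForward 18789 localhost:18789
    ServerAliveInterval 60
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;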

&lt;p&gt;This is a known ergonomic gap in the current deployment pattern.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the minimum server requirements?
&lt;/h3&gt;

&lt;p&gt;The documented minimum is 2GB RAM (e2-small) for the Docker build, though 4GB (e2-medium) is recommended for reliable first builds. At steady state, the running gateway uses less memory than the build process, so e2-small is generally sufficient for ongoing operation. Software requirements are Node.js 24 (or 22.16+) and Docker, both of which are installed during setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I use Cloud Run instead of Compute Engine?
&lt;/h3&gt;

&lt;p&gt;No. OpenClaw's gateway is a long-running WebSocket daemon that requires persistent local storage. Cloud Run's file system is ephemeral (data is lost when the instance stops), instances are disposable by design, and the platform is optimized for stateless HTTP services. Compute Engine is the correct GCP service for this workload.&lt;/p&gt;

&lt;h3&gt;
  
  
  How much will this cost per month?
&lt;/h3&gt;

&lt;p&gt;VM cost depends on machine type: e2-small runs about $12/month, e2-medium about $25/month. Model API costs are separate and depend on your usage and model choice (typically $0.01-$0.05 per message). The gateway software itself is free and open source. GCP egress costs for SSH tunnel traffic are negligible for personal use. Total cost for a solo developer is typically $15-30/month including moderate model API usage.&lt;/p&gt;
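
&lt;p&gt;As a back-of-envelope check, using an assumed e2-small at ~$12/month and 500 messages at ~$0.03 each (both picked from the ranges above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# VM + model API spend for a moderate-usage month&lt;/span&gt;

python3 -c 'vm = 12.0; api = 500 * 0.03; print(f"~${vm + api:.2f}/month")'   &lt;span class="c"&gt;# ~$27.00/month&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;That lands inside the $15-30/month range quoted above.&lt;/p&gt;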

</description>
      <category>ai</category>
      <category>openclaw</category>
      <category>googlecloud</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Hermes Agent: The AI That Actually Gets Smarter Every Time You Use It</title>
      <dc:creator>Sachin Shah</dc:creator>
      <pubDate>Thu, 09 Apr 2026 14:06:06 +0000</pubDate>
      <link>https://dev.to/imsachinshah/hermes-agent-the-ai-that-actually-gets-smarter-every-time-you-use-it-3k8l</link>
      <guid>https://dev.to/imsachinshah/hermes-agent-the-ai-that-actually-gets-smarter-every-time-you-use-it-3k8l</guid>
      <description>&lt;p&gt;Most AI assistants forget everything the moment a session ends. But &lt;a href="https://hermes-agent.nousresearch.com/" rel="noopener noreferrer"&gt;Hermes Agent&lt;/a&gt; builds a memory of who you are, creates reusable skills from your past conversations, and keeps getting more capable the longer it runs. It's open source, works on a $5 VPS, and you can talk to it from Telegram while it runs quietly in the cloud. This post walks you through what it is, why it's worth paying attention to, and how to get started.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhctyerxke2q70lsa947n.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhctyerxke2q70lsa947n.JPG" alt="Cover Image" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Exactly Is An Agent?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before we go further, let's get one thing straight: there's a real difference between a chatbot and an agent, and it matters here.&lt;/p&gt;

&lt;p&gt;A chatbot answers your question and moves on. Every message exists in isolation. There's no memory of what you asked yesterday, no ability to take actions beyond generating text, and no way for it to get better at helping you specifically over time.&lt;/p&gt;

&lt;p&gt;An agent is different. Think of it like the difference between a search engine and an intern. The search engine returns results. The intern can go look something up, open a file, write a script, test it, fix it when it breaks, and come back to you with a finished result. They know your preferences. They get faster and more useful as time goes on.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Hermes Agent is the intern version.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Is Hermes Agent?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/nousresearch/hermes-agent" rel="noopener noreferrer"&gt;Hermes Agent&lt;/a&gt; is an open-source AI agent built by Nous Research. It is not a chatbot, and it is not a coding tool tied to an IDE. It is a standalone agent with a built-in learning loop that creates skills from experience, builds memory across sessions, and gets more capable the longer you use it. You can deploy it anywhere and talk to it from the messaging apps you already use.&lt;/p&gt;

&lt;p&gt;Let me paint a familiar picture. You spend 20 minutes setting up an AI assistant. You tell it your preferences, your project context, and how you like things done. It's helpful. Then you close the tab. The next day, you come back to a blank slate. You re-explain everything from scratch, like a first date you've already been on. If that loop sounds exhausting, Hermes Agent was built to break it.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;A Built-In Learning Loop&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Most AI tools are frozen: you get the same capability on day one hundred as on day one. Hermes is designed around the opposite idea. Every conversation teaches the agent something. After each session, Hermes reviews what happened and decides what's worth keeping as permanent memory. It also does something more interesting: it creates &lt;em&gt;skills&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Are Skills?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Think of skills like recipes. The first time Hermes works through something complex like setting up a project scaffold or running a multi-step research task, it writes down the steps that worked. It stores those steps as a reusable skill. The next time you ask for something similar, it doesn't start from scratch. It pulls out that recipe and executes it faster, more reliably, and without you having to re-explain the context.&lt;/p&gt;

&lt;p&gt;These skills live outside any single conversation. They compound across sessions. The more you use Hermes, the richer its skill library gets, and because it can improve existing skills during use, the recipes get better over time.&lt;/p&gt;

&lt;p&gt;Hermes also builds a growing model of your preferences, your recurring projects, and your working style. It calls this the memory system, and it's backed by real persistent storage that survives restarts, re-deployments, and long idle periods.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Key Takeaways&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here's something that surprised me about Hermes. Most AI tools require you to have your machine open and running, but Hermes can live on a $5 VPS you never SSH into directly. It can run on serverless infrastructure that costs nearly nothing when idle because it hibernates between conversations. You set it up once, and it stays alive in the cloud doing work whether your laptop is open or not.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Supported environments&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The supported environments include your local machine, Docker containers, remote SSH servers, and two serverless options called &lt;a href="https://www.daytona.io/" rel="noopener noreferrer"&gt;Daytona&lt;/a&gt; and &lt;a href="https://modal.com/" rel="noopener noreferrer"&gt;Modal&lt;/a&gt;. Daytona and Modal are the interesting ones for beginners as they handle all the infrastructure for you, and you only pay for compute when Hermes is actively doing something.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Connecting to platforms&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Since Hermes isn't tied to your laptop, you can reach it from wherever you are. It connects to over 14 messaging platforms, including Telegram, Discord, WhatsApp, Slack, and Signal.&lt;/p&gt;

&lt;p&gt;The Telegram integration is particularly worth setting up early. Once it's running, you can send Hermes a task from your phone while you're away from your desk, and it will execute it on the cloud VM, respond with results, and keep that work in memory for your next session. No need to open a laptop. No need to keep a terminal running.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Built-in tools&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Hermes ships with 47 built-in tools out of the box. These cover things like web search, file reading and writing, code execution, image generation, browser control, and more. You don't configure these individually. They're available by default, and Hermes decides which ones to reach for based on what you ask.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Browser agent&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Hermes Agent can search the web and pull real information rather than guessing from training data alone. It can run code and test it in isolated sandboxes so failures don't affect anything outside the agent. It can browse websites with full vision support, meaning it can look at a page the way a human would, rather than just parsing raw text. It can also spawn sub-agents to handle work in parallel.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;MCP support&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;On top of the built-ins, Hermes supports MCP (Model Context Protocol), which is an open standard for connecting agents to external services. If a tool you want isn't built in, there's a good chance an MCP server exists for it.&lt;/p&gt;
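
&lt;p&gt;MCP servers are typically declared in an agent's config as a command to launch plus its arguments. The exact Hermes schema may differ, so treat this as an illustrative sketch of the common pattern rather than copy-paste configuration (the GitHub server package shown is a real MCP server):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;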

&lt;h2&gt;
  
  
  &lt;strong&gt;Getting Started in 60 Seconds&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Hermes &lt;a href="https://hermes-agent.nousresearch.com/docs/getting-started/installation" rel="noopener noreferrer"&gt;installs&lt;/a&gt; on Linux, macOS, and WSL2. Open your terminal and run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjtvkr0kva0fsgw5zvh7.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjtvkr0kva0fsgw5zvh7.JPG" alt="Hermes Agent Installer Screenshots" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once installed, run the interactive setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
hermes setup

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setup runs automatically on first launch, and it will also detect any existing agent installation on your machine, such as OpenClaw.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfswiz78ozayutp164ct.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfswiz78ozayutp164ct.JPG" alt="Hermes Agent CLI Screenshort" width="800" height="707"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll be prompted to connect an AI model provider on the first run. Hermes works with multiple providers, including OpenRouter (which gives you access to hundreds of models through one API key), OpenAI, and any provider that uses a standard API format.&lt;/p&gt;

&lt;p&gt;After that first setup, you're in. Start talking by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
hermes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;What to Try First&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here are a few good starting points to get a feel for what Hermes actually does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have a real conversation about an ongoing project you're working on.&lt;/li&gt;
&lt;li&gt;Tell the agent your stack, your current challenges, and what you're trying to build. Let it respond and ask follow-up questions. At the end of that session, check what the agent stored in memory. You'll see it pulling out specific facts rather than saving the whole transcript.&lt;/li&gt;
&lt;li&gt;Then start a new session the next day without re-explaining anything. See what the agent already knows about you.&lt;/li&gt;
&lt;li&gt;After a couple of sessions, type something like "what skills have you created so far?" and see what's in its library.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first few skills will feel basic. After a month of regular use, the skill library starts to feel like a real productivity layer.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;One thing worth knowing before you dive in:&lt;/em&gt; Hermes is built by Nous Research, the same team behind several well-regarded open source AI models. The codebase is MIT licensed, which means you can read it, modify it, self-host it, and build on top of it without restrictions. The community around it is active, and the skills are shareable through an open standard called agentskills.io.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Where to Go From Here&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If this post made you curious, the best next step is just getting it running and having a real conversation with it. The documentation at &lt;a href="https://hermes-agent.nousresearch.com/" rel="noopener noreferrer"&gt;hermes-agent.nousresearch.com&lt;/a&gt; covers everything from basic setup to advanced topics like voice mode, scheduled automations, and research pipeline configuration.&lt;/p&gt;

&lt;p&gt;The Discord server and GitHub Discussions are genuinely useful if you hit a wall: the community is active, and the team is responsive.&lt;/p&gt;

&lt;p&gt;Happy to answer questions in the comments if anything here was unclear. What would you want a persistent, self-improving agent to do for you? Drop it below 👇&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>opensource</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
