Maitrish Mukherjee

How I Built a Personal AI Agent That Runs on My Google Cloud VM — And Powers My Entire Portfolio


A deep dive into OpenClaw, serverless integrations, and the architecture behind a site that writes its own content


The Setup Nobody Talks About

Most people building "AI-powered" portfolios are using third-party SaaS tools glued together with Zapier and hope. That's fine for prototyping. But there's a point where you outgrow that — where you want the agent to actually do things on your behalf, across multiple platforms, on your own infrastructure.

So I built one. It's been running for a week now. Here's what actually went into it.


What Is OpenClaw, Really?

OpenClaw is an AI agent framework that runs as a persistent service. Not a chatbot you open in a tab — a background process you can talk to from Telegram, any chat platform, or directly via API. It has memory between sessions, tools you can extend via skills, and it's designed to live on a server somewhere.

The core loop:

  1. You send a message (Telegram, webhook, etc.)
  2. OpenClaw routes it to the right skill/tool
  3. The agent acts — reads APIs, writes files, calls webhooks
  4. You get a response back

It's closer to a personal API layer with a conversational face than a chatbot.
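The four-step loop above can be sketched in a few lines. This is not OpenClaw's actual API — just a hypothetical illustration of the dispatch pattern: match the incoming message to a skill, run it, return the result. A real agent would use the LLM itself for routing rather than regexes.

```javascript
// Hypothetical skill registry — names and replies are illustrative.
const skills = {
  "portfolio.update": async () => "Rebuilt site with latest repos",
  "article.publish": async () => "Draft queued for review",
};

// Naive intent matching; stands in for the agent's LLM-based routing.
function routeMessage(text) {
  if (/portfolio/i.test(text)) return "portfolio.update";
  if (/publish/i.test(text)) return "article.publish";
  return null;
}

// The core loop: message in, skill runs, response out.
async function handleMessage(text) {
  const skillName = routeMessage(text);
  if (!skillName) return "No matching skill";
  return skills[skillName]();
}
```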


The Stack

Layer           | Technology
--------------- | -----------------------------------
Agent framework | OpenClaw
Hosting         | Google Cloud VM (f1-micro, Debian)
Public access   | Cloudflare tunnel (no static IP)
Interface       | Telegram bot
Portfolio       | Next.js + Framer Motion
CMS / Content   | Dev.to, GitHub, Notion

Total cost to run this: $0/month on GCP free tier. Cloudflare tunnel is free. OpenClaw has a free tier.


Step 1: The VM Setup

I spun up an f1-micro instance on Google Cloud Platform. Debian 12, 10GB SSD. Free tier eligible — $0/month.

# Install nvm and Node.js 20 LTS
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
nvm install 20 && nvm use 20

# Install OpenClaw
npm install -g openclaw
openclaw init

The tricky part nobody warns you about: OpenClaw's gateway runs on a local port by default. To make it accessible from outside your network (without a static IP), you use a Cloudflare tunnel.

# Download cloudflared (sudo needed to write into /usr/local/bin)
sudo curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64 \
  -o /usr/local/bin/cloudflared
sudo chmod +x /usr/local/bin/cloudflared

# Create a tunnel to your OpenClaw gateway port (18789 is default)
cloudflared tunnel --url http://localhost:18789

Cloudflare gives you a trycloudflare.com URL. Don't bother bookmarking it — it changes each time you restart the tunnel. For production you'd set up a named tunnel with a real domain, but for personal use the quick tunnel works fine.


Step 2: The Telegram Bot

Create a bot via BotFather in Telegram. Get the API token. Then in your OpenClaw config:

openclaw config set telegram.bot_token "your_bot_token_from_botfather"

Now you can DM your bot and talk to your agent from anywhere in the world. This sounds simple but it's actually powerful — your agent is reachable from any device with Telegram, which is basically every device you own.

The pattern I use most:

me: update my portfolio with latest github repos
agent: [runs fetch script, rebuilds site, confirms update]

me: publish this article to dev.to
agent: [drafts, asks for review, publishes on approval]

Step 3: The Integrations — Dev.to, GitHub, Notion

This is where it gets interesting. The agent can actually do things on your behalf, not just answer questions.

Dev.to

I wrote three scripts:

  • devto-fetch.js — pull all articles from your Dev.to account
  • devto-publish.js — publish a new article
  • devto-stats.js — get reactions, views, read time

// Fetch articles from Dev.to
const response = await fetch(`https://dev.to/api/articles?username=maitrish&per_page=20`, {
  headers: { "api-key": process.env.DEVTO_API_KEY }
});
const articles = await response.json();

The key thing I learned: Dev.to's publish endpoint expects the payload wrapped in an article key:

// Correct
{ article: { title, body_markdown, tags, published: true } }

// Will fail with "NilClass" error
{ title, body_markdown, tags, published: true }

Also: Dev.to allows at most four tags per article. I learned this the hard way.
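Putting both gotchas together, a publish helper might look like this. The endpoint and the `article` wrapper match Dev.to's API; the helper names are my own sketch, not the exact scripts described above.

```javascript
// Build a Dev.to publish payload: the body MUST be wrapped in an
// `article` key, and Dev.to caps articles at four tags.
function buildArticlePayload(title, bodyMarkdown, tags) {
  if (tags.length > 4) {
    throw new Error("Dev.to allows at most 4 tags per article");
  }
  return { article: { title, body_markdown: bodyMarkdown, tags, published: true } };
}

// POST the payload with the account's API key.
async function publishToDevto(payload) {
  const res = await fetch("https://dev.to/api/articles", {
    method: "POST",
    headers: {
      "api-key": process.env.DEVTO_API_KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Dev.to publish failed: ${res.status}`);
  return res.json();
}
```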

GitHub

The agent reads my GitHub profile and repo list via the REST API:

# What the agent fetches (quote the URL, or the & backgrounds the command)
curl -H "Authorization: token $GITHUB_TOKEN" \
  "https://api.github.com/users/maitrish1/repos?sort=pushed&per_page=30"

My GitHub PAT has repo + read:user scopes. The agent uses this to:

  • Pull repo names and descriptions
  • Update the portfolio's "Projects" section
  • Create new repos when I ask it to scaffold a project

One practical insight: GitHub repos without descriptions look dead even when they're not. Every repo on my profile that had no description got updated by the agent with a one-line summary. Takes 30 seconds, makes the profile look 10x more credible.
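That "no repo left without a description" pass can be sketched as below. `PATCH /repos/{owner}/{repo}` with a `description` field is the real GitHub REST call (it needs the `repo` scope); the `summarize` callback is a stand-in for the agent's one-line summary generation.

```javascript
// Find repos with empty descriptions and patch in a generated summary.
async function fillMissingDescriptions(owner, repos, summarize) {
  const updated = [];
  for (const repo of repos) {
    if (repo.description) continue; // already has one, skip
    const description = summarize(repo);
    await fetch(`https://api.github.com/repos/${owner}/${repo.name}`, {
      method: "PATCH",
      headers: {
        Authorization: `token ${process.env.GITHUB_TOKEN}`,
        Accept: "application/vnd.github+json",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ description }),
    });
    updated.push(repo.name);
  }
  return updated; // names of repos that got a new description
}
```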

Notion

Notion is the CMS layer. I write article drafts in Notion, share them with the integration, and the agent fetches and publishes them. The flow:

Notion (draft page) → Agent fetches via Notion API → Dev.to publish API → Live article

The Notion API requires sharing pages explicitly with the integration — it won't see pages you don't add it to. This is good security, annoying to set up.

// Search for pages
fetch('https://api.notion.com/v1/search', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${NOTION_TOKEN}`,
    'Notion-Version': '2022-06-28',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ query: 'article draft', filter: { property: 'object', value: 'page' } })
})
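Once a draft page is found, its content still has to be pulled and flattened into markdown for Dev.to. A minimal sketch, using the real `blocks/{id}/children` endpoint but handling only paragraphs and headings — real Notion pages have many more block types:

```javascript
// Fetch a page's blocks and flatten them to markdown.
async function pageToMarkdown(pageId) {
  const res = await fetch(`https://api.notion.com/v1/blocks/${pageId}/children`, {
    headers: {
      Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
      "Notion-Version": "2022-06-28",
    },
  });
  const { results } = await res.json();
  return results.map(blockToMarkdown).filter(Boolean).join("\n\n");
}

// Convert a single block; unsupported types are dropped in this sketch.
function blockToMarkdown(block) {
  const text = (block[block.type]?.rich_text ?? [])
    .map((t) => t.plain_text)
    .join("");
  if (block.type === "heading_1") return `# ${text}`;
  if (block.type === "heading_2") return `## ${text}`;
  if (block.type === "paragraph") return text;
  return null;
}
```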

Step 4: The Portfolio

The portfolio sits at the end of this chain. It's a Next.js 14 site with:

  • Static export (output: "export") — serves from a simple out/ directory
  • Framer Motion for animations — glitch text, typewriter effect, scroll reveals
  • Data pulled at build time from the APIs

Dev.to articles ──fetch──→ siteConfig ──build──→ Portfolio live at cloudflare URL
GitHub repos    ──fetch──→ siteConfig ──build──→ Same build step

The key architectural decision: the portfolio is static. No backend, no database. Everything is fetched at build time. This means:

  • It loads fast (pure static HTML + JS)
  • It works even if the APIs are down
  • The Cloudflare tunnel only needs to be running during the build, not for visitors

The downside: articles and repos only update when I rebuild. I'm planning to add ISR (incremental static regeneration) so it updates automatically every few hours without a full rebuild.
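For reference, the ISR version might look roughly like this (a pages-router sketch; the fetch URL is the same one used elsewhere in this post). Note that ISR needs a Node server or a platform like Vercel, so the `output: "export"` static setup would have to go.

```javascript
// Sketch: pages/index.js with ISR instead of a pure static export.
// `revalidate` tells Next.js to re-run this fetch in the background
// at most once every 3 hours.
export async function getStaticProps() {
  const res = await fetch("https://dev.to/api/articles?username=maitrish&per_page=20");
  const articles = await res.json();
  return {
    props: { articles },
    revalidate: 60 * 60 * 3, // seconds
  };
}
```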


The Part Nobody Talks About: Security

When you're running an agent with API access to your GitHub, Dev.to, and Notion, security matters. Here's what I did:

  1. Scoped tokens: GitHub PAT has only repo and read:user, not full account access. Dev.to API key is read/write for articles only. Notion token can only see pages shared with it.

  2. Environment variables, not secrets in code: All API keys live in a ~/.env file with chmod 600. Never committed to git.

  3. Cloudflare tunnel is short-lived: The quick tunnel URL changes every restart. For anything permanent, I'd use a named tunnel with proper authentication.

  4. The agent has memory, but limited context: It remembers who I am and what we're working on, but the context window isn't infinite. I structured the memory system with daily log files and a long-term MEMORY.md so context stays relevant.


What I'd Do Differently

1. Start with ISR from day one. Static export is great but the "rebuild to update" cycle is annoying. Next.js ISR would have been a better default.

2. Use a named Cloudflare tunnel, not a quick tunnel. Quick tunnel URLs are random and change on restart. A named tunnel with a fixed subdomain would mean the portfolio URL never changes.

3. Set up proper log rotation. pm2 (which keeps the agent process alive) accumulates logs. After a week of running, the log files are already taking up space. pm2 install pm2-logrotate would have been worth doing on day one.

4. The Telegram bot should have been configured with a command list via BotFather's /setcommands from the start. Rich keyboard commands would make the agent easier to interact with on mobile.
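Registering the menu can also be done programmatically via the Bot API's setMyCommands method. The command list below is illustrative:

```javascript
// Commands the bot should expose as a tappable menu on mobile.
const COMMANDS = [
  { command: "update_portfolio", description: "Rebuild the site with latest data" },
  { command: "publish", description: "Publish a drafted article to Dev.to" },
  { command: "stats", description: "Show Dev.to reactions and views" },
];

// One-time call to Telegram's setMyCommands endpoint.
async function registerCommands(botToken) {
  const res = await fetch(`https://api.telegram.org/bot${botToken}/setMyCommands`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ commands: COMMANDS }),
  });
  return res.json();
}
```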


The Result

A portfolio that:

  • Updates automatically when I push to GitHub or publish on Dev.to
  • Can be managed entirely from Telegram
  • Runs on a VM that costs $0/month
  • Has an agent that can write and publish articles on my behalf

The interesting part isn't the tech — it's that this is a system that runs itself. I check Telegram in the morning, the agent has been running job scans, updating the portfolio, and managing content while I slept. That's the shift from "using AI" to "having an AI-powered system."


This article was researched, drafted, and published with the system described above. The agent wrote the initial draft; I reviewed and refined it before publishing.
