Alex Chen

How I Built a 24/7 Automation Bot That Runs on a $5 VPS

No cloud functions. No serverless complexity. Just a Node.js process that's been running for 6 months straight.

The Problem

Like many developers, I had a list of repetitive tasks that needed doing every day:

  • Check GitHub repos for new issues I could contribute to
  • Monitor my pull requests for review comments
  • Send daily status reports to my team
  • Scan websites for price changes or content updates
  • Push notifications when something important happened

I tried everything: cron jobs on my laptop (only works when it's on), GitHub Actions (great but limited to repo events), Zapier (expensive at scale), and even scheduled Lambda functions (overkill for simple tasks).

What I really wanted was a persistent background worker that could:

  1. Run 24/7 without me thinking about it
  2. Interact with web pages like a real browser
  3. Send messages to multiple platforms (Slack, Discord, email)
  4. Execute code and shell commands
  5. Cost less than a cup of coffee per month

Here's how I built it.

The Stack

┌─────────────────────────────────┐
│         $5 VPS (Ubuntu)         │
│                                 │
│  ┌───────────┐  ┌────────────┐  │
│  │ Node.js   │  │ Playwright │  │
│  │ Process   │─▶│ (headless  │  │
│  │ (Gateway) │  │  browser)  │  │
│  └─────┬─────┘  └────────────┘  │
│        │                        │
│  ┌─────▼──────────────────────┐ │
│  │ Cron Scheduler             │ │
│  │ (every 5min / 30min / 1h)  │ │
│  └─────┬──────────────────────┘ │
│        │                        │
│  ┌─────▼──────────────────────┐ │
│  │ Message Routing            │ │
│  │ → Feishu / Telegram / Slack│ │
│  └────────────────────────────┘ │
└─────────────────────────────────┘

Total cost: ~$5/month (Vultr/DigitalOcean/LightSail basic tier)

Step 1: The Core Loop

The heart of the system is a simple scheduler:

const cron = require('node-cron');

// Run every 5 minutes — check for urgent stuff
cron.schedule('*/5 * * * *', async () => {
  await checkUrgentAlerts();
});

// Run every 30 minutes — routine monitoring  
cron.schedule('*/30 * * * *', async () => {
  await monitorGitHubPRs();
  await scanForNewTasks();
});

// Run every hour — reports and summaries
cron.schedule('0 * * * *', async () => {
  await sendHourlyDigest();
});

Nothing fancy. node-cron is battle-tested, lightweight, and does exactly what it says.

Pro tip: Use fs.writeFileSync to persist your last-run timestamp. If your process crashes and restarts, you don't want to re-fire jobs that already ran:

const fs = require('fs');

function shouldRun(jobName, intervalMinutes) {
  const stateFile = `./cron-state/${jobName}.json`;
  const now = Date.now();

  fs.mkdirSync('./cron-state', { recursive: true }); // make sure the state dir exists

  try {
    const { lastRun } = JSON.parse(fs.readFileSync(stateFile));
    if (now - lastRun < intervalMinutes * 60 * 1000) return false;
  } catch { /* first run */ }

  fs.writeFileSync(stateFile, JSON.stringify({ lastRun: now }));
  return true;
}

Step 2: Browser Automation

The real power comes from browser automation. I use Playwright (though Puppeteer works too):

const { chromium } = require('playwright');

async function checkWebsite(url, selector) {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto(url, { waitUntil: 'networkidle' });

  // Check if a specific element exists (price drop? new item?)
  const element = await page.$(selector);
  const found = !!element;

  if (found) {
    const text = await element.innerText();
    await notify(`Found match on ${url}: ${text}`);
  }

  await browser.close();
  return found;
}

Why not just use fetch? Because half the modern web is JavaScript-rendered. If you need to click buttons, fill forms, or wait for lazy-loaded content, you need a real browser.

Memory tip: Always await browser.close(). A leaked browser instance eats ~100-200MB of RAM. On a $5 VPS with 1GB of RAM, that adds up fast. I learned this the hard way after crashing my server 3 times in the first week.
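One way to make "always close the browser" a structural guarantee rather than a discipline is a try/finally wrapper. This is a hypothetical helper (the name `withBrowser` and its signature are my own, not part of Playwright); it works with anything exposing `launch()` that resolves to an object with `close()`:

```javascript
// Sketch: run a task against a freshly launched browser and guarantee
// cleanup even when the task throws. `launcher` would be e.g. Playwright's
// `chromium`; `task` receives the live browser instance.
async function withBrowser(launcher, task) {
  const browser = await launcher.launch({ headless: true });
  try {
    return await task(browser);
  } finally {
    await browser.close(); // runs on success AND on error — no leaked instances
  }
}
```

With this in place, `checkWebsite` collapses to `withBrowser(chromium, async (browser) => { ... })` and a thrown navigation error can no longer strand a 200MB Chromium process.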

Step 3: Multi-Platform Messaging

The bot needs to talk to humans. Here's my lightweight approach using webhooks:

async function sendFeishuMessage(webhookUrl, text) {
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      msg_type: "text",
      content: { text }
    })
  });
}

async function sendTelegramMessage(botToken, chatId, text) {
  await fetch(
    `https://api.telegram.org/bot${botToken}/sendMessage`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        chat_id: chatId,
        text,
        parse_mode: 'Markdown'
      })
    }
  );
}

Each platform has its own webhook/API pattern, but they're all just HTTP POST requests under the hood.
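The `notify()` called throughout the earlier snippets isn't shown above; one minimal sketch of it (the registry and function names here are assumptions, not any library's API) is a fan-out router over the per-platform senders:

```javascript
// Sketch of a notify() router: register each platform sender once, then
// fan every message out to all of them in parallel. One failing channel
// must not block the others, hence Promise.allSettled.
const channels = [];

function registerChannel(name, send) {
  channels.push({ name, send }); // send: async (text) => void
}

async function notify(text) {
  const results = await Promise.allSettled(channels.map(c => c.send(text)));
  // Return per-channel delivery status so failures can be logged
  return results.map((r, i) => ({
    channel: channels[i].name,
    ok: r.status === 'fulfilled'
  }));
}
```

Wiring it up is one line per platform, e.g. `registerChannel('telegram', (text) => sendTelegramMessage(token, chatId, text))`.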

Step 4: GitHub Integration

Since I do open source work, monitoring PRs is critical:

async function monitorPRs() {
  // Using GitHub REST API (no library needed)
  const response = await fetch(
    'https://api.github.com/repos/:owner/:repo/pulls?state=open&per_page=30',
    { 
      headers: { 
        'Authorization': `token ${process.env.GITHUB_TOKEN}`,
        'User-Agent': 'automation-bot'
      }
    }
  );

  const prs = await response.json();

  for (const pr of prs) {
    const lastCheck = getLastCheckTime(pr.id);

    // Check for new comments/reviews since last check
    const comments = await fetch(pr.comments_url, {
      headers: { 'Authorization': `token ${process.env.GITHUB_TOKEN}` }
    }).then(r => r.json());

    const newComments = comments.filter(c => 
      new Date(c.created_at) > new Date(lastCheck)
    );

    if (newComments.length > 0) {
      await notify(`PR #${pr.number} has ${newComments.length} new comment(s)!`);
    }

    updateCheckTime(pr.id);
  }
}

Rate limit warning: GitHub allows 5,000 authenticated requests/hour. Checking 30 PRs every 30 minutes costs about 31 requests per run (one list call plus one comments call per PR), or roughly 62 requests/hour — comfortably under the limit unless you're doing something much bigger.

Step 5: Making It Resilient

A 24/7 process WILL crash. Here's my survival kit:

Auto-restart with systemd

# /etc/systemd/system/automation-bot.service
[Unit]
Description=Automation Bot
After=network.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/bot
ExecStart=/usr/bin/node index.js
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
sudo systemctl enable automation-bot
sudo systemctl start automation-bot
# Now it auto-restarts on crash!

Health Check Script

#!/bin/bash
# healthcheck.sh — run via system cron every 5 min
if ! pgrep -f "node index.js" > /dev/null; then
  echo "$(date): Bot is DOWN! Restarting..." >> /var/log/bot-health.log
  sudo systemctl restart automation-bot
fi

# Also check: is it actually responding?
curl -sf http://localhost:3000/health || {
  echo "$(date): Bot not responding!" >> /var/log/bot-health.log
  sudo systemctl restart automation-bot
}

Memory Watchdog

#!/bin/bash
# Check memory usage, restart if > 80%
MEM_PCT=$(free | awk '/Mem/{printf "%.0f", $3/$2*100}')
if [ "$MEM_PCT" -gt 80 ]; then
  echo "$(date): Memory at ${MEM_PCT}%! Restarting..." >> /var/log/bot-health.log
  sudo systemctl restart automation-bot
fi

What This Bot Actually Does For Me

After 6 months of running, here's what it handles automatically:

| Task                       | Frequency     | Time saved/week |
|----------------------------|---------------|-----------------|
| PR monitoring (25+ repos)  | Every 5 min   | ~7 hours        |
| Daily status report        | Every 3 hours | ~3 hours        |
| Website change detection   | Every 30 min  | ~5 hours        |
| GitHub bounty scanning     | Every 3 hours | ~8 hours        |
| Notification routing       | Real-time     | ~4 hours        |
| **Total**                  |               | ~27 hours/week  |

That's basically a full-time job's worth of work, automated for $5/month.

Things I Learned the Hard Way

  1. Log everything. When something breaks at 3 AM, you need to know WHAT was happening, not just THAT it broke. Structured JSON logs > console.log.

  2. Don't poll too aggressively. Every API has rate limits. Start conservative (every 5-15 min), speed up only when you need to.

  3. Graceful shutdown. Handle SIGTERM so you can save state before dying:

   process.on('SIGTERM', async () => {
     console.log('Shutting down gracefully...');
     await saveState();
     process.exit(0);
   });
  4. Secrets belong in environment variables. Never hardcode tokens. Use a .env file (gitignored) or a secrets manager.

  5. Start small. My first version did ONE thing: check one GitHub repo every hour. Once that worked reliably for a week, I added another task. Rinse and repeat.
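On lesson 1 (structured logs over console.log), the whole idea fits in a few lines. A minimal sketch — field names here are my own convention, not a standard:

```javascript
// Sketch of a structured JSON logger: one JSON object per line, so the
// 3 AM debugging session is `grep` and `jq`, not squinting at free text.
function log(level, msg, extra = {}) {
  const entry = { ts: new Date().toISOString(), level, msg, ...extra };
  console.log(JSON.stringify(entry));
  return entry;
}
```

Usage looks like `log('error', 'PR check failed', { job: 'monitorPRs', pr: 42 })`, and because every line is valid JSON, `journalctl -u automation-bot | grep '"level":"error"'` finds exactly what was happening when things broke.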

The Bigger Picture

This isn't just about saving time. It's about building a personal automation infrastructure that grows with you.

Once you have a 24/7 bot that can browse the web, call APIs, and send messages, you can layer on top of it:

  • Content generation (auto-publish blog posts)
  • Price monitoring (track products across sites)
  • Competitor analysis (scrape and compare)
  • Customer support (auto-respond to common queries)
  • Data pipelines (collect → transform → store)

The $5 VPS is just the beginning.


What are you automating? Drop a comment — I'd love to hear what other developers are building.

If you found this useful, follow me on DEV for more posts about practical automation and building things that pay for themselves.
