DEV Community

Yanko Alexandrov

I Built a $399 Always-On AI Assistant Box That Draws Less Power Than a Light Bulb

The Mac Mini costs $599+. A cloud VPS runs $50-100/month. I wanted something in between: dedicated AI hardware that runs 24/7, doesn't spike my electricity bill, and keeps my data local.

So I built the ClawBox.

The Hardware

  • NVIDIA Jetson Orin Nano Super - 67 TOPS of AI compute
  • 512GB NVMe SSD - plenty of local storage
  • 8GB unified LPDDR5 RAM - shared between CPU and GPU
  • Carbon fiber case - lightweight, looks cool, great thermals
  • ~20W total power draw - less than an LED light bulb

What It Does

It runs OpenClaw as an always-on AI gateway. Think of it as a personal AI assistant that:

  • Responds to messages on Telegram, WhatsApp, and Discord
  • Automates web browsers (scraping, form filling, monitoring)
  • Runs cron jobs and proactive checks
  • Manages files, git repos, and SSH connections
  • Monitors emails, calendars, and notifications

The Jetson handles all the tool execution, browser automation, and gateway logic. The actual LLM inference happens via cloud APIs (Claude, GPT), so the 8GB RAM limitation doesn't matter for the AI brain - only for local processing.
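To make the split concrete, here's a minimal sketch of that division of labor: a cloud LLM produces a tool plan, and the box executes it locally. `call_cloud_llm` and the tool names are illustrative stand-ins, not OpenClaw's actual API - a real gateway would POST the message to Claude or GPT and parse the response.

```python
import subprocess

def call_cloud_llm(message: str) -> dict:
    """Stand-in for the cloud API call that returns a tool plan.
    In a real gateway this is where the LLM inference happens."""
    return {"tool": "shell", "args": {"cmd": "echo ok"}}

def run_tool(plan: dict) -> str:
    """Execute the chosen tool locally - the part the Jetson handles."""
    if plan["tool"] == "shell":
        result = subprocess.run(
            plan["args"]["cmd"], shell=True, capture_output=True, text=True
        )
        return result.stdout.strip()
    raise ValueError(f"unknown tool: {plan['tool']}")

plan = call_cloud_llm("Is the box up?")
print(run_tool(plan))  # prints "ok" with the stubbed plan above
```

The point of the pattern: only text goes to the cloud; commands, files, and credentials never leave the box.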

Why Not Just Use a Mac Mini?

I wrote a detailed Mac Mini vs ClawBox comparison if you want the full breakdown. Here's the summary:

|                     | ClawBox        | Mac Mini M4    | Cloud VPS               |
|---------------------|----------------|----------------|-------------------------|
| Price               | $399 one-time  | $599+ one-time | $50-100/month           |
| Power               | ~20W           | ~40-60W        | N/A                     |
| Annual electricity  | ~$20           | ~$50-60        | Included                |
| GPU                 | 67 TOPS        | 38 TOPS        | None                    |
| Privacy             | 100% local     | 100% local     | Cloud provider sees all |
| Setup time          | 5 minutes      | Hours          | Hours                   |

The Mac Mini is a great machine, but it's overkill for an AI gateway. You're paying for desktop-class performance you don't need. The Jetson is purpose-built for AI workloads at a fraction of the power.

The Super Mode Trick

NVIDIA's JetPack 6.2 unlocked "Super" mode on the Orin Nano. This bumps the GPU from 40 to 67 TOPS, runs the CPU at 1.7GHz, and uses the full 25W power envelope. All through a software update - same hardware.

I pre-configure this with jetson_clocks so every clock is maxed out of the box.
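If you're setting this up yourself, it's roughly two commands. Note the power-mode index for the 25W MAXN SUPER profile can differ between JetPack releases, so query the available modes first rather than trusting the index below:

```shell
# List the available power modes (names/indices vary by JetPack release)
sudo nvpmodel -q --verbose

# Select the 25W MAXN SUPER profile - the "2" here is an assumption;
# confirm it against the list printed by the query above
sudo nvpmodel -m 2

# Pin CPU/GPU/memory clocks to their maximums (resets on reboot unless
# you wire it into a startup service)
sudo jetson_clocks
```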

Power Efficiency

This is what makes it a perfect low-power AI server. At ~20W:

  • Annual electricity cost: ~$20
  • Runs silently (fanless at idle)
  • No heat issues
  • UPS-friendly (a small battery backup lasts hours)

Compare that to running a desktop PC 24/7 at 200-400W. That's $200-400/year just in electricity.
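The arithmetic behind those numbers, assuming a US-average rate of about $0.12/kWh (your rate will vary, and the 300Wh UPS capacity is an illustrative figure for a small consumer unit):

```python
RATE_PER_KWH = 0.12        # assumed electricity rate, $/kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    """Cost of running a device 24/7 for a year at the assumed rate."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_PER_KWH

print(f"Jetson  @ 20W:  ${annual_cost(20):.0f}/year")    # ~$21
print(f"Desktop @ 300W: ${annual_cost(300):.0f}/year")   # ~$315

# UPS runtime is just capacity over draw:
BATTERY_WH = 300  # a small consumer UPS, illustrative
print(f"Runtime on a {BATTERY_WH}Wh UPS: {BATTERY_WH / 20:.0f} hours")  # ~15
```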

Getting Started

Check the complete hardware requirements guide first. If you want to build something similar:

  1. Get a Jetson Orin Nano Developer Kit
  2. Flash JetPack 6.2
  3. Install OpenClaw (curl -fsSL https://get.openclaw.ai | bash)
  4. Connect your messaging platforms
  5. Done. Your AI assistant is running.

Or check out the pre-configured version at openclawhardware.dev - it ships ready to go. There's also a step-by-step setup guide if you want to see exactly what the experience looks like.

Raspberry Pi Alternative?

A lot of people ask about using a Raspberry Pi instead. I wrote a full comparison here — TL;DR: the Pi works for basic cloud-only setups, but the Jetson's 67 TOPS GPU makes it a different beast for anything involving local inference or heavy browser automation.

You can also check out the cloud AI vs dedicated hardware comparison if you're deciding between a VPS and owning hardware.

What's Next

I'm exploring:

  • Local model inference for simple tasks (Phi-3, TinyLlama)
  • Voice assistant integration
  • Home automation hooks
  • Multi-device clustering
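For the local-inference idea, the plan is routing: short, simple prompts go to a small model on the Jetson GPU, everything else falls through to the cloud. A hedged sketch of that heuristic - the keyword list and length threshold are placeholders, not tuned values, and the model names are just examples:

```python
# Hypothetical router between local and cloud inference.
SIMPLE_KEYWORDS = {"summarize", "translate", "classify", "extract"}

def choose_backend(prompt: str, max_local_len: int = 200) -> str:
    """Route short, single-verb tasks locally; defer the rest to the cloud."""
    words = set(prompt.lower().split())
    if len(prompt) <= max_local_len and SIMPLE_KEYWORDS & words:
        return "local"   # e.g. Phi-3 / TinyLlama on the Jetson
    return "cloud"       # e.g. Claude or GPT via API

print(choose_backend("Summarize this note: meeting moved to 3pm"))       # local
print(choose_backend("Plan a multi-step browser automation for my bank"))  # cloud
```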

Happy to answer questions about the build, OpenClaw setup, or Jetson performance. What does your always-on AI setup look like?


If you want to skip the DIY route, I'm selling pre-built units at openclawhardware.dev. Use code **DEVTO10** at checkout for 10% off.
