Prateek YJ

We just open-sourced XAI’s Macrohard, an autonomous computer-using agent

Ever imagined an AI that could actually use your computer: open apps, type, click, deploy virtual machines, and run workflows safely and autonomously?

We’re open-sourcing Open Computer Use, a fully transparent, open-stack system for autonomous computer control.


🚀 What it does

Open Computer Use lets AI agents go beyond APIs. They can:

  • Deploy and manage virtual machines (Docker or full VMs)
  • Execute CLI commands or control desktops and browsers
  • Automate software installs, builds, and tests
  • Stream logs, screenshots, and progress in real time
  • Run in sandboxed, permission-based environments for safety
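To make "sandboxed, permission-based" concrete, here's a minimal sketch of what allowlist-gated CLI execution could look like. This is an illustration, not the project's actual API: the `run_command` helper and `ALLOWED_COMMANDS` set are hypothetical.

```python
import shlex
import subprocess

# Hypothetical allowlist: executables the agent may invoke without extra approval.
ALLOWED_COMMANDS = {"echo", "ls", "cat", "uname"}

def run_command(command: str, timeout: float = 10.0) -> str:
    """Run a CLI command only if its executable is on the allowlist."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {command!r}")
    # Bounded execution: capture output, enforce a timeout, fail on nonzero exit.
    result = subprocess.run(
        argv, capture_output=True, text=True, timeout=timeout, check=True
    )
    return result.stdout
```

A denied command raises before anything touches the host, which is the essential property a permission gate needs.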

Every layer (frontend, backend, orchestration, sandbox, and agents) is open source.

🧩 Repo: github.com/LLmHub-dev/open-computer-use


💡 Why this matters

Most “AI agents” today stop at the text layer: they talk about what they would do.
We wanted something that can actually do it.

Think of it like XAI’s Macrohard, but:

  • 100% open-source
  • Self-hostable and transparent
  • Sandbox-safe
  • Built with a modular architecture anyone can extend

We’re releasing it so devs, researchers, and companies can run, study, and improve autonomous computer agents safely without depending on closed systems.


⚙️ How to try it

```shell
git clone https://github.com/LLmHub-dev/open-computer-use.git
cd open-computer-use
docker compose up
```

Then launch the web interface → create an agent session → watch it deploy a VM, run commands, and stream live feedback.
You can even write your own plugins to extend its capabilities.
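The post doesn't spell out the plugin API, so here is one plausible shape a plugin registry could take. Names like `register_plugin` and `dispatch` are illustrative assumptions, not taken from the repo:

```python
from typing import Callable, Dict

# Hypothetical plugin registry: maps a capability name to a handler function.
PLUGINS: Dict[str, Callable[..., str]] = {}

def register_plugin(name: str):
    """Decorator that registers a handler under a capability name."""
    def decorator(func: Callable[..., str]) -> Callable[..., str]:
        PLUGINS[name] = func
        return func
    return decorator

@register_plugin("screenshot")
def take_screenshot(display: str = ":0") -> str:
    # A real plugin would capture the framebuffer; this stub just reports.
    return f"captured display {display}"

def dispatch(name: str, **kwargs) -> str:
    """Route an agent action to the matching plugin."""
    if name not in PLUGINS:
        raise KeyError(f"no plugin registered for {name!r}")
    return PLUGINS[name](**kwargs)
```

A registry like this keeps the agent core ignorant of individual capabilities, which is what makes third-party extension possible.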


🧠 Under the hood

  • Frontend: Next.js + Tailwind
  • Backend: FastAPI + Python
  • Orchestration: Docker + sandboxed VMs
  • Agent core: modular planners + multi-process action engine
  • Safety: permission gating, audit logs, container isolation
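The "modular planners + action engine" split above might look roughly like this toy sketch. The class names and interfaces are assumptions for illustration; the repo's actual abstractions will differ, and a real planner would call an LLM rather than return canned steps:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    """One concrete step the engine knows how to execute."""
    name: str
    args: dict

class Planner:
    """Turns a high-level goal into a list of actions (canned here for brevity)."""
    def plan(self, goal: str) -> List[Action]:
        return [
            Action("log", {"message": f"starting: {goal}"}),
            Action("log", {"message": "done"}),
        ]

class ActionEngine:
    """Executes planned actions via a pluggable handler table."""
    def __init__(self, handlers: Dict[str, Callable[..., str]]):
        self.handlers = handlers

    def execute(self, actions: List[Action]) -> List[str]:
        return [self.handlers[a.name](**a.args) for a in actions]

engine = ActionEngine({"log": lambda message: message})
results = engine.execute(Planner().plan("install dependencies"))
```

Separating planning from execution is what lets you swap planners (or LLMs) without touching the handlers that actually drive the machine.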

It’s built for scalability, so you can run many agents concurrently or integrate your own LLM router.


🔒 Safety first

This kind of agent is powerful, so we’ve made security a first-class feature:

  • Runs in sandboxed environments
  • Requires explicit permission for file/system access
  • Full audit trail of every action
  • No network or credential sharing unless explicitly allowed
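Combining permission gating with an audit trail can be as simple as wrapping every action in a gate. The decorator below is an illustrative sketch, not the project's implementation; `GRANTED_PERMISSIONS` stands in for whatever the user explicitly approved:

```python
import datetime
from typing import Callable, List

AUDIT_LOG: List[dict] = []
# Hypothetical: permissions the user granted up front.
GRANTED_PERMISSIONS = {"read_file"}

def gated(permission: str):
    """Decorator: audit every attempt, and refuse ungranted permissions."""
    def decorator(func: Callable) -> Callable:
        def wrapper(*args, **kwargs):
            allowed = permission in GRANTED_PERMISSIONS
            AUDIT_LOG.append({
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "action": func.__name__,
                "permission": permission,
                "allowed": allowed,
            })
            if not allowed:
                raise PermissionError(f"permission {permission!r} not granted")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@gated("read_file")
def read_config() -> str:
    return "config contents"

@gated("write_file")
def overwrite_config() -> str:
    return "wrote"
```

Note that denied attempts are logged too: an audit trail of every action, not just the successful ones, is what makes post-hoc review trustworthy.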

Transparency builds trust; that’s why every component is open.


🌍 What’s next

We’re working on:

  • Multi-VM orchestration
  • Windows/macOS support
  • Plugin marketplace
  • Custom LLM routing support via LLmHUB

If you want to build or contribute, we’d love your help: check out the repo and open a PR!


❤️ Join us

This is an open project built for the developer community.
If you find it exciting, star the repo, share feedback, or build your own extensions.

👉 https://github.com/LLmHub-dev/open-computer-use
