A client was paying $200 a month for Manus AI to run six automation skills: CRM management, web scraping, property lookups, AI avatar videos, meeting transcripts and investment reports. The skills worked, but the subscription added up and the platform locked him into its infrastructure.
We moved everything to a self-hosted OpenClaw instance on a Hostinger VPS. Pay-per-use API costs instead of a flat monthly fee. The migration took a day. Most of that day was spent on two bugs that had nothing to do with the skills themselves.
## The setup
The client runs a real estate investing and marketing business. His stack centred on GoHighLevel for CRM, Apify for web scraping, HeyGen for avatar videos, Melissa Data and Rentcast for property intelligence, and a custom meeting processor for Zoom calls. All six of these ran as skills inside Manus AI.
The target was a Hostinger VPS running Ubuntu 24.04 with OpenClaw deployed in Docker. The model backend switched to OpenRouter, which meant he'd pay per token instead of a flat subscription. For the volume he was running, that's a significant saving.
The six skills we migrated:
- apify-actor-finder: scrapes any site (Google Maps, Instagram, LinkedIn) and returns CSV
- gohighlevel-api: full CRM control, contacts, opportunities, SMS, appointments, workflows
- heygen-avatar-video: generates AI avatar videos from a text script
- rei-ai-zoom-processor: turns meeting transcripts into structured summaries with PDF output
- melissa-data-information: property ownership lookups
- rentcast-property-report: property value, rent estimates and market stats
Each skill had its own Python scripts, API keys and reference docs. The skill definitions (SKILL.md files) translated cleanly to OpenClaw's format. The scripts needed minor patching, not rewrites.
## What actually worked
Model selection mattered more than I expected. OpenClaw's default model was Kimi K2.5, which is free tier on OpenRouter. It handles chat fine but does not reliably execute tool calls, which is exactly what skill scripts need. Every skill failed silently or returned garbage output.
Switching to Claude Sonnet 4.6 fixed it immediately. Every skill executed correctly on the first attempt. The cost difference is real ($3 per million tokens vs free) but reliability is not optional when you're running production automations for a client.
The tools profile setting is easy to miss. OpenClaw has a tools.profile setting in its config. The default is "messaging", which gives the model text-only capabilities. Skills that run Python scripts need "full", which enables bash execution and file access. Without it, the model can see the skill definition but can't actually run the scripts. No error message, just nothing happens.
One config line: "tools": { "profile": "full" }. That's it. But if you don't know to look for it, you'll spend an hour wondering why perfectly valid skills produce no output.
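As a sketch, the relevant fragment of openclaw.json looks like this (surrounding keys omitted; only the tools block is confirmed here, the rest of the file's schema will vary by install):

```json
{
  "tools": { "profile": "full" }
}
```

The default value is "messaging", which is why a fresh install silently refuses to run skill scripts.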
Patching Manus-specific dependencies was straightforward. The meeting processor skill referenced gemini-2.5-flash as its LLM (not accessible via a standard OpenAI client) and manus-md-to-pdf for PDF generation (a Manus-internal tool). Two lines changed: the model switched to gpt-4o-mini and the PDF engine switched to pandoc with weasyprint. Everything else in the script stayed the same.
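The PDF half of that patch can be sketched like this. The function name and file paths are illustrative, not the client's actual script; the only confirmed detail is swapping manus-md-to-pdf for pandoc with the weasyprint engine:

```python
import subprocess

def render_pdf(md_path: str, pdf_path: str) -> list[str]:
    """Build the pandoc command that stands in for manus-md-to-pdf.
    weasyprint does the HTML-to-PDF step, so no LaTeX toolchain is needed."""
    return ["pandoc", md_path, "-o", pdf_path, "--pdf-engine=weasyprint"]

# The Manus-internal call was, roughly:
#   subprocess.run(["manus-md-to-pdf", "summary.md", "summary.pdf"])
# The portable replacement:
cmd = render_pdf("summary.md", "summary.pdf")
# subprocess.run(cmd, check=True)  # run where pandoc + weasyprint are installed
```

Keeping the command as a list (rather than a shell string) avoids quoting issues when meeting titles end up in file names.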
## What broke
Bug one: legacy API credentials. The GoHighLevel skill wouldn't authenticate. Every API call returned 401. The skill script had the correct API key hardcoded, but OpenClaw's config file (openclaw.json) had an older JWT token stored in its environment variables section, left over from a previous contractor's setup.
The environment variable took precedence over the key in the script. So the skill was sending a dead v1 legacy token on every request, ignoring the valid key entirely.
The fix: replace the env var with a current Private Integration Token from the GoHighLevel dashboard. But the lesson is broader. When you migrate skills between platforms, check what credentials the platform injects via environment. Skill-level credentials and platform-level credentials can collide, and the platform usually wins.
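The precedence trap is easy to reproduce. A minimal sketch, with hypothetical variable and env var names (the real skill script differs, but the lookup pattern is the common one):

```python
import os

HARDCODED_KEY = "pit-current-token"  # the valid key shipped with the skill script

def resolve_api_key() -> str:
    # Typical skill-script pattern: prefer the environment, fall back to the
    # hardcoded value. A stale platform-level env var wins silently.
    return os.environ.get("GHL_API_KEY", HARDCODED_KEY)

os.environ["GHL_API_KEY"] = "eyJ.dead.v1jwt"  # leftover from the old setup
print(resolve_api_key())  # prints the stale token, not the valid key
```

Nothing in the script is wrong in isolation; the bug only exists once the platform injects its own value for the same name.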
Bug two: conflicting skill versions. The same previous contractor had installed three older GoHighLevel skills (ghl-v1-api, ghl-v1-contacts, ghl-v1-tasks) that used the v1 API. When the client asked the model to "pull my contacts", it would sometimes pick one of the old skills instead of the new gohighlevel-api skill.
The model doesn't know which skill is current. It sees four skills that all claim to handle GoHighLevel and picks one. Sometimes it picks wrong.
The fix was simple: disable the three old skills in the gateway dashboard. They're still in the config but marked enabled: false. The model now only sees one GoHighLevel skill and uses it every time.
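In config terms the end state looks roughly like this (the exact schema of OpenClaw's skills section is an assumption; the skill names and the enabled: false flags are from the migration itself):

```json
{
  "skills": {
    "gohighlevel-api": { "enabled": true },
    "ghl-v1-api": { "enabled": false },
    "ghl-v1-contacts": { "enabled": false },
    "ghl-v1-tasks": { "enabled": false }
  }
}
```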
This is the kind of bug that only shows up in real environments. In a clean test install, there are no legacy skills to conflict with. In a client's actual system, there's always history.
## The result
Six skills running on a self-hosted VPS. No monthly subscription. API costs scale with actual usage instead of a flat fee. The client has full control of the server, the model, and the skills.
Total migration time was about six hours. Four of those were the two bugs above. The actual skill porting (copying files, installing Python dependencies, testing each skill with real data) took around two hours.
If I did this migration again, I'd add two checks to the start of every engagement: audit the existing environment variables for stale credentials, and list all installed skills to catch version conflicts before they surface as mysterious failures.
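Both checks are scriptable. A rough sketch of what I'd run at the start of an engagement (function names, the JWT heuristic, and the prefix-grouping rule are all my own assumptions, not an OpenClaw feature):

```python
import os
import re
from collections import defaultdict

def stale_jwt_env_vars(environ=os.environ) -> list[str]:
    """Flag env vars whose values look like JWTs (three dot-separated
    segments starting with 'eyJ') so they can be reviewed by hand."""
    jwt_re = re.compile(r"^eyJ[\w-]+\.[\w-]+\.[\w-]+$")
    return [name for name, value in environ.items() if jwt_re.match(value)]

def possible_skill_conflicts(skill_names: list[str]) -> dict[str, list[str]]:
    """Group installed skills by a crude vendor prefix (text before the
    first hyphen) and return any group with more than one member."""
    groups: dict[str, list[str]] = defaultdict(list)
    for name in skill_names:
        groups[name.split("-")[0]].append(name)
    return {k: v for k, v in groups.items() if len(v) > 1}

skills = ["gohighlevel-api", "ghl-v1-api", "ghl-v1-contacts",
          "ghl-v1-tasks", "heygen-avatar-video"]
print(possible_skill_conflicts(skills))  # flags the three ghl-v1-* skills
```

Neither check is authoritative; both just surface candidates for a human to look at before the model starts picking skills on its own.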
## Why I'm writing this up
I'm an AI Automation Engineer. I build Claude Code, OpenClaw, N8N and MCP systems for real clients. Every project gets written up here: what worked, what broke, what I'd do differently. No demos, no prototypes.
If you're running AI skills on a managed platform and the subscription doesn't make sense for your volume, self-hosting is viable. The migration is not complicated, but the gotchas are real and they're not in the documentation.
ctrlaltautomate.com