<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ben Utting</title>
    <description>The latest articles on DEV Community by Ben Utting (@benutting).</description>
    <link>https://dev.to/benutting</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3873127%2Fefaf1935-6b8c-420b-b499-39cec776c49b.jpg</url>
      <title>DEV Community: Ben Utting</title>
      <link>https://dev.to/benutting</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/benutting"/>
    <language>en</language>
    <item>
      <title>How I Migrated 6 Skills From Manus AI to an OpenClaw VPS</title>
      <dc:creator>Ben Utting</dc:creator>
      <pubDate>Mon, 13 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://dev.to/benutting/how-i-migrated-6-skills-from-manus-ai-to-a-openclaw-vps-32ae</link>
      <guid>https://dev.to/benutting/how-i-migrated-6-skills-from-manus-ai-to-a-openclaw-vps-32ae</guid>
      <description>&lt;p&gt;A client was paying $200 a month for Manus AI to run six automation skills: CRM management, web scraping, property lookups, AI avatar videos, meeting transcripts and investment reports. The skills worked, but the subscription added up and the platform locked him into their infrastructure.&lt;/p&gt;

&lt;p&gt;We moved everything to a self-hosted OpenClaw instance on a Hostinger VPS. Pay-per-use API costs instead of a flat monthly fee. The migration took a day. Most of that day was spent on two bugs that had nothing to do with the skills themselves.&lt;/p&gt;

&lt;h2&gt;The setup&lt;/h2&gt;

&lt;p&gt;The client runs a real estate investing and marketing business. His stack centred on GoHighLevel for CRM, Apify for web scraping, HeyGen for avatar videos, Melissa Data and Rentcast for property intelligence, and a custom meeting processor for Zoom calls. All six of these ran as skills inside Manus AI.&lt;/p&gt;

&lt;p&gt;The target was a Hostinger VPS running Ubuntu 24.04 with OpenClaw deployed in Docker. The model backend switched to OpenRouter, which meant he'd pay per token instead of a flat subscription. For the volume he was running, that's a significant saving.&lt;/p&gt;

&lt;p&gt;The six skills we migrated:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;apify-actor-finder&lt;/strong&gt;: scrapes any site (Google Maps, Instagram, LinkedIn) and returns CSV&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gohighlevel-api&lt;/strong&gt;: full CRM control, contacts, opportunities, SMS, appointments, workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;heygen-avatar-video&lt;/strong&gt;: generates AI avatar videos from a text script&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rei-ai-zoom-processor&lt;/strong&gt;: turns meeting transcripts into structured summaries with PDF output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;melissa-data-information&lt;/strong&gt;: property ownership lookups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rentcast-property-report&lt;/strong&gt;: property value, rent estimates and market stats&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each skill had its own Python scripts, API keys and reference docs. The skill definitions (SKILL.md files) translated cleanly to OpenClaw's format. The scripts needed minor patching, not rewrites.&lt;/p&gt;
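&lt;p&gt;For readers who haven't seen one: a skill definition is a markdown file with a small metadata header plus usage instructions for the model. A hypothetical sketch only; the exact frontmatter fields vary by platform:&lt;/p&gt;

```markdown
---
name: rentcast-property-report
description: Property value, rent estimates and market stats from the Rentcast API.
---

# Rentcast Property Report

Run the bundled Python script with a street address to fetch the report,
then summarise the JSON it returns for the user.
```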

&lt;h2&gt;What actually worked&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Model selection mattered more than I expected.&lt;/strong&gt; OpenClaw's default model was Kimi K2.5, which is free tier on OpenRouter. It handles chat fine but does not reliably execute tool calls, which is exactly what skill scripts need. Every skill failed silently or returned garbage output.&lt;/p&gt;

&lt;p&gt;Switching to Claude Sonnet 4.6 fixed it immediately. Every skill executed correctly on the first attempt. The cost difference is real ($3 per million tokens vs free) but reliability is not optional when you're running production automations for a client.&lt;/p&gt;
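&lt;p&gt;The rough breakeven arithmetic, using only the figures above (it ignores output-token pricing and any OpenRouter fees, so treat it as an upper bound):&lt;/p&gt;

```python
# How many input tokens per month before pay-per-use matches the old flat fee.
SUBSCRIPTION_PER_MONTH = 200.00   # former flat Manus fee, USD
PRICE_PER_MILLION = 3.00          # Claude Sonnet input tokens via OpenRouter, USD

breakeven_tokens = SUBSCRIPTION_PER_MONTH / PRICE_PER_MILLION * 1_000_000
print(f"{breakeven_tokens / 1e6:.1f}M tokens/month")  # roughly 66.7M tokens a month
```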

&lt;p&gt;&lt;strong&gt;The tools profile setting is easy to miss.&lt;/strong&gt; OpenClaw has a &lt;code&gt;tools.profile&lt;/code&gt; setting in its config. The default is &lt;code&gt;"messaging"&lt;/code&gt;, which gives the model text-only capabilities. Skills that run Python scripts need &lt;code&gt;"full"&lt;/code&gt;, which enables bash execution and file access. Without it, the model can see the skill definition but can't actually run the scripts. No error message, just nothing happens.&lt;/p&gt;

&lt;p&gt;One config line: &lt;code&gt;"tools": { "profile": "full" }&lt;/code&gt;. That's it. But if you don't know to look for it, you'll spend an hour wondering why perfectly valid skills produce no output.&lt;/p&gt;
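&lt;p&gt;A minimal sketch of the relevant fragment of &lt;code&gt;openclaw.json&lt;/code&gt; (surrounding keys omitted; only the &lt;code&gt;tools&lt;/code&gt; block comes from this post):&lt;/p&gt;

```json
{
  "tools": { "profile": "full" }
}
```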

&lt;p&gt;&lt;strong&gt;Patching Manus-specific dependencies was straightforward.&lt;/strong&gt; The meeting processor skill referenced &lt;code&gt;gemini-2.5-flash&lt;/code&gt; as its LLM (not accessible via a standard OpenAI client) and &lt;code&gt;manus-md-to-pdf&lt;/code&gt; for PDF generation (a Manus-internal tool). Two lines changed: the model switched to &lt;code&gt;gpt-4o-mini&lt;/code&gt; and the PDF engine switched to &lt;code&gt;pandoc&lt;/code&gt; with &lt;code&gt;weasyprint&lt;/code&gt;. Everything else in the script stayed the same.&lt;/p&gt;
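&lt;p&gt;A sketch of what that patch looks like in the script. Function and file names here are illustrative, but &lt;code&gt;--pdf-engine=weasyprint&lt;/code&gt; is a real pandoc option:&lt;/p&gt;

```python
import subprocess

MODEL = "gpt-4o-mini"  # was gemini-2.5-flash under Manus

def pandoc_cmd(md_path: str, pdf_path: str) -> list:
    # manus-md-to-pdf swapped for pandoc with the weasyprint PDF engine
    return ["pandoc", md_path, "-o", pdf_path, "--pdf-engine=weasyprint"]

def render_pdf(md_path: str, pdf_path: str) -> None:
    # Fails loudly if pandoc exits non-zero, rather than silently dropping the PDF
    subprocess.run(pandoc_cmd(md_path, pdf_path), check=True)
```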

&lt;h2&gt;What broke&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Bug one: legacy API credentials.&lt;/strong&gt; The GoHighLevel skill wouldn't authenticate. Every API call returned 401. The skill script had the correct API key hardcoded, but OpenClaw's config file (&lt;code&gt;openclaw.json&lt;/code&gt;) had an older JWT token stored in its environment variables section, left over from a previous contractor's setup.&lt;/p&gt;

&lt;p&gt;The environment variable took precedence over the key in the script. So the skill was sending a dead v1 legacy token on every request, ignoring the valid key entirely.&lt;/p&gt;

&lt;p&gt;The fix: replace the env var with a current Private Integration Token from the GoHighLevel dashboard. But the lesson is broader. When you migrate skills between platforms, check what credentials the platform injects via environment. Skill-level credentials and platform-level credentials can collide, and the platform usually wins.&lt;/p&gt;
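&lt;p&gt;The precedence that bit us can be made explicit in the script itself, so a stale platform variable is easy to spot. &lt;code&gt;GHL_API_KEY&lt;/code&gt; is an illustrative name here, not OpenClaw's actual variable:&lt;/p&gt;

```python
import os

def resolve_ghl_token(script_key: str) -> str:
    # A platform-injected env var wins over the key hardcoded in the script;
    # mirroring that precedence in one place makes the override visible.
    return os.environ.get("GHL_API_KEY", script_key)
```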

&lt;p&gt;&lt;strong&gt;Bug two: conflicting skill versions.&lt;/strong&gt; The same previous contractor had installed three older GoHighLevel skills (&lt;code&gt;ghl-v1-api&lt;/code&gt;, &lt;code&gt;ghl-v1-contacts&lt;/code&gt;, &lt;code&gt;ghl-v1-tasks&lt;/code&gt;) that used the v1 API. When the client asked the model to "pull my contacts", it would sometimes pick one of the old skills instead of the new &lt;code&gt;gohighlevel-api&lt;/code&gt; skill.&lt;/p&gt;

&lt;p&gt;The model doesn't know which skill is current. It sees four skills that all claim to handle GoHighLevel and picks one. Sometimes it picks wrong.&lt;/p&gt;

&lt;p&gt;The fix was simple: disable the three old skills in the gateway dashboard. They're still in the config but marked &lt;code&gt;enabled: false&lt;/code&gt;. The model now only sees one GoHighLevel skill and uses it every time.&lt;/p&gt;
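&lt;p&gt;Conceptually, the skills section of the config ends up in this shape (the key layout is illustrative; the skill names are the real ones from this migration):&lt;/p&gt;

```json
{
  "skills": {
    "gohighlevel-api": { "enabled": true },
    "ghl-v1-api": { "enabled": false },
    "ghl-v1-contacts": { "enabled": false },
    "ghl-v1-tasks": { "enabled": false }
  }
}
```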

&lt;p&gt;This is the kind of bug that only shows up in real environments. In a clean test install, there are no legacy skills to conflict with. In a client's actual system, there's always history.&lt;/p&gt;

&lt;h2&gt;The result&lt;/h2&gt;

&lt;p&gt;Six skills running on a self-hosted VPS. No monthly subscription. API costs scale with actual usage instead of a flat fee. The client has full control of the server, the model, and the skills.&lt;/p&gt;

&lt;p&gt;Total migration time was about six hours. Four of those were the two bugs above. The actual skill porting (copying files, installing Python dependencies, testing each skill with real data) took around two hours.&lt;/p&gt;

&lt;p&gt;If I did this migration again, I'd add two checks to the start of every engagement: audit the existing environment variables for stale credentials, and list all installed skills to catch version conflicts before they surface as mysterious failures.&lt;/p&gt;
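&lt;p&gt;The first of those checks is easy to automate. A small sketch that flags environment variables whose names look like stored credentials, wherever you source the environment from:&lt;/p&gt;

```python
def find_credential_vars(env: dict) -> list:
    # Flag variables whose names suggest stored credentials, so stale
    # tokens surface before they shadow a skill's own API key.
    suspects = ("token", "key", "secret", "jwt")
    return sorted(k for k in env if any(s in k.lower() for s in suspects))
```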

&lt;h2&gt;Why I'm writing this up&lt;/h2&gt;

&lt;p&gt;I'm an AI Automation Engineer. I build Claude Code, OpenClaw, N8N and MCP systems for real clients. Every project gets written up here: what worked, what broke, what I'd do differently. No demos, no prototypes.&lt;/p&gt;

&lt;p&gt;If you're running AI skills on a managed platform and the subscription doesn't make sense for your volume, self-hosting is viable. The migration is not complicated, but the gotchas are real and they're not in the documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ctrlaltautomate.com&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>automation</category>
      <category>openclaw</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>From 2 Hours of Research to a Script in 10 Minutes: Building a Custom OpenClaw Skill for a Content Creator</title>
      <dc:creator>Ben Utting</dc:creator>
      <pubDate>Sat, 11 Apr 2026 08:06:05 +0000</pubDate>
      <link>https://dev.to/benutting/from-2-hours-of-research-to-a-script-in-10-minutes-building-a-custom-openclaw-skill-for-a-content-25p8</link>
      <guid>https://dev.to/benutting/from-2-hours-of-research-to-a-script-in-10-minutes-building-a-custom-openclaw-skill-for-a-content-25p8</guid>
      <description>&lt;p&gt;A client came to me on Upwork with a straightforward problem: too much time spent before they even hit record.&lt;/p&gt;

&lt;p&gt;Their content workflow involved manually hunting for pain points on Reddit and X, pulling inspiration from creators they admired, writing hooks, structuring scripts, all before they could sit down in front of a camera. Solid process, but slow. An hour to two hours per piece of content, just in prep.&lt;/p&gt;

&lt;p&gt;They'd heard about OpenClaw and had a rough sense it could help. They just weren't sure how to make it actually do what they needed. That's where I came in.&lt;/p&gt;

&lt;h3&gt;What We Built&lt;/h3&gt;

&lt;p&gt;The engagement ran for about a week. The centrepiece was a custom OpenClaw skill: a &lt;strong&gt;Content Research Assistant&lt;/strong&gt; that turns a raw idea (a topic, a brain dump, a link, a vague prompt) into a researched, structured Instagram Reel script, all inside a chat interface.&lt;/p&gt;

&lt;p&gt;The skill runs four stages in sequence:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Content Brief&lt;/strong&gt;&lt;br&gt;
Before any research happens, the input gets converted into a structured brief: topic, angle, target audience pain point, desired viewer outcome, medium. This keeps everything focused and prevents the AI from going wide when it should go deep.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Platform Research&lt;/strong&gt;&lt;br&gt;
Web searches run across Reddit, X/Twitter, YouTube, and LinkedIn — in parallel where possible — to surface how real people describe their problems. The goal is raw language: the exact phrases people use when they're frustrated, confused, or searching for answers. That's where good hooks come from.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Creator Inspiration&lt;/strong&gt;&lt;br&gt;
The client had a specific list of creators they studied — some in their niche, some outside it. The skill pulls recent content from relevant creators and extracts structural patterns: hook formats, script pacing, CTA styles. Outside-niche creators are used for format only, never topic. The distinction matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Script Writing&lt;/strong&gt;&lt;br&gt;
A full script gets written in the client's brand voice — three hook options (one creator-inspired, one adapted, one original), a 45-60 second core script broken into sections, and two CTA variants. Each hook option is labelled so they know where it came from and can make an informed choice.&lt;/p&gt;
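&lt;p&gt;The four stages above can be sketched as a simple sequential pipeline. The stage functions are stubs here; in the real skill each stage is a prompt plus web-search tool calls:&lt;/p&gt;

```python
def run_content_pipeline(raw_idea, make_brief, research, inspiration, write_script):
    # Stage functions are injected so the sketch stays self-contained.
    brief = make_brief(raw_idea)       # 1. structured content brief
    findings = research(brief)         # 2. platform research: Reddit, X, YouTube, LinkedIn
    patterns = inspiration(brief)      # 3. creator formats: hooks, pacing, CTA styles
    return write_script(brief, findings, patterns)  # 4. hooks + script + CTA variants
```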

&lt;h3&gt;Running It Through OpenClaw&lt;/h3&gt;

&lt;p&gt;We set up OpenClaw to run locally via Docker on the client's MacBook M4. The interface was WhatsApp, so the entire workflow lives in a chat thread. They type a topic or paste a brain dump, and within minutes they have a researched brief, platform insights, and a ready-to-record script.&lt;/p&gt;

&lt;p&gt;That context matters for how the skill was built. Output has to work in WhatsApp: short paragraphs, bold text where needed, no markdown tables. The skill sends the brief first, then research, then scripts as follow-up messages, not one wall of text.&lt;/p&gt;
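&lt;p&gt;The message-splitting logic is simple enough to sketch. The 1500-character limit is an illustrative choice, not a WhatsApp constant:&lt;/p&gt;

```python
def split_for_whatsapp(sections, limit=1500):
    # Send each section (brief, research, scripts) as its own message,
    # splitting on blank lines when a section exceeds the length limit.
    messages = []
    for section in sections:
        if len(section) <= limit:
            messages.append(section)
            continue
        chunk = ""
        for para in section.split("\n\n"):
            if chunk and len(chunk) + len(para) + 2 > limit:
                messages.append(chunk)
                chunk = para
            else:
                chunk = chunk + "\n\n" + para if chunk else para
        if chunk:
            messages.append(chunk)
    return [m for m in messages if m.strip()]
```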

&lt;p&gt;The result: what used to take an hour or two of manual work now takes around &lt;strong&gt;10 minutes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What struck me during the engagement was how quickly the client grasped what was possible once OpenClaw was running. The content skill was the proof of concept, but they could immediately see how the same approach applied to managing relationships, managing their week, handling admin. The whole operating system, running in a chat app they already use.&lt;/p&gt;

&lt;h3&gt;What This Type of Build Looks Like&lt;/h3&gt;

&lt;p&gt;If you're a creator, solopreneur, or small team spending significant time on recurring research or prep work, this pattern applies directly to you. The specifics change (the platforms you research, the creators you study, the output format), but the structure doesn't.&lt;/p&gt;

&lt;p&gt;A custom OpenClaw skill is a workflow with memory, structure, and your preferences baked in, built once, then triggered with a word or a phrase. It knows the research steps, the format you want, the creators you draw from. You get something useful at the end without rebuilding the context every time.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>claude</category>
      <category>openclaw</category>
      <category>contentwriting</category>
    </item>
  </channel>
</rss>
