If you’ve set up automations in OpenClaw and they worked for a few hours then stopped silently, this is for you. The agent forgets its instructions, cron jobs show up empty, and you end up babysitting something that was supposed to be autonomous. I had the same problem. It took me a week of trial and error to get my agent to actually run on its own. If you’re hitting the same wall, here’s what fixed it for me.
One thing before we start: don’t use OpenClaw to configure itself. Use Claude Code or any coding agent to write the skill files, the rules, and the scripts, then let OpenClaw execute them. OpenClaw is good at running systems but worse at building them. When I asked it to write its own config, something would often be off or missing and I couldn’t figure out why. Building the files externally and dropping them into the workspace was simply more predictable.
The skill file
Chat instructions don’t persist. You explain the tone you want, the agent nails it once, then forgets after compaction. Three sessions later it’s back to “Great thread! This is indeed a crucial topic in the AI landscape.”
What helped most was putting the instructions into a skill file. A markdown file in your workspace that the agent reads before every action. Mine has four sections:
- identity (who the agent pretends to be on each platform),
- voice rules (max 2 sentences, no hashtags, no AI filler, with concrete good and bad examples),
- posting rules (when to mention my product and when not to),
- and anti-drift rules.
The anti-drift section seems to make the biggest difference so far:
- re-read the skill file before every session,
- start a fresh session every 3 posts,
- if something fails, stop and report instead of retrying,
- log every action with SUCCESS or FAILED.
Still early, but the agent has been far more consistent since I added this.
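For reference, here is a trimmed-down sketch of what such a skill file can look like. The section names and the specific rules below are illustrative, not my exact file:

```markdown
# Social Media Skill

## Identity
- Reddit: a builder sharing what worked, never a marketer.

## Voice
- Max 2 sentences per comment. No hashtags. No AI filler.
- Bad: "This is indeed a crucial topic in the AI landscape."
- Good: "I hit the same wall. Switching to the API fixed it."

## Posting
- Mention the product only when someone explicitly asks for a tool.

## Anti-drift
- Re-read this file before every session.
- Start a fresh session every 3 posts.
- On any failure: stop and report, never retry silently.
- Log every action as SUCCESS or FAILED.
```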
How I did it:
I wrote the full skill file in Claude Code, then told my agent: “Create a file at skills/social-media-skill.md with this content:” and pasted it in. Verify it’s there with: “Show me the content of skills/social-media-skill.md.”
Your agent lies
I asked my agent to post a comment using the built-in browser tools. It came back:
“I navigated to the post, typed the comment, clicked submit. The comment is live.”
I went to check and saw… nothing. Just an empty comment box. It had hallucinated the entire sequence.
What helped: I stopped relying on the browser tools and used dedicated tools instead. For Reddit, I found a clean skill on ClawHub (by theglove44) that uses Reddit’s API directly. I inspected the source before installing: one JS file, 16KB, no suspicious code, just standard Reddit API calls. For Twitter, I used xurl, which handles the API natively. For anything that needed actual browser interaction, I wrote a Puppeteer script in Claude Code. In all three cases, the agent calls the tool, the tool does the work and returns a clear result: no hallucinated clicks.
I also added a rule in my skill file: “Never say you completed an action unless you can show the tool output confirming it.” Much more reliable so far.
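That rule can also be enforced in code rather than prose. A minimal sketch, assuming a tool result shaped like `{ ok, evidence }` (the function name and result shape are my assumptions, not OpenClaw's API): the wrapper only reports success when the tool's own output contains confirming evidence.

```javascript
// Hypothetical wrapper: trust the tool's returned evidence, not the agent's narration.
// The { ok, evidence } result shape is an assumption for illustration.
function reportAction(action, toolResult) {
  if (toolResult && toolResult.ok && toolResult.evidence) {
    return `SUCCESS: ${action} (confirmed by tool output: ${toolResult.evidence})`;
  }
  // No confirming output: never claim completion, and don't retry silently.
  return `FAILED: ${action} (no confirming tool output; stopping instead of retrying)`;
}
```

The point of the design is that a truthful claim requires evidence the agent did not write itself, e.g. a comment ID returned by the Reddit API.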
How I did it:
I wrote two scripts in Claude Code and dropped them in the workspace scripts folder. The first one, reddit-search.mjs, scans subreddits via Reddit’s public API and scores each post by opportunity (upvotes, velocity, number of comments, topic keywords). The second, reddit-comment.mjs, uses Puppeteer with my existing Chrome session to actually post comments, with verification at each step (login check, comment box found, submission confirmed). I also installed the Reddit skill via ClawHub for API-based reads, and updated the skill file to say:
“Do NOT use the browser tool to post. Use the scripts and skills only.”
Both scripts are open source:
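The scoring idea behind reddit-search.mjs can be sketched as a pure function. The weights, the keyword list, and the caps below are illustrative guesses, not the actual script; the post fields mirror Reddit's public API (`ups`, `num_comments`, `created_utc`):

```javascript
// Hypothetical opportunity score for a Reddit post: reward upvote velocity,
// active discussion, and topical keywords. All weights are illustrative.
const KEYWORDS = ["automation", "agent", "cron"];

function scorePost(post, nowSec = Date.now() / 1000) {
  const ageHours = Math.max((nowSec - post.created_utc) / 3600, 0.5);
  const velocity = post.ups / ageHours;               // upvotes per hour
  const discussion = Math.min(post.num_comments, 50); // cap runaway threads
  const title = post.title.toLowerCase();
  const topicHits = KEYWORDS.filter((k) => title.includes(k)).length;
  return velocity * 2 + discussion + topicHits * 10;
}
```

A scoring pass like this lets the agent rank candidate posts deterministically instead of eyeballing a feed inside the context window.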
Heartbeat and cron
The heartbeat fires every 30 minutes and loads your full context each time. If your HEARTBEAT.md says “check email, calendar, Twitter, memory, projects,” you’re burning a massive context window 48 times a day.
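A trimmed HEARTBEAT.md that keeps the heartbeat cheap might look like this (the contents are illustrative, not my exact file):

```markdown
# HEARTBEAT.md (monitoring only, keep it small)

- Check memory/social-media-log.md for FAILED entries since the last heartbeat.
- If a cron job missed its last scheduled run, flag it.
- Do NOT post, reply, or load project context here. Cron handles actions.
```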
Cron jobs run at specific times in isolated sessions. I use heartbeat for monitoring only and cron for actions. I have two cron jobs: daytime posts every 9-24 minutes with variation, nighttime posts hourly with a 3-hour quiet window.
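The two schedules can be expressed as small helpers. A sketch under my assumptions: the 9-24 minute range and the 3-hour quiet window come from my setup, but the quiet hours and function names below are illustrative.

```javascript
// Daytime: a randomized gap between posts, 9-24 minutes, so the cadence
// isn't robotic. `rand` is injectable for testing.
function nextDaytimeDelayMinutes(rand = Math.random) {
  return 9 + Math.floor(rand() * 16); // 9..24 inclusive
}

// Nighttime: hourly posts, skipped during a 3-hour quiet window.
// The window hours here are illustrative.
const QUIET_START = 2; // 02:00
const QUIET_END = 5;   // 05:00 (exclusive)

function shouldPostAtNight(hour) {
  return hour < QUIET_START || hour >= QUIET_END;
}
```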
How I did it:
I told my agent “Create two cron jobs” with the exact schedule I wanted. Then I verified they actually existed by running openclaw cron status in my terminal. If it shows jobs: 0, the agent didn’t create them even if it said it did. Keep asking until the number matches what you requested.
Memory breaks silently
When conversations get long, OpenClaw compacts them. The summary loses things. The agent forgets your corrections and starts drifting without telling you.
How I did it:
I told my agent: “Add this rule to AGENTS.md: before any compaction, save the 5 most important facts from the current session to memory/YYYY-MM-DD.md.”
I also created a separate log file for my specific use case (memory/social-media-log.md) so the agent has a concrete record of what it posted instead of relying on its own memory.
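The log itself can be one appended line per action. A sketch (the file path matches the post; the entry format and function name are my choice):

```javascript
// One appended line per action, so the agent can re-read a concrete record
// instead of trusting its own compacted memory. The format is illustrative.
function formatLogEntry(platform, action, ok, detail, when = new Date()) {
  const status = ok ? "SUCCESS" : "FAILED";
  return `- ${when.toISOString()} [${platform}] ${action}: ${status} (${detail})\n`;
}
// Append the returned line to memory/social-media-log.md,
// e.g. with fs.appendFileSync("memory/social-media-log.md", line).
```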
Verify with: “Show me the content of AGENTS.md” and check the rule is there.
One agent, one task
I started by trying to do everything at once. Multiple platforms, multiple personas. The agent got confused fast and the quality dropped.
What worked: one agent focused on one task only. It becomes good at that one thing. Once it’s stable and consistent, I can duplicate the approach for another task with a separate agent. Not before.
The files that matter
- SOUL.md: personality.
- AGENTS.md: rules and memory protection.
- HEARTBEAT.md: monitoring only.
- Skill file: how to post, voice rules, anti-drift.
- Social media log: every action with results.
- Two cron jobs: day schedule and night schedule.
I’m still iterating on all of this, but so far each change made a noticeable difference. If you’re building something similar, I hope this saves you some of the trial and error.