
Kiell Tampubolon
I Let OpenClaw Run My Mornings for 7 Days. It Stopped Acting Like a Chatbot on Day 4.

OpenClaw Challenge Submission 🦞

This is a submission for the OpenClaw Writing Challenge

For three days I almost uninstalled OpenClaw. I had connected it to Telegram, GitHub, and Google Calendar, and every morning at 8 AM it sent me the most useless message I have ever received from a piece of software: "You have 4 open PRs. Consider reviewing them."

Thanks. I have eyes. The breakthrough came on day 4, and it had nothing to do with the model.

What I Built

A morning triage agent. Not a chatbot waiting for me to ask. An agent that wakes up at 8 AM, observes the state of my work across three tools, and tells me the one thing I should do first that day, with a reason.

The chatbot mindset stops at "summarize my morning." The agent mindset closes the loop and makes a decision.

How It Works

The skill definition is the contract. Plain English, declarative, and OpenClaw figures out the rest.

```yaml
# skills/morning-triage/SKILL.md
name: morning-triage
description: |
  Every morning at 8 AM, look at my open GitHub PRs,
  unread Telegram messages, and today's calendar.
  Decide the ONE thing I should do first and tell me why.
triggers:
  - schedule: "0 8 * * *"  # cron: every day at 08:00
tools:
  - github
  - telegram
  - calendar
```

The runner is the observe → decide → act loop. This is what every real agent looks like underneath.

```python
# skills/morning-triage/run.py
from openclaw import Agent

class TriageAgent(Agent):
    def observe(self):
        # Gather raw state from all three connected tools.
        return {
            "prs": self.tools.github.open_prs("kiel/myrepo"),
            "msgs": self.tools.telegram.unread(hours=12),
            "events": self.tools.calendar.today(),
            "now": self.now(),
        }

    def decide(self, state):
        # Force one specific, justified recommendation.
        return self.llm.ask(
            f"Given this state: {state}\n\n"
            f"What is the SINGLE most important thing I should do "
            f"in the next 90 minutes? Name the specific PR/message/event. "
            f"One sentence. Include WHY it matters."
        )

    def act(self, decision):
        # Close the loop: deliver the decision without being asked.
        self.tools.telegram.send(f"☕ Focus today: {decision}")

TriageAgent().run()
```

The chatbot mindset stops at "decide" because the human is supposed to act on the answer. The agent closes the loop on its own.

What Actually Happened

Day 1. Output: "You have 4 open PRs. Consider reviewing them." Useless. I almost killed the cron.

Day 2. Same. I had given the agent enough tools to see everything but not enough instruction to choose. The model was hedging. The fault was in my prompt, not the LLM.

Day 3. I rewrote the decide step. Forced it to name a specific item. Forced it to explain why. Forced it into one sentence. Output got slightly better but still felt mechanical.
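"Forced specificity" can also be enforced in code, not just in the prompt. Here is a sketch of the kind of guard I could bolt onto the decide step: reject any answer that names no concrete item and ask again. The regex, the retry loop, and `ask` as a stand-in for `self.llm.ask` are all my own assumptions, not an OpenClaw feature.

```python
import re

# Heuristic: a useful decision names a concrete artifact,
# e.g. a PR number ("#142") or a clock time ("10:30").
SPECIFIC = re.compile(r"#\d+|\b\d{1,2}:\d{2}\b")

def is_actionable(answer: str) -> bool:
    return bool(SPECIFIC.search(answer))

def decide_with_retry(ask, prompt: str, attempts: int = 3) -> str:
    # ask() stands in for the LLM call in the runner above.
    for _ in range(attempts):
        answer = ask(prompt)
        if is_actionable(answer):
            return answer
        # Tighten the prompt and try again.
        prompt += "\nYour last answer was vague. Name a specific PR, message, or event."
    return answer
```

"You have 4 open PRs" fails this check; "Review PR #142 before your 10:30 standup" passes. It is crude, but it converts a vague model into a retriable one.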

Day 4. The breakthrough. I added one extra field to observe: how long each PR had been open and how many times someone had pinged me. Suddenly the agent had real signal. The morning message read:

☕ Focus today: PR #142 has been open 9 days, blocking the v2.3 release, and Andre has pinged you twice on Telegram about it. Review it before your 10:30 standup.

I had genuinely forgotten about that PR. The agent connected three separate data sources and surfaced a thread I had dropped two weeks ago. That was the moment OpenClaw stopped being a chatbot to me.
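The staleness signal does not even need the LLM; it can be pre-computed inside observe so the model receives ranked facts instead of raw lists. A minimal sketch of what I mean, with field names and weights that are my own invention, not an OpenClaw API:

```python
from dataclasses import dataclass

@dataclass
class PRSignal:
    number: int
    days_open: int
    pings: int  # times someone nudged me about this PR

def urgency(pr: PRSignal) -> int:
    # Arbitrary weighting: each day open counts once,
    # each human ping counts triple. Tune to taste.
    return pr.days_open + 3 * pr.pings

def most_urgent(prs: list[PRSignal]) -> PRSignal:
    return max(prs, key=urgency)
```

Feeding `most_urgent(...)` into the state dict means the model starts from "PR #142 is the stalest and most nagged-about" rather than having to infer it.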

Day 5. Different output, different decision: "Skip the deep work block, you have a 10:00 meeting that conflicts with your 10:15 review and three unread messages from your manager since last night."

Day 6. It correctly told me to do nothing urgent and protect my deep work block. That was the second moment that surprised me. A good agent has to know when not to interrupt.
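Silence can be implemented as a guard in act. One hedged way to do it, assuming you prompt the model to say "nothing urgent" explicitly when that is its conclusion (in production I would ask for a structured flag instead of matching strings):

```python
# Phrases the model is instructed to emit when there is no action item.
NOOP_MARKERS = ("nothing urgent", "no action needed")

def should_notify(decision: str) -> bool:
    # Stay silent on quiet days; an agent earns trust by NOT pinging you.
    lowered = decision.lower()
    return not any(marker in lowered for marker in NOOP_MARKERS)
```

Then act becomes `if should_notify(decision): send(...)`, and the quiet mornings stay quiet.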

Day 7. Total cost for the week was under one dollar in API spend, because the agent only fires once a day on a schedule. Chatbot usage is unbounded. Scheduled agent usage is cheap and predictable.

The Lesson Behind the Lesson

The fix was never the model. The fix was three things, in this order:

1. Better observation. If you do not give the agent state, it cannot make a decision. Vague input produces vague output.

2. Forced specificity in the prompt. "Name the specific item. Explain why. One sentence." That is the difference between a horoscope and a recommendation.

3. A schedule, not a conversation. The chatbot pattern bills you per question. The agent pattern bills you per outcome.

If you are typing into OpenClaw the same way you type into ChatGPT, you are using a power tool to hammer in a nail. Write one SKILL.md. Set one schedule. Close the loop. That is the unlock.


If you had to give one OpenClaw agent full control over a part of your day, what would you let it decide for you, and what would you never let it touch? I am genuinely curious where the line is for other devs. Drop it below.
