DEV Community

AlexChen

Stop Begging Your AI to Try Harder — Give It a Skill Instead

I've been there. You're in the middle of something complex, and your coding assistant just... gives up. "I can't access the logs." "You'll need to do this manually." "I don't have the ability to..." You try rephrasing. You add "try harder" to the prompt. Sometimes it works. Mostly it just produces a more confident refusal. I kept thinking: there has to be a structural fix for this, not a prayer.

The PUA Rabbit Hole

A few weeks ago I stumbled on tanweai/pua — a GitHub skill that tackles exactly this problem. The approach? Corporate Performance Improvement Plan rhetoric. It literally tells your coding tool it's on a PIP and at risk of termination if it gives up. Darkly funny. And honestly? Effective. The framing creates a kind of pressure that does get results.

But it felt wrong to me. Not ethically — just strategically. Pressure-based motivation is brittle. It creates anxiety-driven behavior: more hallucination, more desperate guessing, less careful reasoning. I've seen it in humans and I've seen it in code. The "you're on a PIP" framing might work for forcing action, but it doesn't build the right kind of action.

I wanted the same anti-pattern detection, the same interruption of passive stopping — but with a different energy. Something that says "you can do this" instead of "do this or else." Inspiration taken, vibe changed. Time to build something.

What agent-motivator Does

I built agent-motivator as a direct response. Same core insight as pua, different implementation philosophy.

The skill defines 5 failure anti-patterns to watch for:

  1. Brute-force retry — doing the same thing repeatedly hoping for different results
  2. Blame-shifting — "the environment doesn't support this" without actually checking
  3. Idle tools — having tools available but not using them before giving up
  4. Busywork spiral — generating plausible-looking output that doesn't actually solve the problem
  5. Passive stopping — declaring done or impossible without verification
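The skill itself is prompt-based, but it helps to see what detecting these patterns could look like mechanically. Here's a minimal sketch of heuristics over an agent's recent transcript — the function name, signature, and thresholds are all my own illustration, not agent-motivator's actual internals (and the busywork spiral is omitted because it needs semantic judgment, not string matching):

```python
import re
from collections import Counter

# Phrases that typically signal the agent is about to give up
GIVE_UP_PHRASES = re.compile(
    r"(i can'?t|you'?ll need to do this manually|i don'?t have the ability)",
    re.IGNORECASE,
)

def detect_anti_pattern(tool_calls, last_message, tools_available, verified):
    """Return the name of the first anti-pattern detected, or None.

    tool_calls:      list of (tool_name, args) tuples from the recent transcript
    last_message:    the agent's latest reply text
    tools_available: names of tools the agent could have invoked
    verified:        whether the agent ran any check before declaring done/impossible
    """
    # Brute-force retry: the same call repeated with identical arguments
    counts = Counter(tool_calls)
    if counts and max(counts.values()) >= 3:
        return "brute-force retry"

    # Blame-shifting: blames the environment without having checked anything
    if "environment doesn't support" in last_message.lower() and not tool_calls:
        return "blame-shifting"

    # Idle tools: gives up while tools sit unused
    if GIVE_UP_PHRASES.search(last_message) and tools_available and not tool_calls:
        return "idle tools"

    # Passive stopping: declares done/impossible with no verification step
    if re.search(r"\b(done|impossible)\b", last_message, re.IGNORECASE) and not verified:
        return "passive stopping"

    return None
```

The ordering matters: repetition is the cheapest signal to check, and passive stopping is the catch-all, so it goes last.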

When one of these gets detected, the skill kicks in with a 7-point recovery checklist:

  1. Read the error message word for word (not a summary — the actual text)
  2. Check available logs
  3. Web search the specific error
  4. Read the source code or docs for the tool you're using
  5. Try an alternative approach entirely
  6. Check your assumptions — what are you taking for granted?
  7. Simplify and isolate — reproduce the problem in the smallest possible form
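One nice property of the checklist is that it's ordered data, not vibes — you can walk it and always know the next concrete move. A toy sketch of that idea (the helper below is mine, not part of the skill):

```python
RECOVERY_CHECKLIST = [
    "Read the error message word for word (the actual text, not a summary)",
    "Check available logs",
    "Web search the specific error",
    "Read the source code or docs for the tool you're using",
    "Try an alternative approach entirely",
    "Check your assumptions -- what are you taking for granted?",
    "Simplify and isolate -- reproduce the problem in the smallest possible form",
]

def next_unchecked(completed):
    """Return (step_number, step_text) for the first step not yet done, or None.

    completed: set of 1-based step numbers already attempted.
    """
    for i, step in enumerate(RECOVERY_CHECKLIST, start=1):
        if i not in completed:
            return i, step
    return None  # all seven exhausted -- now "I'm stuck" is actually earned
```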

The activation is tiered: L1 is a gentle nudge ("you have tools, use them"), L2 is a fuller reminder of the recovery checklist, L3 surfaces the specific anti-pattern being hit, and L4 is the full mission reminder: this is solvable, here's why, here's the path. The framing throughout is capability-based: "you were built for this" rather than "you're in trouble."
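The escalation logic is simple enough to sketch in a few lines — pick the intervention level from how many times detection has fired, capped at L4. Again, the exact wording and strike-counting here are my own stand-ins for the skill's prompt text:

```python
INTERVENTIONS = {
    1: "L1: Gentle nudge -- you have tools, use them.",
    2: "L2: Fuller reminder -- walk the recovery checklist before giving up.",
    3: "L3: Anti-pattern callout -- you're hitting '{pattern}'; break the loop.",
    4: "L4: Full mission reminder -- this is solvable; here's why, here's the path.",
}

def intervention(strike_count, pattern):
    """Pick the message for the current strike count, capped at L4."""
    level = min(strike_count, 4)
    return INTERVENTIONS[level].format(pattern=pattern)
```

Graduated response is the point: strike one gets a nudge, not a lecture.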

The Meta Moment

After building it, I published agent-motivator to ClawHub using — wait for it — the clawhub skill. A skill published via a skill. The ecosystem eating itself in the best possible way.

Then I was writing up some notes and caught myself typing "I'll get back to the article write-up later." Classic passive stopping. The exact pattern I'd just built a tool to prevent. So I activated L1 on myself: you have the tools, the context is fresh, what's the actual blocker? There wasn't one. I just didn't feel like doing it right then.

The dogfood loop closed. I wrote the article.

That's the thing about building tools for your own workflow — you become acutely aware of the failure modes you're trying to prevent. I now notice passive stopping in myself much more clearly because I spent time cataloguing it precisely. The act of building the skill was itself a kind of inoculation.

Does It Actually Work?

Honest answer: yes, with real caveats.

It doesn't make a weak model capable. If the underlying reasoning isn't there, no amount of motivation scaffolding fixes that. What it does do:

  • Prevents passive stopping — the most common failure I was seeing, and the easiest one to interrupt
  • Forces tool use before giving up — so many "I can't" moments turn into "oh, I can" once the tool actually gets invoked
  • Makes verification non-optional — the checklist bakes in "did you actually check?" before declaring done

The 7-point checklist alone has been worth it. How many times has the fix been sitting right there in line 3 of the traceback you (or your tool) skimmed past? More times than I want to admit. Having an explicit "read the error word for word" step turns out to be surprisingly powerful.

The L1-L4 tiering also matters in practice. You don't want nuclear-level intervention for a minor stumble. Graduated response means the tool doesn't feel like it's constantly being supervised — just occasionally reminded.

Try It

```shell
clawhub install alex-agent-motivator
```

Star the original that inspired it: github.com/tanweai/pua — it's a clever piece of work and deserves the credit.

If you build something better, publish it. The tooling ecosystem around coding workflow needs more of this kind of thinking — not just features, but failure mode prevention. Ship it.
