How to Improve OpenClaw 🤔

I've been playing around with AI agents lately (especially OpenClaw), and I kept running into the same issue:

They start off sharp…
and then slowly get worse.

Not broken. Just… worse.

At first I thought it was the model.
It wasn't.

The real problem: context bloat

Most agents don't fail instantly; they degrade.

Their context just keeps growing:

  • repeated instructions
  • outdated decisions
  • random “temporary” fixes that never get removed

At some point, the agent is technically “smarter”… but actually less useful.

It starts to feel like you're talking to someone who remembers everything, but understands less.
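To make the idea concrete, here's a tiny sketch of what "fighting context bloat" can look like in code. This is not OpenClaw's actual API, just the general pattern: drop exact duplicates, then trim the oldest entries to a size budget before each run.

```python
# Hypothetical sketch: prune an agent's context before each run.
# Dedupe repeated instructions, then keep only the most recent
# messages that fit under a size budget.

def prune_context(messages, max_chars=4000):
    """Drop exact duplicates, then trim oldest entries to a budget."""
    seen = set()
    deduped = []
    for msg in messages:
        if msg not in seen:
            seen.add(msg)
            deduped.append(msg)
    # Walk backwards so the newest messages win the budget.
    kept, total = [], 0
    for msg in reversed(deduped):
        if total + len(msg) > max_chars:
            break
        kept.append(msg)
        total += len(msg)
    return list(reversed(kept))
```

In a real setup you'd count tokens instead of characters and summarize instead of dropping, but the principle is the same: the context should be curated, not just appended to.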

Something that clicked for me

I recorded a short podcast-style clip about this, just sharing ideas.

One thing that really stuck with me is that we're not really designing agents… we're designing evolving systems.

And most of us are treating them like static tools.

What actually helped

Instead of trying to “fix prompts”, we started thinking in layers:

1. Vision checks (not just prompt tweaks)
Every now and then, step back and ask:
→ is this agent still doing what it was meant to do?

Drift is real.
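You can even automate a crude version of that "step back and ask" habit. This is a made-up example (real setups would use embeddings or an LLM judge): score how much of the original mission statement still shows up in the agent's recent output.

```python
# Crude, hypothetical drift check: compare recent agent outputs
# against the original mission statement via keyword overlap.
# 0.0 = mission fully reflected, 1.0 = mission fully absent.

MISSION = "summarize support tickets and tag them by product area"

def drift_score(recent_outputs, mission=MISSION):
    """Return the fraction of mission keywords missing from recent output."""
    mission_words = set(mission.lower().split())
    output_words = set(" ".join(recent_outputs).lower().split())
    missing = mission_words - output_words
    return len(missing) / len(mission_words)
```

The point isn't the metric itself, which is deliberately naive; it's that drift becomes something you measure on a schedule instead of something you notice too late.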

2. Sandbox before production
Changing an agent directly in prod feels a lot like editing code without testing.

It works… until it doesn't.
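The sandbox idea can be as simple as an eval gate. A hypothetical sketch, assuming your agent is callable as a function: run the changed version against a small fixture set, and only promote it if everything still passes.

```python
# Hypothetical sketch: gate agent changes behind a sandbox run
# against known cases, instead of editing the agent live in prod.

def promote_if_passing(agent_fn, fixtures):
    """Return True only if the agent handles every sandbox case."""
    for prompt, expected in fixtures:
        if agent_fn(prompt) != expected:
            return False
    return True
```

Real agent outputs are rarely exact-match, so in practice you'd score them with a judge or a similarity threshold, but even this toy gate catches the "it worked yesterday" class of regressions.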

3. Curated skills > raw autonomy
Letting an agent “figure things out” sounds cool.

But in practice, giving it validated, reusable skills works way better.

Less chaos, more leverage.
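One way to picture "curated skills over raw autonomy" in code: the agent can only invoke functions that were explicitly registered and validated. The registry and skill names below are invented for illustration, not anything from OpenClaw.

```python
# Hypothetical sketch of curated skills: the agent picks from a
# fixed registry of validated functions instead of improvising.

SKILLS = {}

def skill(name):
    """Decorator that registers a validated, reusable skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("word_count")
def word_count(text):
    return len(text.split())

def run_skill(name, *args):
    if name not in SKILLS:
        # Raw autonomy ends here: unknown actions are refused.
        raise ValueError(f"unknown skill: {name!r}")
    return SKILLS[name](*args)
```

The constraint is the feature: every capability has a name, a contract, and a place to test it.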

The shift (at least for me)

I stopped thinking:

“How do I make this agent smarter?”

and started thinking:

“How do I keep this system from degrading over time?”

Big difference.

Curious if others here have seen the same thing, especially with long-running agents or memory-heavy setups.

How are you dealing with context bloat?
