rp1run

Posted on • Originally published at Medium

Teaching Agents How to Think, Not Just What to Do

Most agent prompts are instructions. Do this. Then do that. If X, do Y.

The problem: instructions break the moment reality diverges from what you anticipated when you wrote them. And in production, reality diverges constantly.

The shift that changed how we build agents at rp1: stop writing instructions, start encoding reasoning frameworks. Don't tell the agent what to do in situation A — teach it how to think about situations like A, so it can handle A, B, and the edge case you didn't name.

This is the core idea behind constitutional prompting. Instead of a procedure, you give the agent:

  • A typed contract for its outputs
  • Explicit principles for how to reason under uncertainty
  • Anti-patterns it must recognise and avoid
  • A clear definition of when to pause vs. proceed
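The four elements above can be sketched as data rather than prose. This is a minimal, hypothetical illustration (the names `OutputContract`, `CONSTITUTION`, and `build_system_prompt` are invented for this example, not rp1's actual API): the point is that the contract, principles, anti-patterns, and pause conditions become structured inputs to a prompt builder instead of free-form instructions.

```python
import json
from dataclasses import dataclass

@dataclass
class OutputContract:
    """Typed contract the agent's output must satisfy (illustrative)."""
    fields: dict  # field name -> expected type, e.g. {"verdict": "str"}

# Hypothetical constitution: reasoning rules, not step-by-step instructions.
CONSTITUTION = {
    "principles": [
        "Prefer reversible actions when uncertain.",
        "State assumptions explicitly before acting on them.",
    ],
    "anti_patterns": [
        "Guessing missing identifiers instead of asking.",
        "Retrying a failed step without changing anything.",
    ],
    "pause_when": [
        "An action is irreversible and confidence is low.",
        "The output cannot satisfy the typed contract.",
    ],
}

def build_system_prompt(contract: OutputContract) -> str:
    """Assemble a constitutional prompt: contract + reasoning framework."""
    return "\n".join([
        "You must return JSON matching this contract:",
        json.dumps(contract.fields),
        "Principles: " + "; ".join(CONSTITUTION["principles"]),
        "Never: " + "; ".join(CONSTITUTION["anti_patterns"]),
        "Pause and ask when: " + "; ".join(CONSTITUTION["pause_when"]),
    ])

prompt = build_system_prompt(
    OutputContract(fields={"verdict": "str", "confidence": "float"})
)
print(prompt)
```

Because the framework lives in data, the same constitution can be reused across agents, diffed in code review, and handed to another engineer without a verbal briefing.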

The result isn't just more reliable agents. It's agents you can hand off to another engineer — or another agent — without a lengthy briefing.

👉 Prem Pillai wrote the full breakdown of how we apply this in rp1's workflow layer.

If you're building multi-agent systems and hitting the iteration wall, this is the pattern that moved the needle most for us.

We're also discussing this in our Discord for engineers building with AI agents in production — constitutional prompting, agent architecture, the failures that don't make it into blog posts.
