
Арсений Перель
I built a prompt refactoring engine using a Proposer–Critic–Verifier pipeline

I’ve been experimenting with a simple hypothesis:

Many unstable LLM outputs are caused not by the model itself, but by badly structured prompts.

So I built a web tool that refactors messy prompts into structured prompt specifications.

Instead of asking the model to “improve” a prompt once, the system runs an optimization loop:

  • Proposer restructures the prompt
  • Critic evaluates clarity, structure, and task definition
  • Verifier checks consistency
  • Arbiter decides whether another iteration is needed
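The loop above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: every function body here is a hypothetical stand-in for an LLM call, and the names (`propose`, `critique`, `verify`, `refactor`) are my own.

```python
def propose(prompt: str) -> str:
    # Proposer: restructures the prompt. Hypothetical stand-in for an LLM call.
    return prompt.strip()

def critique(prompt: str) -> float:
    # Critic: scores clarity / structure / task definition in [0, 1].
    # Hypothetical heuristic standing in for a model-based evaluation.
    return min(1.0, len(prompt.split()) / 50)

def verify(prompt: str) -> bool:
    # Verifier: checks the restructured prompt for internal consistency.
    # Hypothetical stand-in; here it only rejects empty output.
    return bool(prompt)

def refactor(prompt: str, max_iters: int = 3, threshold: float = 0.8) -> str:
    candidate = prompt
    for _ in range(max_iters):
        candidate = propose(candidate)
        score = critique(candidate)
        consistent = verify(candidate)
        # Arbiter: decides whether another iteration is needed.
        if consistent and score >= threshold:
            break
    return candidate
```

The key design point is that the Arbiter gates iteration on both signals: a prompt that reads clearly but fails the consistency check still goes around the loop again.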

The output is a structured prompt spec with:

  • sections
  • explicit requirements
  • output constraints
  • improved clarity
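As a rough picture of what such a spec might look like, here is a hypothetical shape; the field names and example values are illustrative, not the tool's real output schema.

```python
from dataclasses import dataclass, field

@dataclass
class PromptSpec:
    # Illustrative fields mirroring the list above; not the actual schema.
    sections: list[str] = field(default_factory=list)            # named prompt sections
    requirements: list[str] = field(default_factory=list)        # explicit requirements
    output_constraints: list[str] = field(default_factory=list)  # format / length limits

spec = PromptSpec(
    sections=["Context", "Task", "Output format"],
    requirements=["Cite sources", "Use plain English"],
    output_constraints=["Return JSON", "Max 200 words"],
)
```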

A full optimization run usually takes around 30–40 seconds.

Demo:
https://how-to-grab-me.vercel.app/

What I’m trying to validate now is simple:
Should prompt refactoring become a standard preprocessing layer for LLM workflows?
