Every prompt I wrote was garbage.
Not because I don't know prompt engineering — I do. I just couldn't be bothered to write `<role>`, `<constraints>`, `<success_criteria>` every single time. So I'd type "build me a dashboard" and wonder why Claude gave me something I had to rewrite.
Sound familiar?
## The problem is friction, not knowledge
You know a good prompt needs:
- A role definition
- Clear task description
- Explicit constraints
- Success criteria
- Context about your project
But writing all that for every task? Nobody does it consistently. So we all default to lazy prompts and get lazy outputs.
## What I built
RePrompter is a skill file — not a SaaS, not an app, not a VS Code extension. It's a 1000-line SKILL.md that teaches your LLM to interview you before generating a prompt.
## How it works
1. **You type your messy prompt** — "uhh build me a real-time analytics dashboard, needs charts and stuff, maybe websockets"
2. **It asks 4 smart questions** — not generic fluff. If you mention "tracking", it asks tracking questions. If you mention "API", it asks API questions. Clickable options, not free text.
3. **It detects complexity** — single file change? Quick mode, no interview. Frontend + backend + tests? Auto-suggests team mode with parallel agents.
4. **It generates a structured prompt** — XML-tagged output with `<role>`, `<task>`, `<constraints>`, `<success_criteria>`. Ready to execute.
5. **It scores quality** — before vs. after, on 6 dimensions. Typical improvement: 1.6/10 → 9.0/10.
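Complexity detection like this can be approximated with a keyword heuristic. The sketch below is hypothetical — the domain names, keyword lists, and mode thresholds are my illustrative assumptions, not RePrompter's actual SKILL.md logic:

```python
# Hypothetical sketch of mode detection; real skill logic may differ.
DOMAINS = {
    "frontend": ["dashboard", "chart", "ui"],
    "backend": ["api", "websocket", "database"],
    "tests": ["test", "coverage"],
}

def detect_mode(prompt: str) -> str:
    """Pick a prompting mode based on how many domains the task touches."""
    text = prompt.lower()
    hits = {d for d, kws in DOMAINS.items() if any(k in text for k in kws)}
    if len(hits) >= 2:
        return "team"      # spans multiple systems -> parallel agents
    if not hits:
        return "quick"     # small change -> skip the interview
    return "standard"      # single domain -> run the 4-question interview
```

A one-sentence prompt like "build a dashboard with websockets" touches two domains and would trigger team mode under this heuristic.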
## The team mode is where it gets interesting
When RePrompter detects your task spans multiple systems, it doesn't just write one prompt. It generates:
- A team coordination brief with handoff rules
- Per-agent sub-prompts with scoped responsibilities
- Shared contracts so agents don't drift
One messy sentence → 3 agents working in parallel with coordination rules.
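As a rough illustration of that decomposition, here is a hypothetical Python sketch — the scope names and brief wording are my assumptions, not the skill's real output format:

```python
# Hypothetical sketch: per-agent sub-prompts with a shared contract.
AGENT_SCOPES = {
    "frontend": "UI components and client-side state only",
    "backend": "API endpoints and data access only",
    "tests": "test coverage for both layers only",
}

def team_briefs(task: str, agents: list[str]) -> dict[str, str]:
    """Build one scoped sub-prompt per agent, all sharing one contract."""
    contract = (
        f"Shared contract: all agents work on '{task}'. "
        "Do not modify files outside your scope."
    )
    return {a: f"{contract}\nYour scope: {AGENT_SCOPES[a]}" for a in agents}
```

Every brief repeats the shared contract, which is what keeps parallel agents from drifting apart.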
## Quality scoring
Every transformation is scored on 6 dimensions:
| Dimension | Weight |
|---|---|
| Clarity | 20% |
| Specificity | 20% |
| Structure | 15% |
| Constraints | 15% |
| Verifiability | 15% |
| Decomposition | 15% |
Most rough prompts score 1-3. RePrompter typically outputs 8-9+.
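Given the weights in the table, the overall score is just a weighted average of per-dimension ratings. A minimal sketch, assuming each dimension is rated 0–10 (the rating scale and function name are my assumptions):

```python
# Weights taken from the table above; they sum to 1.0.
WEIGHTS = {
    "clarity": 0.20,
    "specificity": 0.20,
    "structure": 0.15,
    "constraints": 0.15,
    "verifiability": 0.15,
    "decomposition": 0.15,
}

def quality_score(ratings: dict[str, float]) -> float:
    """ratings maps dimension -> 0..10; returns the weighted total out of 10."""
    return round(sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS), 1)
```

A prompt rated 2 on every dimension scores 2.0; rating everything 9 scores 9.0, matching the kind of before/after gap described above.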
## Installation (30 seconds)
```bash
mkdir -p skills/reprompter
curl -sL https://github.com/AytuncYildizli/reprompter/archive/main.tar.gz | \
  tar xz --strip-components=1 -C skills/reprompter
```
Works with Claude Code (auto-discovers SKILL.md), OpenClaw, or any LLM. Zero dependencies.
## What it's NOT
- Not a SaaS with a monthly fee
- Not a Chrome extension
- Not a prompt library you copy-paste from
- Not model-specific — works with Claude, GPT, Gemini, anything
It's a behavioral spec that makes your LLM do the boring work of prompt engineering for you.
## Try it
⭐ github.com/AytuncYildizli/reprompter
MIT licensed. PRs welcome — someone already submitted a bug fix on day one.
What's the laziest prompt you've ever written that actually worked? I'm curious.
