I didn’t build prtr because I wanted “an AI coding product.”
I built it because I got tired of doing the same annoying glue work over and over again.
A test would fail.
I’d copy the logs.
Then I’d open Claude or ChatGPT or Gemini.
Then I’d explain the problem again.
Then I’d try a second model.
Then I’d take the answer and manually turn it into the next step.
The first prompt was rarely the hard part. The hard part was everything around it.
That repeated context-packing started to feel like its own task, and I realized I was spending too much energy moving information between tools instead of actually thinking about the problem.
So I started building a small CLI to make that loop cheaper.
That tool became prtr.
## The problem I wanted to solve
I wasn’t trying to replace IDEs.
I wasn’t trying to build a full autonomous agent.
And I wasn’t trying to invent another chat UI.
I just wanted a tool that could help with this kind of loop:
- Take intent from me.
- Take logs from stdin.
- Add lightweight repo context automatically.
- Prepare the next useful prompt without me thinking about formatting.
- Let me compare another model without rebuilding everything.
- Turn the answer into the next action.
That’s it.
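As a rough sketch of that loop (hypothetical names and layout, not prtr's actual internals), the whole job is assembling one prompt string from three ingredients so I never do the formatting by hand:

```go
package main

import (
	"fmt"
	"strings"
)

// buildPrompt is an illustrative sketch, not prtr's real code:
// combine the user's intent, piped logs, and lightweight repo
// context into a single ready-to-send prompt.
func buildPrompt(intent, logs, repoCtx string) string {
	var b strings.Builder
	b.WriteString("## Intent\n" + intent + "\n")
	if logs != "" {
		b.WriteString("\n## Logs (stdin)\n" + logs + "\n")
	}
	if repoCtx != "" {
		b.WriteString("\n## Repo context\n" + repoCtx + "\n")
	}
	return b.String()
}

func main() {
	fmt.Print(buildPrompt(
		"Why is this failing?",
		"FAIL src/app.test.js",
		"branch: main, last commit: abc123",
	))
}
```

The point of the sketch: each ingredient is optional, and the tool, not the human, decides how they get stitched together.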
A lot of AI tooling feels like it wants to own the whole workflow. I wanted something smaller than that. Something terminal-native. Something that fit into the way I already work.
## Why I chose Go
This felt like a Go project pretty quickly.
I wanted a compiled CLI that would stay fast, portable, and "boring" in the best way. Go made handling command structures, stdin, clipboard, and cross-platform behavior feel incredibly straightforward.
It was also a nice way to keep myself honest. If the idea only worked when wrapped in a giant framework, it probably wasn’t the right tool shape.
## What prtr looks like now
The core loop ended up being four simple verbs, plus a `--deep` flag:

- `go`: Build the first useful prompt from intent + stdin + repo context.
- `swap`: Resend the same run to another AI app (compare results!).
- `take`: Turn a copied answer into the next action (like a patch).
- `learn`: Keep repo-local memory and protected terms.
- `take --deep`: Run a more structured multi-step pipeline before delivery, so bigger changes come with more analysis, risk checking, and test planning.
The "Happy Path":

```shell
# 1. Pipe an error, ask for a fix
npm test 2>&1 | prtr go fix "Why is this failing?"

# 2. Not satisfied? Swap to codex instantly
prtr swap codex

# 3. Apply the suggested patch
prtr take patch
```
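Conceptually, the `take patch` step has to pull the actionable part out of a pasted answer before anything can be applied. Here is a toy version of that idea, extracting the first fenced code block from an answer; this is my sketch of the concept, not prtr's actual parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// fenceRE matches the first fenced code block in a pasted AI answer,
// capturing only its body (the optional language tag is skipped).
var fenceRE = regexp.MustCompile("(?s)```[a-zA-Z]*\n(.*?)```")

// firstFence returns the body of the first fenced block; ok is false
// when the answer contains no fence at all.
func firstFence(answer string) (string, bool) {
	m := fenceRE.FindStringSubmatch(answer)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	answer := "Here is the fix:\n```diff\n-old line\n+new line\n```\nHope that helps."
	if patch, ok := firstFence(answer); ok {
		fmt.Print(patch)
	}
}
```

A real implementation has to handle more (multiple blocks, missing fences, diffs vs. whole files), but the shape of the problem is the same: answer in, next action out.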
## Going deeper with `take --deep`
`take` is the fast path. `take --deep` is the careful path.
When I use --deep, I’m usually telling prtr that the next step needs more than a quick follow-up prompt. It should add more structure around analysis, risk, testing, and delivery.
```shell
prtr take patch --deep
prtr take debug --deep
prtr take refactor --deep
```

- `patch --deep`: for implementation changes that need better risk and test framing
- `debug --deep`: for bug-hunting flows where root cause matters more than a quick guess
- `refactor --deep`: for larger structural changes that need tighter scope and safer handoff
That example captures the real annoyance I wanted to reduce: the handoff work between steps.
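One way to picture the `--deep` idea (with made-up stage names, not the real pipeline): run the request through a fixed list of stages, each appending its own titled block, so the final prompt carries its analysis, risk notes, and test plan with it:

```go
package main

import "fmt"

// stage is one step of a hypothetical --deep pipeline; the stage
// names used below are illustrative, not prtr's actual stages.
type stage struct {
	name string
	run  func(prompt string) string
}

// deepPipeline chains the stages in order, threading the growing
// prompt through each one.
func deepPipeline(prompt string, stages []stage) string {
	for _, s := range stages {
		prompt = s.run(prompt)
	}
	return prompt
}

// section builds a stage that appends one titled block to the prompt.
func section(title, body string) stage {
	return stage{name: title, run: func(p string) string {
		return p + "\n## " + title + "\n" + body + "\n"
	}}
}

func main() {
	fmt.Print(deepPipeline("Refactor the config loader.", []stage{
		section("Analysis", "map call sites before touching code"),
		section("Risk", "config parsing is load-bearing; flag breaking changes"),
		section("Test plan", "tests to run before and after"),
	}))
}
```

The design choice this sketch captures: `--deep` doesn't change *what* you ask, it changes how much scaffolding travels with the ask.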
## Trying it without friction
I didn’t want the first experience to be “configure five things before you can tell if this is useful.” So there’s a setup-light path to see how it works:
```shell
prtr demo
prtr go "explain this error" --dry-run
```
## What I’m still unsure about
I’m still figuring out the "product boundary."
There’s a version of this that stays a small CLI forever. There’s another version that grows into too much machinery. I’m trying to stay on the right side of that line.
That’s why I’m especially interested in feedback from people who build CLIs, live in terminals, or already have strong habits around AI-assisted coding.
## If you want to take a look
I've open-sourced the project on GitHub. If you read this and think “this is useful,” “this is overbuilt,” or “this solves the wrong problem,” I’d genuinely like to hear that.
GitHub Repository:
https://github.com/helloprtr/poly-prompt
It's not perfect, but that kind of feedback is more useful to me right now than praise. What are you using to manage the "messy loop" of AI coding? Let's chat in the comments! 👇