Prompts buried in Python strings. No git history. No way to diff two versions. No clean way to swap models.
I got tired of it. So I built prompt-run — a CLI tool that treats .prompt files as first-class runnable artifacts.
## Quickstart (60 seconds)

```bash
pip install "prompt-run[anthropic]"
export ANTHROPIC_API_KEY="sk-ant-..."
prompt run examples/summarize.prompt --var text="LLMs are changing software development."
```
## What a `.prompt` file looks like

```
---
name: summarize
description: Summarizes text into bullet points
model: claude-sonnet-4-6
provider: anthropic
temperature: 0.3
max_tokens: 500
vars:
  text: string
  style: string = bullets
---
Summarize the following text as {{style}}:

{{text}}
```
YAML frontmatter for config. Plain text body with `{{variable}}` syntax. That's the whole format.
The file lives in your repo. It is versioned by git. It can be reviewed in a PR. Anyone cloning your project can run it.
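The format is simple enough that you can parse it yourself. Here is a rough sketch of the idea (not the library's actual parser): split on the `---` delimiters, read the frontmatter, and substitute `{{name}}` placeholders. For brevity this sketch treats the frontmatter as flat `key: value` pairs instead of full YAML, so nested keys like `vars:` would need a real YAML parser.

```python
import re


def parse_prompt_file(text: str) -> tuple[dict, str]:
    """Split a .prompt file into (frontmatter, body).

    Naive sketch: frontmatter is read as flat key: value pairs,
    not full YAML.
    """
    _, front, body = text.split("---", 2)
    meta = {}
    for line in front.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.strip()


def render(body: str, variables: dict) -> str:
    """Substitute {{name}} placeholders with the provided values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], body)


raw = """---
name: summarize
model: claude-sonnet-4-6
---
Summarize the following text as {{style}}:

{{text}}"""

meta, body = parse_prompt_file(raw)
print(meta["name"])  # summarize
print(render(body, {"style": "bullets", "text": "Hello"}))
```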
## Commands

### `prompt run` — run a prompt against any LLM

```bash
# Basic
prompt run summarize.prompt --var text="Your text here"

# Override model — no file changes needed
prompt run summarize.prompt --model gpt-4o --provider openai

# Dry run — preview the resolved prompt without sending it
prompt run summarize.prompt --var text="Hello" --dry-run

# Stream tokens as they arrive
prompt run summarize.prompt --var text="Hello" --stream

# Pipe input from stdin
cat article.txt | prompt run summarize.prompt
```
### `prompt diff` — side-by-side output comparison

This is the feature I use most. When iterating on a prompt, you can compare two runs without leaving your terminal:

```bash
# Same prompt, two different inputs
prompt diff summarize.prompt \
  --a-var text="First article..." \
  --b-var text="Second article..."

# Two prompt versions, same input — great for regression testing
prompt diff summarize.v1.prompt summarize.v2.prompt \
  --var text="Same input for both"
```
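If you're curious how a side-by-side view can be built, Python's standard-library `difflib` does most of the work. This is a toy sketch, not prompt-run's actual rendering code: align the two outputs line by line and mark rows that differ.

```python
import difflib


def side_by_side(a: str, b: str, width: int = 40) -> str:
    """Render two texts in two columns, marking changed rows with '|'."""
    left, right = a.splitlines(), b.splitlines()
    rows = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, left, right).get_opcodes():
        # Pad the shorter side so changed blocks line up row by row.
        ls = left[i1:i2] + [""] * max(0, (j2 - j1) - (i2 - i1))
        rs = right[j1:j2] + [""] * max(0, (i2 - i1) - (j2 - j1))
        for la, lb in zip(ls, rs):
            marker = " " if tag == "equal" else "|"
            rows.append(f"{la:<{width}} {marker} {lb}")
    return "\n".join(rows)


print(side_by_side("- point one\n- point two", "- point one\n- point 2"))
```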
### `prompt validate` — catch errors before runtime

```bash
prompt validate summarize.prompt
# ✓ Valid YAML frontmatter
# ✓ All variables declared
# ✓ No undeclared variables in body
```
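The variable checks boil down to set arithmetic: compare the names declared in `vars:` against the `{{placeholders}}` actually used in the body. A minimal sketch of that idea (again, not the library's own implementation):

```python
import re


def check_variables(declared: set[str], body: str) -> list[str]:
    """Compare declared vars against {{placeholders}} used in the body."""
    used = set(re.findall(r"\{\{(\w+)\}\}", body))
    problems = []
    for name in sorted(used - declared):
        problems.append(f"undeclared variable in body: {name}")
    for name in sorted(declared - used):
        problems.append(f"declared but unused: {name}")
    return problems


body = "Summarize the following text as {{style}}:\n{{text}}"
print(check_variables({"text", "style"}, body))  # []
print(check_variables({"text"}, body))  # ['undeclared variable in body: style']
```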
### `prompt new` — scaffold a new prompt file interactively

```bash
prompt new my-prompt.prompt
```
## Providers supported

| Provider | Install extra | Auth env var |
|---|---|---|
| Anthropic (default) | `pip install "prompt-run[anthropic]"` | `ANTHROPIC_API_KEY` |
| OpenAI | `pip install "prompt-run[openai]"` | `OPENAI_API_KEY` |
| Ollama (local) | `pip install "prompt-run[ollama]"` | None |
Switch at runtime with `--provider` and `--model`. Nothing in the `.prompt` file needs to change.
## Use it as a Python library too

```python
from prompt_run import run_prompt

result = run_prompt(
    "summarize.prompt",
    vars={"text": "Your text here"},
    model="gpt-4o",
    provider="openai",
)
print(result.content)
```
## Use in CI / GitHub Actions

```yaml
- name: Smoke test prompt
  run: |
    pip install "prompt-run[anthropic]"
    prompt run prompts/classify.prompt --var input="test input" --dry-run
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```
## What this is NOT
I want to be explicit about scope:
- No web dashboard
- No prompt storage database
- No versioning system (git handles that)
- No evaluation/scoring framework
- No account required
- No telemetry — zero, none, nothing phones home
## Why not LangChain / promptfoo / Langfuse?

These are great tools. But they all add abstraction layers and, usually, a platform or account. prompt-run is deliberately minimal — it does exactly one thing: run `.prompt` files from the terminal. No opinions beyond that.
## Links
- GitHub: https://github.com/Maneesh-Relanto/Prompt-Run
- PyPI: https://pypi.org/project/prompt-run/
- License: MIT
Feedback, issues, and PRs welcome. What features would you want to see?