I kept iterating on prompts without knowing if my changes were making them better or worse, so I built a tool to help.
PromptLint is a CLI that analyzes prompts before you send them to LLMs. It gives you three scores:
Clarity (0-10)
- Detects ambiguous phrases like "maybe", "try to", "as needed"
- Checks for clear instructions and output format
- Flags conflicting instructions.
Cost Efficiency
- Counts tokens using tiktoken
- Estimates costs across 7 models (GPT-4o, Claude, Gemini, etc.)
- Shows you the $ impact before you run.
Security
- Detects prompt injection patterns
- Flags unvalidated user input
- Identifies risky operations
✓ This change improves the prompt It's free and open source.
Install: pip install promptlint
GitHub: https://github.com/fyunusa/promptlint
PyPI: https://pypi.org/project/promptlint`
Would love feedback on:
- What other analysis dimensions would be useful?
- Are the clarity heuristics accurate?
- What models should I add support for? Happy to answer questions!

Top comments (0)