Have you ever woken up to a massive API bill because your AI agent got stuck in a loop? I have.
Recently, I spent $47 debugging a LangGraph retry loop. The agent kept failing, LangGraph kept retrying, and OpenAI kept charging me, all while I was fast asleep.
I realized we need better guardrails for AI agents, so I built Shekel to spare you that expensive lesson.
What is Shekel?
Shekel is a zero-config, open-source Python library for LLM budget enforcement and cost tracking. It works with LangGraph, CrewAI, AutoGen, or any framework that calls OpenAI, Anthropic, or LiteLLM.
The best part? It takes exactly one line of code.
How it works
You wrap your agent execution in a budget context manager. If your agent hits the max USD spend, it stops the execution by raising a BudgetExceededError.
```python
from shekel import budget

# Enforce a hard cap of $1.00
with budget(max_usd=1.00):
    run_my_agent()  # raises BudgetExceededError if spend exceeds $1.00
```
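Under the hood, this kind of guardrail boils down to a context manager that tallies per-call cost and raises once the cap is crossed. Here's a minimal, self-contained sketch of the pattern — not Shekel's actual implementation; the `record_cost` callback and the tally logic are illustrative stand-ins:

```python
from contextlib import contextmanager


class BudgetExceededError(RuntimeError):
    """Raised when cumulative spend crosses the cap."""


@contextmanager
def budget(max_usd: float):
    """Minimal stand-in for a budget guard: tracks spend, raises on overrun."""
    state = {"spent": 0.0}

    def record_cost(usd: float):
        state["spent"] += usd
        if state["spent"] > max_usd:
            raise BudgetExceededError(
                f"spent ${state['spent']:.2f}, cap is ${max_usd:.2f}"
            )

    # A real library would hook the LLM client here instead of
    # handing the caller a callback.
    yield record_cost


# Each simulated LLM call costs $0.30; the third call breaks the $0.75 cap.
with budget(max_usd=0.75) as record_cost:
    try:
        for _ in range(5):
            record_cost(0.30)
    except BudgetExceededError as e:
        print(f"stopped: {e}")
```

The key design point is that enforcement happens at call time, not after the fact, so a runaway retry loop is cut off mid-flight rather than discovered on next month's invoice.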
Don't want to change your code? You can enforce it right from the CLI:
```shell
shekel run agent.py --budget 1
```
Cool Features
Beyond basic hard caps, I built Shekel to handle real-world agentic workflows:
- Smart Fallback: Automatically switch to cheaper models (like gpt-4o-mini) instead of crashing when you hit 80% of your budget.
- Nested Budgets: Track multi-stage workflows hierarchically (e.g., $2 for research, $5 for analysis).
- Tool Budgets: Cap the number of tool calls before they bankrupt you.
- OpenTelemetry & Langfuse: Export cost, utilization, and spend rates directly to your observability backend.
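To make the fallback idea concrete, here is a rough sketch of the routing logic such a feature implies — this is purely illustrative, not Shekel's API; the function name, threshold, and model names are assumptions:

```python
def pick_model(spent_usd: float, max_usd: float,
               primary: str = "gpt-4o", cheap: str = "gpt-4o-mini") -> str:
    """Route calls to the cheaper model once 80% of the budget is used."""
    return cheap if spent_usd >= 0.8 * max_usd else primary


print(pick_model(0.50, 1.00))  # well under budget -> gpt-4o
print(pick_model(0.85, 1.00))  # past the 80% mark -> gpt-4o-mini
```

The appeal of degrading to a cheaper model, rather than raising immediately, is that long-running workflows finish with reduced quality instead of losing all progress at the cap.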
Try it out!
You can install it via pip:
```shell
pip install shekel
# optional extras: shekel[anthropic], shekel[litellm], shekel[all]
```
Check out the GitHub repository (I'd love a star if you find it useful!) or read the full documentation.
I'd love to hear your thoughts: What's your worst "accidental cloud bill" story? Let me know in the comments!