Building autonomous agents with LLMs is exciting, but let's be honest: external APIs are unpredictable.
You've probably seen your agentic workflow crash because of a random `TimeoutError`, a `ConnectionError`, or the dreaded rate-limit error. In production, "trying again manually" isn't an option.
Last night, I built and released Veridian Guard — a lightweight, zero-dependency safety layer designed specifically to handle these failures gracefully.
## The Problem: Flaky APIs & Bloated Code
Traditionally, you'd wrap every call in a try-except block with a while loop for retries. It works, but it makes your code messy and hard to maintain — especially when dealing with complex asynchronous agent frameworks like LangChain or CrewAI.
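For reference, the manual pattern usually looks something like this. It's a generic sketch (the `call_flaky_api` function is a hypothetical stand-in for any unreliable call), and you end up copy-pasting it around every endpoint:

```python
import time

def call_flaky_api():
    # Hypothetical stand-in for a real API call that may fail.
    raise ConnectionError("simulated timeout")

def call_with_retries(max_retries=3, delay=1.0, fallback=None):
    # The boilerplate: loop, catch, sleep, and finally fall back.
    for attempt in range(max_retries):
        try:
            return call_flaky_api()
        except ConnectionError:
            time.sleep(delay)
    return fallback

print(call_with_retries(delay=0.0, fallback="gave up"))  # prints: gave up
```

Multiply that by every external call in your agent, and the noise quickly drowns out the actual logic.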
## The Solution: Veridian Guard 🌿
Veridian Guard provides a robust @guard decorator that manages retries, delays, and fallbacks with just one line of code.
## 🚀 Quick Start

```bash
pip install veridian-guard
```
Wrap any flaky function, and it's protected:
```python
from veridian.guard import guard
import random

@guard(max_retries=3, delay=1.0, fallback="Default safe response")
def call_llm_agent():
    # Simulate a flaky API that fails ~70% of the time.
    if random.random() < 0.7:
        raise ConnectionError("LLM API Timeout!")
    return "Agent succeeded!"

print(call_llm_agent())
```
## ⚡ Seamless Async/Await Support
One of the features I'm most proud of is automatic detection. Whether your function is synchronous (`def`) or asynchronous (`async def`), Veridian Guard knows exactly how to handle it. No extra configuration needed.
```python
import asyncio
from veridian.guard import guard

@guard(max_retries=3, delay=2.0, fallback={"status": "failed"})
async def fetch_data_from_llm():
    # Simulate an API call that always times out.
    await asyncio.sleep(1)
    raise TimeoutError("API is too busy!")

async def main():
    result = await fetch_data_from_llm()
    print(result)  # Output: {'status': 'failed'}

asyncio.run(main())
```
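If you're curious how sync/async detection can work in general, the standard trick is `inspect.iscoroutinefunction`: pick an async wrapper for coroutine functions and a plain one otherwise. This is an illustrative sketch of that technique, not Veridian Guard's actual source:

```python
import asyncio
import functools
import inspect
import time

def guard_sketch(max_retries=3, delay=1.0, fallback=None):
    """Illustrative retry decorator that adapts to sync and async functions."""
    def decorator(func):
        if inspect.iscoroutinefunction(func):
            # Async path: await the call and sleep without blocking the loop.
            @functools.wraps(func)
            async def async_wrapper(*args, **kwargs):
                for _ in range(max_retries):
                    try:
                        return await func(*args, **kwargs)
                    except Exception:
                        await asyncio.sleep(delay)
                return fallback
            return async_wrapper

        # Sync path: same retry loop with a blocking sleep.
        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs):
            for _ in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    time.sleep(delay)
            return fallback
        return sync_wrapper
    return decorator
```

Because the branch happens once, at decoration time, there's no per-call overhead for the type check.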
## ✨ Why Veridian Guard?

- **Zero Dependencies:** Pure Python. Keeps your environment clean and lightweight.
- **Smart Logging:** Automatically logs failed attempts so you can monitor where your agent is struggling.
- **Fail-Safe Fallbacks:** Ensure your main application loop never crashes again.
- **Error Tolerance:** Focus on the logic; let Guard handle the instability.
## 🛠️ Get Involved
I built this to solve a real pain point in my own AI projects at Vyno AI, and I hope it helps the community build more reliable autonomous systems.
I'd love to hear your feedback, suggestions, or see your contributions!
⭐ GitHub: https://github.com/ozereray/veridian
📦 PyPI: `veridian-guard`
Happy coding! 🌿🛡️