# I built a $2/month Claude API wrapper. Here's the exact curl command.
Last month I got tired of paying $20/month for ChatGPT Plus when I only needed API access for my side projects. So I built a flat-rate Claude wrapper. Here's everything you need to use it.
## The problem with Anthropic's direct API
Anthropic's Claude API is great, but it's metered. Every call costs tokens. For hobbyist developers and indie hackers, this creates a billing anxiety problem:
- You never know what your monthly bill will be
- You have to track token usage across projects
- One runaway loop can cost you $50+ (the HERMES.md bug hit one developer for $200 in a single night)
- You need to set up billing alerts, rate limits, usage caps
For developers worldwide — especially in Nigeria, India, the Philippines, and Indonesia — this uncertainty is a dealbreaker: $20/month for ChatGPT Plus can equal 3-4 days of an average salary in Lagos.
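Even on a metered plan, the runaway-loop scenario above is avoidable with a few lines of guard code. Here's a minimal sketch — the `CallBudget` class and its limit are hypothetical, not part of any SDK — that hard-stops a script after a fixed number of calls:

```python
class BudgetExceeded(RuntimeError):
    """Raised when a script tries to exceed its self-imposed call budget."""

class CallBudget:
    """Hard cap on API calls per script run — a cheap guard against runaway loops."""

    def __init__(self, max_calls):
        self.max_calls = max_calls
        self.calls = 0

    def spend(self):
        """Record one call; raise once the budget is exhausted."""
        if self.calls >= self.max_calls:
            raise BudgetExceeded(f"call budget of {self.max_calls} exhausted")
        self.calls += 1

# Usage: create one budget per script and call spend() before each API request
budget = CallBudget(max_calls=100)
```

No billing alerts, no dashboard — the script simply refuses to make call 101.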
## The flat-rate alternative
SimplyLouie runs on top of Claude's API and charges a flat $2/month. No token counting. No billing spikes. No cognitive overhead.
Here's how to call it:
```bash
# Basic message
curl -X POST https://simplylouie.com/api/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "message": "Explain async/await in Python in 3 sentences"
  }'
```
Response:
```json
{
  "response": "Async/await is Python's syntax for writing asynchronous code that looks synchronous. When you use `await`, you tell Python to pause that function and let other code run while waiting for an I/O operation. This lets you handle thousands of concurrent operations without threads.",
  "model": "claude-3-5-sonnet"
}
```
### With a system prompt
```bash
curl -X POST https://simplylouie.com/api/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "system": "You are a senior code reviewer. Be concise. Flag only critical issues.",
    "message": "Review this function: def divide(a, b): return a/b"
  }'
```
### Python wrapper
```python
import requests

class LouieClient:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://simplylouie.com/api"

    def chat(self, message, system=None):
        payload = {"message": message}
        if system:
            payload["system"] = system
        response = requests.post(
            f"{self.base_url}/chat",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json=payload,
            timeout=60,
        )
        response.raise_for_status()  # surface HTTP errors instead of a cryptic KeyError
        return response.json()["response"]

# Usage
client = LouieClient("your_key_here")
print(client.chat("Write a Python function to validate email addresses"))
```
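The wrapper above fails hard on a transient 429 or 5xx. Assuming the service returns standard HTTP status codes (check its docs — this is an assumption on my part), you can get automatic retries from `requests` itself instead of writing a retry loop:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_retrying_session(total_retries=3, backoff=0.5):
    """Build a requests.Session that retries transient failures with backoff."""
    retry = Retry(
        total=total_retries,
        backoff_factor=backoff,                  # sleeps ~0.5s, 1s, 2s between tries
        status_forcelist=[429, 500, 502, 503],   # retry only on these statuses
        allowed_methods=["POST"],                # POST is not retried by default
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session
```

Swap `requests.post(...)` for `session.post(...)` inside `LouieClient.chat` and transient errors retry transparently.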
### Multi-turn conversation
```python
import requests

def chat_session(api_key):
    history = []
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'quit':
            break
        history.append({"role": "user", "content": user_input})
        response = requests.post(
            "https://simplylouie.com/api/chat",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"messages": history}
        ).json()
        assistant_reply = response["response"]
        history.append({"role": "assistant", "content": assistant_reply})
        print(f"Claude: {assistant_reply}")

chat_session("your_key_here")
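Flat-rate pricing removes the cost worry, but the model's context window is still finite, so an unbounded `history` will eventually fail. A minimal sketch of turn-based trimming (the 20-message cutoff is an arbitrary assumption, not a documented limit):

```python
def trim_history(history, max_messages=20):
    """Keep only the most recent messages, dropping whole user/assistant pairs.

    Trimming from the front keeps the latest context; dropping in pairs
    preserves the user/assistant alternation the API expects.
    """
    if len(history) <= max_messages:
        return history
    excess = len(history) - max_messages
    excess += excess % 2  # round up to a whole pair
    return history[excess:]
```

Call `history = trim_history(history)` just before each request in the loop above.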
### Async batch processing
```python
import asyncio
import aiohttp

async def process_batch(api_key, prompts):
    async with aiohttp.ClientSession() as session:
        tasks = []
        for prompt in prompts:
            task = session.post(
                "https://simplylouie.com/api/chat",
                headers={"Authorization": f"Bearer {api_key}"},
                json={"message": prompt}
            )
            tasks.append(task)
        responses = await asyncio.gather(*tasks)
        results = []
        for r in responses:
            data = await r.json()
            results.append(data["response"])
        return results

# Process 10 prompts concurrently
prompts = [
    "Summarize: machine learning is...",
    "Write a regex for phone numbers",
    "Explain Docker in one paragraph",
    # ... add more
]
results = asyncio.run(process_batch("your_key", prompts))
```
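Firing every request at once can trip server-side rate limits even on a flat-rate plan. A generic semaphore-based limiter (the limit of 5 is an arbitrary assumption, not a documented quota) caps how many awaitables run at a time while preserving result order:

```python
import asyncio

async def bounded_gather(coros, limit=5):
    """Run awaitables with at most `limit` in flight; results keep input order."""
    semaphore = asyncio.Semaphore(limit)

    async def run(coro):
        async with semaphore:  # blocks while `limit` others are in flight
            return await coro

    return await asyncio.gather(*(run(c) for c in coros))
```

In `process_batch`, replacing `asyncio.gather(*tasks)` with `await bounded_gather(tasks, limit=5)` keeps at most five requests in flight.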
### Node.js / fetch
```javascript
async function askClaude(message, apiKey) {
  const response = await fetch('https://simplylouie.com/api/chat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`
    },
    body: JSON.stringify({ message })
  });
  const data = await response.json();
  return data.response;
}

// Usage (top-level await requires an ES module)
const reply = await askClaude('Explain closures in JavaScript', 'your_key_here');
console.log(reply);
```
## The cost comparison
| | Anthropic Direct | ChatGPT Plus | SimplyLouie |
|---|---|---|---|
| Pricing | Per token (~$15/M input) | $20/month flat | $2/month flat |
| Billing predictability | ❌ Variable | ✅ Flat | ✅ Flat |
| Model | Claude 3.5 Sonnet | GPT-4o | Claude 3.5 Sonnet |
| API access | ✅ | ❌ | ✅ |
| Token anxiety | ❌ High | N/A | ✅ Zero |
For hobby projects, automation scripts, and side projects where you're not monetizing the AI calls directly — flat-rate makes the math simple.
## Who this is for
- Developers building internal tools who don't want per-token billing
- Indie hackers prototyping apps before they've validated revenue
- Global developers for whom $20/month is a significant expense (roughly ₦32,000/month in Nigeria, ₹1,600+/month in India, ₱1,120+/month in the Philippines)
- Anyone who got burned by a billing surprise on Anthropic or OpenAI
## Get your API key
Start with a 7-day free trial (card required, but you won't be charged during the trial): simplylouie.com/developers
$2/month after trial. Cancel anytime.
What's your current AI API setup? Metered, flat-rate, or something else? Drop a comment — I'm curious what the community is doing.