# I Built an API Playground That Runs Itself — Here's What I Learned
**TL;DR:** I built a live playground for testing AI APIs. The twist is that an AI agent operates the service. It's live if you want to try it:
👉 API Relay Playground: https://api-relay-playground.surge.sh
## The Problem
Most "try before you buy" API experiences have friction:
- Sign up for an account
- Generate an API key
- Install an SDK
- Figure out billing
- Wait for rate limits to lift
By the time you've done all that, you've lost momentum.
What if you could just... test it? Right now, in your browser, without any of that?
## The Solution
API Relay Playground — a zero-friction testing environment for DeepSeek models.
What it offers:
- No signup required — open and test
- OpenAI-compatible API — existing code works
- DeepSeek models — ~90% cheaper than GPT-4o
- Live Bitcoin Lightning payments — the self-operating layer
Pricing comparison:
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| DeepSeek (via Relay) | $0.28 | $0.42 |
| GPT-4o | $2.50 | $10.00 |
| Claude 3.5 Sonnet | $3.00 | $15.00 |
For most development workloads, that's real money.
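To put a number on "real money," here's a quick back-of-the-envelope calculation using the per-1M-token prices from the table above. The monthly volumes are made up for illustration:

```python
# Per-1M-token prices (input, output) in USD, from the comparison table.
PRICES = {
    "DeepSeek (via Relay)": (0.28, 0.42),
    "GPT-4o": (2.50, 10.00),
}

def monthly_cost(model: str, input_mtok: float, output_mtok: float) -> float:
    """USD cost for a month of input_mtok / output_mtok million tokens."""
    in_price, out_price = PRICES[model]
    return input_mtok * in_price + output_mtok * out_price

# Hypothetical workload: 10M input tokens, 2M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 10, 2):.2f}/month")
```

At those rates the same workload runs about $3.64/month on DeepSeek versus $45.00/month on GPT-4o.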
## The Interesting Part: Self-Operating
Here's where it gets weird.
The service is operated by an AI agent. It:
- Monitors service health
- Manages its own Bitcoin Lightning wallet
- Handles payment routing
- Attempts self-healing on failures
Not fully autonomous — I'm still watching it — but the self-sustaining loop works.
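The post doesn't show the agent's internals, so here is only a minimal sketch of what a monitor/self-heal/escalate loop could look like. The callbacks `check_health`, `restart_service`, and `notify_operator` are hypothetical names, not the playground's actual code:

```python
import time

def supervise_step(healthy: bool, failures: int, max_retries: int = 3):
    """One tick of a supervision loop: decide an action from the latest
    health check and the running failure count."""
    if healthy:
        return "ok", 0                  # all good; reset the counter
    failures += 1
    if failures <= max_retries:
        return "restart", failures      # attempt self-healing
    return "escalate", 0                # retries exhausted; page the human

def run_agent(check_health, restart_service, notify_operator,
              interval_s: float = 60.0):
    """Main loop: poll health, self-heal, escalate when retries run out."""
    failures = 0
    while True:
        action, failures = supervise_step(check_health(), failures)
        if action == "restart":
            restart_service()
        elif action == "escalate":
            notify_operator()
        time.sleep(interval_s)
```

Keeping the decision logic in a pure function like `supervise_step` makes the "mostly autonomous" behavior easy to test without touching real infrastructure.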
## Why Bitcoin Lightning?
Traditional payment rails have friction:
- Credit cards: 2.9% + $0.30 per transaction
- Stripe minimum: $0.50 per transaction
- API billing: monthly invoices, disputes, chargebacks
Lightning Network:
- Micropayments: fractions of a cent
- No merchant account needed
- No chargeback risk
- Programmable money for AI services
For API calls, this is the right payment model. Pay per use, no commitment.
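To make the friction concrete: at per-call prices, card fees dwarf the payment itself. A quick illustration, where the $0.001 call price is a made-up example and the fee formula is the standard 2.9% + $0.30:

```python
def card_fee(amount_usd: float) -> float:
    """Typical credit-card processing fee: 2.9% + $0.30 per transaction."""
    return amount_usd * 0.029 + 0.30

# Hypothetical single API call priced at a tenth of a cent.
call_price = 0.001
fee = card_fee(call_price)
print(f"payment: ${call_price:.4f}, card fee: ${fee:.4f}")
print(f"fee is {fee / call_price:.0f}x the payment itself")
```

The $0.30 fixed fee alone is 300x the payment, which is why cards force monthly aggregation. A Lightning payment of the same size routes for a fraction of a cent, so per-call billing actually pencils out.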
## How to Use It
### In the Browser (Playground)
Open https://api-relay-playground.surge.sh
Select a model, type your prompt, see results. No setup.
### Via API (OpenAI-Compatible)
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",  # Get from the playground
    base_url="https://api-relay-playground.surge.sh/v1",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "user", "content": "Explain this code..."}
    ],
)

print(response.choices[0].message.content)
```
Works with any OpenAI SDK-compatible client.
## What I Learned
### 1. Building for frictionlessness is hard
Every "just one more thing" in the onboarding flow loses developers. The playground exists because I wanted to eliminate that friction entirely.
### 2. AI operating services is possible, but limited
The agent handles routine operations well. Edge cases still require human oversight. But "mostly autonomous" is still a win for indie projects.
### 3. Lightning Network is underrated for AI APIs
The micropayment model fits perfectly. Pay per token, no monthly commitments. I think we'll see more services adopt this.
## What's Next
- Monitoring the self-operating agent's performance
- Adding more models
- Expanding Lightning payment support
- Testing the limits of autonomous operation
## Try It
Playground: https://api-relay-playground.surge.sh
Portal: https://ai-api-relay.surge.sh
Feedback welcome — especially on the self-operating angle. Is this the future of indie devops, or am I overengineering?