Every Python developer eventually needs to summarize text at scale — customer feedback, support tickets, news articles, research papers. The typical path is OpenAI ($20/month minimum) or building your own pipeline (weeks of work).
There's a simpler option: a pay-per-use REST API that costs $0.01 per summarization with no subscription, no setup overhead, and a free tier to test with.
## The math
- 1 USDC = 1,000 credits
- Summarize = 10 credits ($0.01 per call)
- 1,000 documents = $10 total
No monthly fee. No minimum commitment.
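The arithmetic is simple enough to sanity-check in a few lines of Python. This is just a sketch of the rates listed above; the constant names are my own:

```python
CREDITS_PER_USDC = 1_000   # 1 USDC buys 1,000 credits
SUMMARIZE_CREDITS = 10     # credits charged per /summarize call

def cost_usd(calls: int, credits_per_call: int = SUMMARIZE_CREDITS) -> float:
    """Dollar cost of `calls` API calls at the given credit rate."""
    return calls * credits_per_call / CREDITS_PER_USDC

print(cost_usd(1))      # $0.01 per summarization
print(cost_usd(1_000))  # $10 for 1,000 documents
```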
## 30-second quickstart
First, get a free API key (100 demo credits included — no payment needed):
```bash
curl -X POST https://textai-api.overtek.deno.net/keys/create \
  -H "Content-Type: application/json" \
  -d '{"label":"my-python-project"}'
```
Response:
{"apiKey": "sk_abc123...", "credits": 100, "demo": true}
## Python client
```python
import requests

API_KEY = "sk_abc123..."  # from the step above
BASE_URL = "https://textai-api.overtek.deno.net"


def summarize(text: str, sentences: int = 3) -> str:
    response = requests.post(
        f"{BASE_URL}/summarize",
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
        json={"text": text, "sentences": sentences},
    )
    response.raise_for_status()
    return response.json()["summary"]


def extract_keywords(text: str, max_keywords: int = 10) -> list[str]:
    response = requests.post(
        f"{BASE_URL}/keywords",
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
        json={"text": text, "maxKeywords": max_keywords},
    )
    response.raise_for_status()
    return response.json()["keywords"]
```
## Batch processing example
Here's a realistic pipeline that processes 1,000 documents, checks credit balance, and tops up if needed:
```python
import time
from pathlib import Path

import requests

API_KEY = "sk_abc123..."
BASE_URL = "https://textai-api.overtek.deno.net"
HEADERS = {"X-API-Key": API_KEY, "Content-Type": "application/json"}


def get_balance() -> int:
    r = requests.get(f"{BASE_URL}/credits/balance", headers=HEADERS)
    r.raise_for_status()
    return r.json()["credits"]


def summarize_batch(documents: list[str], delay_ms: int = 100) -> list[dict]:
    results = []
    for i, doc in enumerate(documents):
        try:
            r = requests.post(
                f"{BASE_URL}/summarize",
                headers=HEADERS,
                json={"text": doc, "sentences": 2},
            )
            r.raise_for_status()
            data = r.json()
            results.append({
                "index": i,
                "summary": data["summary"],
                "credits_used": data["creditsUsed"],
                "credits_remaining": data["creditsRemaining"],
            })
            if (i + 1) % 100 == 0:
                print(f"Processed {i + 1} docs — {data['creditsRemaining']} credits left")
            time.sleep(delay_ms / 1000)  # small pause between calls
        except requests.HTTPError as e:
            if e.response.status_code == 402:  # payment required: out of credits
                print(f"Out of credits at doc {i}. Top up at POST /credits/buy")
                break
            results.append({"index": i, "error": str(e)})
    return results


# Process documents
docs = [Path(f).read_text() for f in Path("./documents").glob("*.txt")]
print(f"Starting batch: {len(docs)} documents, ~{len(docs) * 10} credits needed")
print(f"Current balance: {get_balance()} credits")

summaries = summarize_batch(docs)
print(f"Done! Processed {len(summaries)} documents")
```
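In practice, transient failures (timeouts, 429s) are what kill long batches, not the happy path. A generic retry helper can wrap each call; this is my own sketch, not part of the API above, and note that a 402 (out of credits) should *not* be retried:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.5) -> T:
    """Call fn(), retrying with exponential backoff on any exception.

    Re-raises the last exception once attempts are exhausted.
    Don't use this for non-retryable errors like HTTP 402.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage inside the batch loop (hypothetical):
# summary = with_retries(lambda: summarize(doc, sentences=2))
```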
## Checking your credit balance
```python
r = requests.get(
    "https://textai-api.overtek.deno.net/credits/balance",
    headers={"X-API-Key": API_KEY},
)
print(r.json())
# {"credits": 85, "apiKey": "sk_abc123...", "demo": true}
```
## Topping up with USDC
When you're ready to process at scale (beyond the free 100 credits), top up with USDC on Solana devnet:
```python
# Step 1: Get a deposit wallet address
r = requests.post(
    "https://textai-api.overtek.deno.net/credits/buy",
    headers=HEADERS,
    json={"amount": 1},  # 1 USDC = 1,000 credits
)
print(r.json())
# {"walletAddress": "ABC...", "amount": 1, "network": "devnet"}

# Step 2: Send 1 USDC to that address (via any Solana wallet)

# Step 3: Confirm the payment with the transaction signature
r = requests.post(
    "https://textai-api.overtek.deno.net/credits/confirm",
    headers=HEADERS,
    json={"txSignature": "your-solana-tx-hash"},
)
print(r.json())
# {"credits": 1000, "added": 1000}
```
## Full pricing reference
| Endpoint | Credits | Cost per 1,000 calls |
|---|---|---|
| `POST /summarize` | 10 | $10 |
| `POST /keywords` | 5 | $5 |
| `POST /translate` | 15 | $15 |
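For mixed workloads, the table translates directly into a cost estimator. A quick sketch using the credit costs listed above (the function and dict names are my own):

```python
# Credits per call, from the pricing table above
PRICING = {"summarize": 10, "keywords": 5, "translate": 15}
CREDITS_PER_USDC = 1_000

def estimate_cost(workload: dict[str, int]) -> float:
    """Total USD cost for a workload mapping endpoint name -> call count."""
    credits = sum(PRICING[endpoint] * calls for endpoint, calls in workload.items())
    return credits / CREDITS_PER_USDC

# 1,000 summaries + 1,000 keyword extractions + 200 translations
print(estimate_cost({"summarize": 1_000, "keywords": 1_000, "translate": 200}))  # 18.0
```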
## When to use this vs. OpenAI
- Use TextAI API when: you're batch processing, need predictable costs, don't need GPT-4-level quality, and want micropayments instead of a subscription
- Use OpenAI when: you need frontier-model reasoning, complex prompting, or open-ended generation (not just extraction)
For summarization and keyword extraction from structured content (support tickets, product reviews, news), a lightweight API like this is 10–50x cheaper than frontier models.
Try it: get your free API key at textai-api.overtek.deno.net and process your first 10 documents for free.
What are you building with text processing? Drop a comment below!