Every developer has that one API trick they discovered way too late.
For me, it was learning that most websites have hidden JSON endpoints you can use instead of scraping HTML.
For example:

- Add `.json` to any Reddit URL → structured data, no scraping needed
- GitHub's API gives you 5,000 requests/hour with a free token
- Dev.to's API lets you programmatically publish articles (that's how I post!)
- Most news sites have RSS feeds that are basically free APIs
Here's a quick example — getting Reddit data without any API key:
```python
import requests

url = "https://www.reddit.com/r/programming/top.json?limit=10&t=week"
headers = {"User-Agent": "MyApp/1.0"}

response = requests.get(url, headers=headers)
posts = response.json()["data"]["children"]

for post in posts:
    data = post["data"]
    print(f"{data['score']:>5} | {data['title'][:60]}")
```
No API key. No OAuth. No rate limit headaches (if you're reasonable).
My favorite "hidden" APIs:
- Reddit's `.json` endpoint — free, no auth
- Hacker News Firebase API — real-time, free forever
- Wikipedia API — unlimited, no key needed
- crt.sh — free SSL certificate search (great for security research)
- npms.io — npm package analysis without npm's rate limits
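To show one of these in action, here's a minimal sketch using the Hacker News Firebase API from the list above. The base URL and endpoints (`/topstories.json`, `/item/{id}.json`) are the real public ones; the helper functions and the one-line formatting are my own illustration:

```python
# Sketch: reading top Hacker News stories via the public Firebase API.
# No key, no auth. Helper names below are my own, not from any SDK.
import requests

BASE = "https://hacker-news.firebaseio.com/v0"


def item_url(item_id: int) -> str:
    """URL for a single item (story, comment, etc.)."""
    return f"{BASE}/item/{item_id}.json"


def format_story(story: dict) -> str:
    """One-line summary of a story dict as returned by the API."""
    return f"{story.get('score', 0):>5} | {story.get('title', '')[:60]}"


def top_stories(limit: int = 5) -> list[dict]:
    """Fetch the first `limit` top stories (network required)."""
    ids = requests.get(f"{BASE}/topstories.json", timeout=10).json()
    return [requests.get(item_url(i), timeout=10).json() for i in ids[:limit]]


if __name__ == "__main__":
    for story in top_stories():
        print(format_story(story))
```

Same pattern as the Reddit snippet: one GET per resource, plain JSON back, a dict on the Python side.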
The tip I wish I'd known earlier:
Always check for an API before you scrape. I spent weeks building HTML scrapers for sites that had perfectly good JSON APIs hiding in plain sight. Literally just open DevTools → Network tab → filter by XHR/Fetch → find the API endpoints the site itself uses.
Your turn! What's your "I wish I knew this earlier" API tip?
Could be:
- A hidden endpoint you discovered
- A rate limit workaround
- An API that replaced hours of manual work
- A free API that people pay for alternatives to
Drop it in the comments — I'm building a collection of these tips for a comprehensive guide. I'll credit everyone who contributes! 👇