Wikipedia is the largest encyclopedia ever created — 6.8 million English articles. And it has a completely free API: no API key, no signup, and rate limits so generous you will never hit them by hand.
Here is how to use it.
Get a Summary of Any Topic
curl -s "https://en.wikipedia.org/api/rest_v1/page/summary/Python_(programming_language)" | jq '{title: .title, extract: .extract}'
Response:
{
  "title": "Python (programming language)",
  "extract": "Python is a high-level, general-purpose programming language..."
}
No API key. No signup. Just call it.
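The same call works from Python. Here is a minimal sketch using the requests library; the User-Agent string is a placeholder — identify your own project in real use.

```python
import requests

# Placeholder User-Agent -- replace with your own app name and contact.
HEADERS = {'User-Agent': 'my-demo-app/0.1 (contact@example.com)'}

def summary_url(title):
    # Spaces in article titles become underscores in the REST path.
    return ('https://en.wikipedia.org/api/rest_v1/page/summary/'
            + title.replace(' ', '_'))

def get_summary(title):
    resp = requests.get(summary_url(title), headers=HEADERS, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {'title': data['title'], 'extract': data['extract']}

# Usage: get_summary('Python (programming language)')
```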
Search Wikipedia
const response = await fetch(
  'https://en.wikipedia.org/w/api.php?' + new URLSearchParams({
    action: 'query',
    list: 'search',
    srsearch: 'machine learning',
    format: 'json',
    origin: '*'
  })
);
const data = await response.json();
const results = data.query.search;

results.forEach(r => {
  console.log(`${r.title} — ${r.snippet.replace(/<[^>]*>/g, '')}`);
});
Returns title, snippet, word count, and timestamp for each result.
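A single request returns at most one page of results. A Python sketch of paging through more of them, using the `continue` token the Action API returns; the per-page size of 50 is a conservative choice, not a documented maximum.

```python
import re
import requests

def strip_html(snippet):
    # Snippets wrap matched terms in <span> tags; strip them for plain text.
    return re.sub(r'<[^>]*>', '', snippet)

def search_all(query, limit=25):
    # Page through results: the API returns a 'continue' token until
    # the result set is exhausted.
    params = {'action': 'query', 'list': 'search', 'srsearch': query,
              'srlimit': min(limit, 50), 'format': 'json'}
    fetched = 0
    while fetched < limit:
        data = requests.get('https://en.wikipedia.org/w/api.php',
                            params=params, timeout=10).json()
        for hit in data['query']['search']:
            yield hit['title'], strip_html(hit['snippet'])
            fetched += 1
            if fetched >= limit:
                return
        if 'continue' not in data:
            return
        params.update(data['continue'])
```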
Get the Full Article Content
import requests

def get_article(title):
    response = requests.get('https://en.wikipedia.org/w/api.php', params={
        'action': 'query',
        'titles': title,
        'prop': 'extracts',
        'explaintext': True,
        'format': 'json'
    })
    pages = response.json()['query']['pages']
    page = next(iter(pages.values()))
    return page.get('extract', 'Not found')

article = get_article('Web scraping')
print(article[:500])
The explaintext parameter gives you plain text instead of HTML.
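When the full article is too long, the TextExtracts module also accepts an exintro parameter that limits the extract to the lead section. A sketch:

```python
import requests

def intro_params(title):
    # Same query as before, plus 'exintro' to fetch only the lead section.
    return {
        'action': 'query',
        'titles': title,
        'prop': 'extracts',
        'exintro': True,       # lead section only
        'explaintext': True,   # plain text instead of HTML
        'format': 'json',
    }

def get_intro(title):
    response = requests.get('https://en.wikipedia.org/w/api.php',
                            params=intro_params(title), timeout=10)
    pages = response.json()['query']['pages']
    page = next(iter(pages.values()))
    return page.get('extract', 'Not found')
```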
Get Article Images
def get_images(title):
    response = requests.get('https://en.wikipedia.org/w/api.php', params={
        'action': 'query',
        'titles': title,
        'prop': 'images',
        'format': 'json'
    })
    pages = response.json()['query']['pages']
    page = next(iter(pages.values()))
    return [img['title'] for img in page.get('images', [])]

images = get_images('Python (programming language)')
for img in images[:5]:
    print(img)
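Note that prop=images returns "File:..." titles, not URLs. To get a direct URL you can follow up with prop=imageinfo and iiprop=url. A sketch:

```python
import requests

def image_url_params(file_title):
    # prop=imageinfo with iiprop=url resolves a 'File:...' title to a URL.
    return {
        'action': 'query',
        'titles': file_title,
        'prop': 'imageinfo',
        'iiprop': 'url',
        'format': 'json'
    }

def get_image_url(file_title):
    data = requests.get('https://en.wikipedia.org/w/api.php',
                        params=image_url_params(file_title), timeout=10).json()
    page = next(iter(data['query']['pages'].values()))
    info = page.get('imageinfo')
    return info[0]['url'] if info else None
```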
Get Random Articles
curl -s "https://en.wikipedia.org/api/rest_v1/page/random/summary" | jq '{title: .title, extract: .extract}'
Great for building quiz apps, trivia games, or content discovery tools.
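As a quiz-app sketch in Python: each call to the random endpoint yields a fresh article, and truncating the extract keeps the hint from giving the answer away. The 120-character hint length is an arbitrary choice.

```python
import requests

def make_hint(extract, length=120):
    # Truncate the extract so the hint does not spoil the answer.
    return extract[:length]

def random_question():
    data = requests.get(
        'https://en.wikipedia.org/api/rest_v1/page/random/summary',
        timeout=10).json()
    return {'answer': data['title'],
            'hint': make_hint(data.get('extract', ''))}
```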
Real Use Cases
- Content enrichment — Add Wikipedia summaries to your app (product pages, educational platforms)
- Knowledge graphs — Extract structured data from Wikidata (linked to Wikipedia)
- Research tools — Search and extract academic topics programmatically
- Quiz/trivia apps — Random article endpoint is perfect for this
- SEO research — Find what topics have Wikipedia pages (high-authority content signals)
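For the knowledge-graph case, the bridge from Wikipedia to Wikidata is the page's wikibase_item property, exposed via prop=pageprops. A sketch:

```python
import requests

def wikidata_params(title):
    # prop=pageprops with ppprop=wikibase_item returns the linked
    # Wikidata item ID for a Wikipedia page.
    return {
        'action': 'query',
        'titles': title,
        'prop': 'pageprops',
        'ppprop': 'wikibase_item',
        'format': 'json'
    }

def get_wikidata_id(title):
    data = requests.get('https://en.wikipedia.org/w/api.php',
                        params=wikidata_params(title), timeout=10).json()
    page = next(iter(data['query']['pages'].values()))
    return page.get('pageprops', {}).get('wikibase_item')

# Usage: get_wikidata_id('Python (programming language)')
```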
Multi-Language Support
Just change the subdomain:
# French
curl "https://fr.wikipedia.org/api/rest_v1/page/summary/Python"
# German
curl "https://de.wikipedia.org/api/rest_v1/page/summary/Python"
# Japanese
curl "https://ja.wikipedia.org/api/rest_v1/page/summary/Python"
300+ languages available.
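Since only the subdomain changes, fetching the same topic across several editions is a short loop. A sketch (note that article titles may differ between languages):

```python
import requests

def summary_endpoint(lang, title):
    # Only the subdomain changes between language editions.
    return f'https://{lang}.wikipedia.org/api/rest_v1/page/summary/{title}'

def summaries(title, langs=('fr', 'de', 'ja')):
    out = {}
    for lang in langs:
        data = requests.get(summary_endpoint(lang, title), timeout=10).json()
        out[lang] = data.get('extract', '')
    return out
```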
Tips for Using the API
- Use the REST API (/api/rest_v1/) for simple operations — it is faster and cleaner
- Use the Action API (/w/api.php) for advanced queries — more powerful but verbose
- Set a User-Agent header — Wikipedia asks for it in their guidelines
- Cache responses — article content does not change every minute
- Respect guidelines — no more than 200 requests/second (you will never hit this manually)
What would you build with this?
I am curious — if you had access to 6.8 million articles via API, what would you create? A research tool? A chatbot? Something else entirely?
I write about free APIs, web scraping, and developer tools. Follow for weekly discoveries.
More free APIs: 8 Free APIs That Are Genuinely Useful
More from me: 10 Dev Tools I Use Daily | 77 Scrapers on a Schedule | 150+ Free APIs