DEV Community

Alex Spinov

Wikipedia Has a Free API — Extract Any Article, Summary, or Image Programmatically

Wikipedia is the largest encyclopedia ever created, with roughly 6.8 million English articles. And it has a completely free API: no API key, no signup, and generous rate limits.

Here is how to use it.


Get a Summary of Any Topic

curl -s "https://en.wikipedia.org/api/rest_v1/page/summary/Python_(programming_language)" | jq '{title: .title, extract: .extract}'

Response:

{
  "title": "Python (programming language)",
  "extract": "Python is a high-level, general-purpose programming language..."
}

No API key. No signup. Just call it.
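The same call works from Python. Per Wikipedia's guidelines (see the tips below), send a descriptive User-Agent; the string in this sketch is a placeholder, so substitute your own app name and contact:

```python
import requests

# Placeholder User-Agent — replace with your app's name and a contact address.
HEADERS = {"User-Agent": "wiki-demo/0.1 (you@example.com)"}

def summary_url(title: str, lang: str = "en") -> str:
    """Build the REST summary endpoint URL for a page title."""
    return f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/{title}"

def get_summary(title: str, lang: str = "en") -> dict:
    """Fetch a page summary and return just the title and extract."""
    resp = requests.get(summary_url(title, lang), headers=HEADERS)
    resp.raise_for_status()
    data = resp.json()
    return {"title": data["title"], "extract": data["extract"]}
```

`get_summary("Python_(programming_language)")` returns the same JSON fields as the curl example above.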


Search Wikipedia

const response = await fetch(
  'https://en.wikipedia.org/w/api.php?' + new URLSearchParams({
    action: 'query',
    list: 'search',
    srsearch: 'machine learning',
    format: 'json',
    origin: '*'
  })
);

const data = await response.json();
const results = data.query.search;

results.forEach(r => {
  console.log(`${r.title}: ${r.snippet.replace(/<[^>]*>/g, '')}`);
});

Returns title, snippet, word count, and timestamp for each result.
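The search list also supports paging, so you can walk beyond the first batch of results with srlimit and sroffset. A Python sketch (the User-Agent string is a placeholder):

```python
import requests

# Placeholder User-Agent — replace with your app's name and a contact address.
HEADERS = {"User-Agent": "wiki-demo/0.1 (you@example.com)"}

def search_params(query: str, offset: int = 0, limit: int = 10) -> dict:
    """Build Action API params for a paged full-text search."""
    return {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srlimit": limit,    # results per page (max 500 for most clients)
        "sroffset": offset,  # where to resume on the next page
        "format": "json",
    }

def search(query: str, offset: int = 0, limit: int = 10) -> list:
    resp = requests.get("https://en.wikipedia.org/w/api.php",
                        params=search_params(query, offset, limit),
                        headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["query"]["search"]
```

Call `search("machine learning", offset=10)` to fetch the second page of ten results.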


Get the Full Article Content

import requests

def get_article(title):
    response = requests.get('https://en.wikipedia.org/w/api.php', params={
        'action': 'query',
        'titles': title,
        'prop': 'extracts',
        'explaintext': True,
        'format': 'json'
    })

    pages = response.json()['query']['pages']
    page = next(iter(pages.values()))
    return page.get('extract', 'Not found')

article = get_article('Web scraping')
print(article[:500])

The explaintext parameter gives you plain text instead of HTML.


Get Article Images

def get_images(title):
    response = requests.get('https://en.wikipedia.org/w/api.php', params={
        'action': 'query',
        'titles': title,
        'prop': 'images',
        'format': 'json'
    })

    pages = response.json()['query']['pages']
    page = next(iter(pages.values()))
    return [img['title'] for img in page.get('images', [])]

images = get_images('Python (programming language)')
for img in images[:5]:
    print(img)
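Note that prop=images returns File: page titles, not direct links. To turn those into downloadable URLs you can chain a second query with prop=imageinfo. A sketch (the User-Agent string is a placeholder):

```python
import requests

# Placeholder User-Agent — replace with your app's name and a contact address.
HEADERS = {"User-Agent": "wiki-demo/0.1 (you@example.com)"}

def image_info_params(file_titles: list) -> dict:
    """Build Action API params that resolve File: titles to direct URLs."""
    return {
        "action": "query",
        "titles": "|".join(file_titles),  # up to 50 titles per request
        "prop": "imageinfo",
        "iiprop": "url",
        "format": "json",
    }

def get_image_urls(file_titles: list) -> list:
    resp = requests.get("https://en.wikipedia.org/w/api.php",
                        params=image_info_params(file_titles), headers=HEADERS)
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    return [p["imageinfo"][0]["url"] for p in pages.values() if "imageinfo" in p]
```

Feed it the titles from get_images above to get URLs you can actually download.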

Get Random Articles

curl -s "https://en.wikipedia.org/api/rest_v1/page/random/summary" | jq '{title: .title, extract: .extract}'

Great for building quiz apps, trivia games, or content discovery tools.


Real Use Cases

  1. Content enrichment — Add Wikipedia summaries to your app (product pages, educational platforms)
  2. Knowledge graphs — Extract structured data from Wikidata (linked to Wikipedia)
  3. Research tools — Search and extract academic topics programmatically
  4. Quiz/trivia apps — Random article endpoint is perfect for this
  5. SEO research — Find what topics have Wikipedia pages (high-authority content signals)
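The Wikidata link mentioned in use case 2 is just as easy to query: every item has a stable JSON endpoint at Special:EntityData. A sketch (Q42 is Douglas Adams; the User-Agent string is a placeholder):

```python
import requests

# Placeholder User-Agent — replace with your app's name and a contact address.
HEADERS = {"User-Agent": "wiki-demo/0.1 (you@example.com)"}

def entity_url(qid: str) -> str:
    """Build the Special:EntityData JSON URL for a Wikidata item."""
    return f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"

def get_entity(qid: str) -> dict:
    """Fetch a Wikidata item's full JSON record (labels, claims, sitelinks)."""
    resp = requests.get(entity_url(qid), headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["entities"][qid]
```

The returned record links back to the corresponding Wikipedia articles in every language via its sitelinks.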

Multi-Language Support

Just change the subdomain:

# French
curl "https://fr.wikipedia.org/api/rest_v1/page/summary/Python"

# German
curl "https://de.wikipedia.org/api/rest_v1/page/summary/Python"

# Japanese
curl "https://ja.wikipedia.org/api/rest_v1/page/summary/Python"

300+ languages available.


Tips for Using the API

  1. Use the REST API (/api/rest_v1/) for simple operations — it is faster and cleaner
  2. Use the Action API (/w/api.php) for advanced queries — more powerful but verbose
  3. Set a User-Agent header — Wikipedia asks for it in their guidelines
  4. Cache responses — Article content does not change every minute
  5. Respect guidelines — No more than 200 requests/second (you will never hit this manually)
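Tips 3 and 4 combine nicely in a few lines: a shared session carries the User-Agent on every request, and an in-memory cache avoids refetching the same article. A minimal sketch (the User-Agent string is a placeholder):

```python
import requests
from functools import lru_cache

# One session reuses connections and carries the header on every call.
# Placeholder User-Agent — replace with your app's name and a contact address.
session = requests.Session()
session.headers["User-Agent"] = "wiki-demo/0.1 (you@example.com)"

@lru_cache(maxsize=256)
def cached_summary(title: str, lang: str = "en") -> str:
    """Fetch a page extract; repeat calls for the same title hit the cache."""
    url = f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = session.get(url)
    resp.raise_for_status()
    return resp.json()["extract"]
```

For anything longer-lived than a script run, swap lru_cache for a cache with a TTL so edits to articles eventually show up.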

What would you build with this?

I am curious — if you had access to 6.8 million articles via API, what would you create? A research tool? A chatbot? Something else entirely?


I write about free APIs, web scraping, and developer tools. Follow for weekly discoveries.

More free APIs: 8 Free APIs That Are Genuinely Useful


More from me: 10 Dev Tools I Use Daily | 77 Scrapers on a Schedule | 150+ Free APIs
