agenthustler

Posted on

How to Build a Real-Time Election Results Tracker with Python

Why Track Elections in Real-Time?

Election nights are data bonanzas. News sites update results every few seconds, and building your own tracker gives you raw, unfiltered data for analysis, visualization, or alerts.

In this guide, we'll build a Python scraper that pulls live election results and streams them into a dashboard.

Architecture Overview

Our tracker has three components:

  1. Scraper — polls official results pages every 30 seconds
  2. Data store — SQLite for simplicity
  3. Dashboard — real-time updates via WebSocket

Setting Up the Scraper

First, install dependencies:

pip install requests beautifulsoup4 websockets

(sqlite3 ships with Python's standard library, so it doesn't need installing.)

For reliable scraping at scale, use ScraperAPI to handle proxies and CAPTCHAs automatically.

import requests
from bs4 import BeautifulSoup
import sqlite3
import time
import json

API_KEY = "YOUR_SCRAPERAPI_KEY"
BASE_URL = "https://api.scraperapi.com"

def scrape_results(target_url):
    params = {
        "api_key": API_KEY,
        "url": target_url,
        "render": "true"
    }
    response = requests.get(BASE_URL, params=params, timeout=60)
    response.raise_for_status()  # fail fast on HTTP errors instead of parsing an error page
    soup = BeautifulSoup(response.text, "html.parser")

    results = []
    for row in soup.select(".results-table tr"):
        cols = row.select("td")
        if len(cols) >= 3:
            results.append({
                "candidate": cols[0].text.strip(),
                "votes": int(cols[1].text.strip().replace(",", "")),
                "percentage": float(cols[2].text.strip().replace("%", ""))
            })
    return results

Storing Results with Timestamps

def init_db():
    conn = sqlite3.connect("elections.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS results (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            candidate TEXT,
            votes INTEGER,
            percentage REAL,
            timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
        )
    """)
    conn.commit()
    return conn

def store_results(conn, results):
    for r in results:
        conn.execute(
            "INSERT INTO results (candidate, votes, percentage) VALUES (?, ?, ?)",
            (r["candidate"], r["votes"], r["percentage"])
        )
    conn.commit()
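Because every poll appends new timestamped rows rather than overwriting old ones, reading the current standings means taking the newest row per candidate. A small helper (hypothetical, not part of the article's code, but written against the `results` table defined in `init_db()` above) could look like this:

```python
import sqlite3

def latest_snapshot(conn):
    """Return the most recent (candidate, votes, percentage) row per
    candidate, highest vote count first. The max id per candidate is
    treated as the newest snapshot, since ids are auto-incrementing."""
    return conn.execute("""
        SELECT candidate, votes, percentage
        FROM results
        WHERE id IN (SELECT MAX(id) FROM results GROUP BY candidate)
        ORDER BY votes DESC
    """).fetchall()
```

Keeping every snapshot and querying the latest this way is what makes the historical-comparison and alerting ideas later in this post possible.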

The Polling Loop

def run_tracker(url, interval=30):
    conn = init_db()
    print("Election tracker started...")

    while True:
        try:
            results = scrape_results(url)
            store_results(conn, results)
            print(f"Stored {len(results)} candidates at {time.strftime('%H:%M:%S')}")

            # Check for lead changes
            if results:
                leader = max(results, key=lambda x: x["votes"])
                print(f"  Current leader: {leader['candidate']} ({leader['percentage']}%)")
        except Exception as e:
            print(f"Error: {e}")

        time.sleep(interval)

Adding WebSocket Updates

For a live dashboard, broadcast updates via WebSocket:

import asyncio
import websockets

connected_clients = set()

async def broadcast(data):
    if connected_clients:
        message = json.dumps(data)
        await asyncio.gather(
            *[client.send(message) for client in connected_clients]
        )

async def handler(websocket):
    connected_clients.add(websocket)
    try:
        async for _ in websocket:
            pass
    finally:
        connected_clients.discard(websocket)

Handling Anti-Bot Protections

Official election sites often use Cloudflare or similar protections. ScraperAPI handles these automatically, but you can also use ThorData for residential proxies when you need geo-specific IPs.

Rate Limiting and Ethics

Election data is public interest information, but be responsible:

  • Poll no more than once every 30 seconds
  • Respect robots.txt directives
  • Cache results to minimize requests
  • Use official APIs when available (many states provide them)
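The robots.txt point can be checked with the standard library alone. A minimal sketch using `urllib.robotparser` (the user-agent string is an invented example, and this assumes you've already fetched the robots.txt body yourself):

```python
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt, url, user_agent="ElectionTracker/1.0"):
    """Check an already-fetched robots.txt body against a URL
    for the given user agent."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)
```

Running this once at startup, and refusing to poll paths it rejects, keeps the tracker on the right side of the site's stated policy.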

Extending the Tracker

Once you have the core working, consider:

  • Historical comparison — overlay past election results
  • Prediction models — use early returns to project outcomes
  • Alert system — notify when leads change or turnout spikes
  • Multi-source aggregation — combine AP, Reuters, and state feeds
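As a starting point for the alert-system idea, lead changes fall out naturally from comparing two consecutive snapshots. A hypothetical helper, using the same result-dict shape as `scrape_results`:

```python
def detect_lead_change(previous_results, current_results):
    """Compare two snapshots and return (old_leader, new_leader)
    if the candidate with the most votes changed, else None."""
    if not previous_results or not current_results:
        return None  # nothing to compare yet
    old = max(previous_results, key=lambda r: r["votes"])["candidate"]
    new = max(current_results, key=lambda r: r["votes"])["candidate"]
    return (old, new) if old != new else None
```

Calling this inside the polling loop, with the previous iteration's results kept in a local variable, is enough to trigger a notification hook whenever the leader flips.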

Conclusion

Building an election tracker teaches you real-time scraping, data persistence, and live updates — skills that transfer to any monitoring application. The key is reliable data extraction, which tools like ScraperAPI and ScrapeOps make significantly easier.

Full source code is available on GitHub. Happy tracking!
