System Design 101: Building a Scalable URL Shortener (Bitly Style)

A URL shortener seems like a weekend project: take a long URL, give it a random ID, and save it to a database. Easy, right?
But what if you have 100 million requests per day? What if you need the redirection to happen in under 10 milliseconds? What if you run out of IDs?
This is where "coding" ends and "system design" begins. Here is my blueprint for a production-grade URL shortener.
1. The Core Requirement: Shortening Logic
We need to map a long URL to a short string. A standard hash like MD5 won't do: its 128-bit digest is 32 hex characters, far too long for a short link, and truncating it invites collisions. Instead, we assign each URL a unique integer ID and encode it with Base 62 Encoding.
Characters: [a-z] + [A-Z] + [0-9] = 26 + 26 + 10 = 62 characters.
The Math: A 7-character string gives us 62^7 (about 3.5 trillion) unique IDs. That is more than enough for several decades of use.
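Here's a minimal sketch of the encoding in Python. The alphabet ordering is an arbitrary choice on my part; any fixed ordering of the 62 characters works, as long as encode and decode agree on it.

```python
import string

# 26 lowercase + 26 uppercase + 10 digits = 62 characters
ALPHABET = string.ascii_lowercase + string.ascii_uppercase + string.digits

def encode_base62(n: int) -> str:
    """Convert a non-negative integer ID to a Base62 string."""
    if n == 0:
        return ALPHABET[0]
    chars = []
    while n > 0:
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

def decode_base62(s: str) -> int:
    """Convert a Base62 string back to the integer ID."""
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n

# Round-trip check: ID 125 encodes to "cb" with this alphabet
code = encode_base62(125)
assert decode_base62(code) == 125
```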
2. Generating Unique IDs
How do we ensure two people don't get the same ID?
Option A: Use an auto-incrementing SQL ID and convert it to Base 62. Simple, but every write funnels through a single counter, and sequential IDs are easy to guess.
Option B (The Scalable Way): Use a Range Handler (or Ticket Server). We use a distributed service like ZooKeeper to hand out "ranges" of IDs to different application servers. Server A gets 1 to 1,000, Server B gets 1,001 to 2,000. Each server then issues IDs from its range in memory, calling the coordinator only when the range runs out. This avoids database locks and prevents collisions (sketched below).
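Below is a sketch of the per-server side of this idea. The coordinator is faked with a local counter so the example is self-contained; in production, fetch_next_range would be a network call to ZooKeeper or a ticket server, and everything else stays the same.

```python
import itertools
import threading

RANGE_SIZE = 1000

# Stand-in for the coordinator (ZooKeeper / ticket server). It atomically
# hands out disjoint ranges: (1, 1000), then (1001, 2000), and so on.
_range_counter = itertools.count(0)

def fetch_next_range() -> tuple[int, int]:
    block = next(_range_counter)
    start = block * RANGE_SIZE + 1
    return start, start + RANGE_SIZE - 1

class IdAllocator:
    """Per-application-server allocator. It only touches the coordinator
    once per RANGE_SIZE IDs, so the hot path never takes a database lock."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._next, self._end = fetch_next_range()

    def next_id(self) -> int:
        # Only a local lock is needed: other servers own disjoint ranges,
        # so cross-server collisions are impossible by construction.
        with self._lock:
            if self._next > self._end:  # range exhausted, reserve a new one
                self._next, self._end = fetch_next_range()
            nid = self._next
            self._next += 1
            return nid

allocator = IdAllocator()
print(allocator.next_id())  # 1
```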
3. High-Performance Redirection
The most frequent operation is a Read (redirecting).
When a user clicks bit.ly/xyz, the system looks up the long URL. To make this lightning-fast:
Read-Through Cache: We store the most popular 20% of URLs in Redis. Per the 80/20 rule, that small slice serves the bulk of the traffic (see the sketch after this list).
Database: We use NoSQL (MongoDB or Cassandra) or a highly indexed Postgres instance. Since we don't need complex relations, a key-value store is often faster for simple lookups.
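A read-through lookup might look like the following sketch, using the redis-py client. The key prefix, the TTL, and the db_get_long_url placeholder are my own choices for illustration, not fixed parts of the design.

```python
import redis  # redis-py client; assumes a Redis instance on localhost

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 24 * 3600  # keep hot links for a day; tune to taste

def db_get_long_url(short_id: str) -> str | None:
    """Placeholder for the key-value store lookup (Cassandra, Mongo,
    or an indexed Postgres table)."""
    ...

def lookup_long_url(short_id: str) -> str | None:
    """Read-through cache: try Redis first, fall back to the database,
    and populate the cache on a miss so the next hit is fast."""
    long_url = cache.get(f"url:{short_id}")
    if long_url is not None:
        return long_url                    # cache hit: sub-millisecond
    long_url = db_get_long_url(short_id)   # cache miss: one DB round trip
    if long_url is not None:
        cache.set(f"url:{short_id}", long_url, ex=CACHE_TTL_SECONDS)
    return long_url
```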
4. The API Flow
POST /api/v1/shorten: Receives long URL -> Generates ID -> Stores in DB -> Returns short link.
GET /{shortID}: Checks Redis -> If not found, checks DB -> Returns 301 Redirect to the long URL. (A 301 is cached by browsers, which saves server load but hides repeat clicks; use a 302 if you need per-click analytics.)
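To make the flow concrete, here's a Flask sketch wiring both endpoints to the pieces above. The route shapes follow the spec; encode_base62, allocator, and lookup_long_url come from the earlier sketches, and db_save and the sho.rt domain are hypothetical stand-ins.

```python
from flask import Flask, jsonify, redirect, request

app = Flask(__name__)
BASE = "https://sho.rt"  # hypothetical short domain

def db_save(short_id: str, long_url: str) -> None:
    """Placeholder for the key-value store write."""
    ...

@app.post("/api/v1/shorten")
def shorten():
    long_url = request.get_json()["url"]
    # encode_base62 and allocator are defined in the earlier sketches
    short_id = encode_base62(allocator.next_id())
    db_save(short_id, long_url)
    return jsonify({"short_url": f"{BASE}/{short_id}"}), 201

@app.get("/<short_id>")
def resolve(short_id: str):
    long_url = lookup_long_url(short_id)  # Redis first, then DB
    if long_url is None:
        return jsonify({"error": "not found"}), 404
    # 301 lets browsers cache the hop; swap to 302 for accurate analytics
    return redirect(long_url, code=301)
```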
5. Cleaning Up
URLs shouldn't live forever. We add an expiration_date column. A background "Cleanup Service" runs during low-traffic hours, deleting expired links in batches to free up storage (sketched below).
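The cleanup pass can be a batched DELETE run on a schedule. The sketch below uses sqlite3 so it is self-contained; the table and column names (urls, expiration_date stored as epoch seconds) are assumptions, and on Postgres you would batch on an indexed id column rather than rowid.

```python
import sqlite3
import time

def purge_expired(conn: sqlite3.Connection, batch_size: int = 10_000) -> None:
    """Delete expired rows in small batches so the job never holds a
    long-running lock, even during its low-traffic window.
    Assumed schema: urls(short_id, long_url, expiration_date)."""
    while True:
        cur = conn.execute(
            "DELETE FROM urls WHERE rowid IN ("
            "  SELECT rowid FROM urls"
            "  WHERE expiration_date < ? LIMIT ?)",
            (int(time.time()), batch_size),
        )
        conn.commit()
        if cur.rowcount < batch_size:
            break  # nothing (or almost nothing) left to purge
```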
Hi, I'm Frank Oge. I build high-performance software and write about the tech that powers it. If you enjoyed this, check out more of my work at frankoge.com.
