🧠 Go-URL: Building a Lightweight, Scalable URL Shortener in Go
Sometimes small projects end up teaching you the biggest lessons.
Go-URL started as a simple weekend idea — “let’s build a basic URL shortener in Go.”
But as I started designing it, I realized it was a perfect opportunity to explore system design, caching, persistence, and scalability — all within a compact, real-world problem space.
🚀 The Idea
The goal was simple:
Build a fast, reliable URL shortener with:
- Clean architecture
- Stateless services
- High scalability
- Low latency
And do it all using technologies I love working with: Go, Redis, PostgreSQL, and Kubernetes.
⚙️ Tech Stack Overview
| Component | Purpose |
|---|---|
| Go (Golang) | Core backend logic, API server, concurrency handling |
| Redis | Fast caching for redirect lookups |
| PostgreSQL | Persistent storage for original and shortened URLs |
| Kubernetes | Deployment, load balancing, auto-scaling |
| Docker | Containerization for portability |
🧩 System Design
1. Stateless API Layer
The API is built using Go’s net/http package and structured around clean architecture principles — keeping business logic, repository layers, and handlers decoupled.
This design allows horizontal scaling — multiple instances can run behind a load balancer with no shared state.
2. Redis for Speed
URL redirection is read-heavy.
Instead of querying PostgreSQL for every hit, Redis caches the mappings:
`short_url -> original_url`
Whenever a new short URL is created, it’s stored in both Redis and PostgreSQL.
On lookups, if Redis misses, the record is fetched from the DB and re-cached — a classic read-through caching pattern.
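The create and lookup flows above can be sketched as follows. To keep the example self-contained I stand in for Redis and PostgreSQL with plain maps behind the same `Get`/`Set` shape a real client would have; the function names are mine, not Go-URL's.

```go
package main

import (
	"errors"
	"fmt"
)

var errMiss = errors.New("not found")

// Cache stands in for Redis, Store for PostgreSQL. In the real service
// these would wrap a Redis client and database/sql respectively.
type Cache struct{ m map[string]string }

func (c *Cache) Get(k string) (string, error) {
	if v, ok := c.m[k]; ok {
		return v, nil
	}
	return "", errMiss
}
func (c *Cache) Set(k, v string) { c.m[k] = v }

type Store struct{ m map[string]string }

func (s *Store) Get(k string) (string, error) {
	if v, ok := s.m[k]; ok {
		return v, nil
	}
	return "", errMiss
}
func (s *Store) Set(k, v string) { s.m[k] = v }

// Create writes the mapping to both layers, matching the
// "stored in both Redis and PostgreSQL" flow.
func Create(c *Cache, s *Store, code, url string) {
	s.Set(code, url)
	c.Set(code, url)
}

// Lookup is the read-through path: cache first, durable store on a
// miss, then re-cache so the next hit skips the DB entirely.
func Lookup(c *Cache, s *Store, code string) (string, error) {
	if url, err := c.Get(code); err == nil {
		return url, nil // cache hit
	}
	url, err := s.Get(code)
	if err != nil {
		return "", err // unknown short code
	}
	c.Set(code, url) // repopulate the cache
	return url, nil
}

func main() {
	cache := &Cache{m: map[string]string{}}
	store := &Store{m: map[string]string{}}
	Create(cache, store, "abc123", "https://example.com")

	delete(cache.m, "abc123") // simulate a Redis eviction or restart
	url, _ := Lookup(cache, store, "abc123")
	fmt.Println(url) // served from the durable store, then re-cached
}
```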
3. PostgreSQL for Durability
Redis is blazing fast but volatile.
To ensure persistence, PostgreSQL stores all URL mappings permanently, complete with timestamps and access logs.
A background worker syncs write operations asynchronously to maintain API responsiveness.
4. Kubernetes for Scalability
I containerized the service with Docker and deployed it on Kubernetes.
K8s handles:
- Load balancing across pods
- Auto-scaling based on CPU/memory metrics
- Rolling updates without downtime
This setup ensures Go-URL can handle sudden traffic spikes smoothly.
🧠 Key Challenges and Learnings
**Balancing cache and consistency**
Designing the Redis sync flow without introducing stale data was tricky. A hybrid TTL-based strategy worked best.

**Async syncs without blocking requests**
I built a simple worker queue in Go to offload DB writes, improving response times significantly.

**Observability**
Added basic logging and metrics using Go's expvar package and structured logs; it helped a lot during debugging and load testing.

**Scaling on Kubernetes**
Watching pods auto-scale under simulated load was incredibly satisfying: a real-world validation of clean design.
💡 Performance Results
After load testing with hey and wrk, the results were impressive:
- Median response time: <10ms (with Redis warm)
- Sustained 10K+ RPS across multiple instances
- Zero downtime during redeployments via Kubernetes rolling updates
✨ What I Took Away
Go-URL reminded me that even the simplest systems can be architecturally deep if you think about them right.
It taught me:
- How clean architecture helps with maintainability.
- Why caching layers are crucial for scalability.
- How Kubernetes orchestration simplifies deployment complexity.
- And most importantly — how small design decisions impact performance at scale.
🔗 Check it Out
You can explore the project here:
👉 GitHub: Go-URL
If you’re into backend systems, Go, or distributed design — I’d love to hear how you would scale or extend this system further.