An API gateway is like the front door to a collection of microservices. It's the single point where all outside requests enter your system. I build them in Go because the language gives me the speed and control needed to handle thousands of requests without breaking a sweat. Let me show you how I put one together, piece by piece.
Think of the gateway as a traffic director. A client asks for something, like a user profile. Instead of the client needing to know exactly which server hosts that data, it just asks the gateway. My job is to take that request, figure out which backend service handles user profiles, forward the request, get the response, and send it back. This hides the complexity of the internal network.
Let's start with the core structure. I create a main APIGateway type that holds everything together. It has a router to direct traffic, a registry to know about my services, and slots for all the features I'll add, like rate limiting.
type APIGateway struct {
    router      *mux.Router
    services    *ServiceRegistry
    rateLimiter *RateLimiter
    circuit     *CircuitBreaker
    cache       *ResponseCache
}
The first real job is routing. When a request comes in for /api/users/123, I need to know that this goes to the user-service. I use a library like gorilla/mux to define these paths. I don't hardcode them. Instead, I have a configuration where I register services and the URL patterns they own.
func (gw *APIGateway) RegisterService(service *Service) error {
    gw.services.services[service.Name] = service
    for _, route := range service.Routes {
        // Register a prefix match so the service owns everything under its path.
        gw.router.PathPrefix(route).HandlerFunc(gw.createHandler(service))
    }
    return nil
}
When I register a service, I tell the gateway: "Here's a service called user-service. It lives at http://localhost:8081 and it wants to handle any request that starts with /api/users." The gateway then creates a dedicated handler function for those routes.
The handler function is where the action happens. This function is called for every incoming request on that route. Its job is to apply rules, call the backend, and handle the response. I structure it as a series of steps, like a checklist.
First, I check if the client is sending too many requests too fast. This is rate limiting. I don't want one user or a broken client to overwhelm my backend services. I typically limit by the client's IP address.
clientIP := getClientIP(r)
if !gw.rateLimiter.Allow(clientIP) {
    http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
    return
}
My rate limiter uses a "token bucket" algorithm. Imagine a bucket that holds tokens. Each request takes one token. Tokens refill slowly over time. If a client's bucket is empty, they have to wait. This allows for short bursts of traffic but enforces a steady average limit.
// rate.Limiter comes from golang.org/x/time/rate.
type RateLimiter struct {
    limiters map[string]*rate.Limiter
    mu       sync.RWMutex
}

func (rl *RateLimiter) Allow(key string) bool {
    rl.mu.Lock()
    limiter, exists := rl.limiters[key]
    if !exists {
        limiter = rate.NewLimiter(rate.Limit(100), 150) // 100 req/sec, burst of 150
        rl.limiters[key] = limiter
    }
    rl.mu.Unlock()
    return limiter.Allow()
}
Next, I check the circuit breaker. If the user-service has been failing a lot recently, I don't want to keep hitting it with requests. It's probably down or struggling. The circuit breaker "opens" after too many failures and stops all traffic to that service for a short while. This gives it time to recover, prevents my gateway from wasting resources, and spares users from waiting on requests that are bound to time out.
if gw.circuit.IsOpen(service.Name) {
    http.Error(w, "Service unavailable", http.StatusServiceUnavailable)
    return
}
The circuit breaker keeps a simple count. If failures for a service reach a threshold—say, 5 failures in a row—it opens. After a cooldown period, it lets one request through as a test. If that succeeds, it closes the circuit and lets traffic flow normally again.
func (cb *CircuitBreaker) RecordFailure(service string) {
    cb.mu.Lock()
    defer cb.mu.Unlock()
    cb.failures[service]++
    if cb.failures[service] >= cb.threshold {
        cb.opened[service] = time.Now() // Open the circuit
    }
}
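To complete the picture, here is a minimal sketch of the IsOpen check. It assumes the CircuitBreaker also carries a cooldown duration; that field name and the reset logic are mine, not fixed API.

// IsOpen reports whether the circuit for a service is currently open.
func (cb *CircuitBreaker) IsOpen(service string) bool {
    cb.mu.Lock()
    defer cb.mu.Unlock()
    openedAt, open := cb.opened[service]
    if !open {
        return false
    }
    if time.Since(openedAt) > cb.cooldown {
        // Cooldown elapsed: half-open the circuit and let one probe request through.
        delete(cb.opened, service)
        cb.failures[service] = cb.threshold - 1 // a single new failure re-opens it
        return false
    }
    return true
}

If the probe succeeds, the handler would call a matching RecordSuccess to reset the failure count to zero and fully close the circuit.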
Before I even call a backend, I check the cache. For GET requests, the response might not have changed. If a user asks for product details twice in a minute, I can just send back the first answer I stored. This is incredibly fast and takes load off the backend servers.
if r.Method == http.MethodGet {
    if cached := gw.cache.Get(r.URL.String()); cached != nil {
        w.Header().Set("X-Cache", "HIT")
        w.Write(cached.Data)
        return
    }
}
My cache is a simple map in memory, but I add expiration times to each entry. I also limit the total number of cached items. When the cache is full, I remove the oldest entry to make space.
func (rc *ResponseCache) Set(key string, data []byte, ttl time.Duration) {
    rc.mu.Lock()
    defer rc.mu.Unlock()
    if len(rc.entries) >= rc.maxSize {
        rc.evictOldest() // Remove the oldest entry to make room
    }
    rc.entries[key] = &CacheEntry{
        Data:      data,
        ExpiresAt: time.Now().Add(ttl),
    }
}
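The matching Get is just as small. This sketch assumes a CacheEntry with the Data and ExpiresAt fields used above, and treats an expired entry the same as a miss.

// Get returns a cached response, or nil if it is missing or expired.
func (rc *ResponseCache) Get(key string) *CacheEntry {
    rc.mu.RLock()
    defer rc.mu.RUnlock()
    entry, ok := rc.entries[key]
    if !ok || time.Now().After(entry.ExpiresAt) {
        return nil // miss, or a stale entry the handler should refetch
    }
    return entry
}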
If the request passes all these checks and isn't in the cache, it's time to call the backend service. This is called forwarding or proxying. I take the incoming request, copy its method, headers, and body, and send it to the service's URL.
I do this with a timeout. I never let a request wait forever. If the backend is slow, I cancel the request after my configured timeout—maybe 5 or 10 seconds—and return an error to the client. This is crucial for reliability.
ctx, cancel := context.WithTimeout(r.Context(), service.Timeout)
defer cancel()
backendReq, err := http.NewRequestWithContext(ctx, r.Method, backendURL, r.Body)
I also add retries. Sometimes a network hiccup causes a failure. If a request fails, I might try it one or two more times with a small delay between attempts. I only retry on certain types of errors, like network timeouts, not on "user not found" errors.
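Here is roughly what that forwarding step with retries might look like. The forward name, the three-attempt limit, and the linear backoff are my own choices, and the request body has to be buffered up front so it can be resent on a retry.

// forward proxies the request to the backend, retrying transient network errors.
func (gw *APIGateway) forward(ctx context.Context, r *http.Request, backendURL string, body []byte) (*http.Response, error) {
    var lastErr error
    for attempt := 0; attempt < 3; attempt++ {
        req, err := http.NewRequestWithContext(ctx, r.Method, backendURL, bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        req.Header = r.Header.Clone() // pass the client's headers through to the backend
        resp, err := http.DefaultClient.Do(req)
        if err == nil {
            return resp, nil // the backend answered; 4xx/5xx responses are not retried
        }
        lastErr = err
        time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond) // small delay before the next attempt
    }
    return nil, lastErr
}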
The real power comes from the middleware pipeline. Middleware are small functions that process a request before it reaches the final forwarding step or process the response after. They are like checkpoints on the road.
For example, an authentication middleware checks for a valid API key or JWT token in the request header. A logging middleware records every request for debugging. A transformation middleware might add a standard header to all outgoing requests to the backend.
I chain them together so they run in order. Each middleware function receives the request and the next function in the chain. It can decide to pass the request along, modify it, or stop and send a response right away.
func authenticationMiddleware(next http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        token := r.Header.Get("Authorization")
        if token == "" {
            http.Error(w, "Unauthorized", http.StatusUnauthorized)
            return // Stop here, don't call 'next'
        }
        // A token is present; real validation (e.g. verifying a JWT) would go here.
        next(w, r)
    }
}
In my gateway, I apply a stack of these middlewares to every request. This keeps my core forwarding logic clean. Cross-cutting concerns like auth, logging, and metrics are handled separately.
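The chaining itself can be a simple slice of wrappers. This sketch assumes the gateway also keeps a middlewares slice (not shown in the struct earlier); the Use and wrap names are mine.

type Middleware func(http.HandlerFunc) http.HandlerFunc

// Use appends a middleware to the gateway's chain.
func (gw *APIGateway) Use(m Middleware) {
    gw.middlewares = append(gw.middlewares, m)
}

// wrap applies the chain around a handler so the first registered middleware runs first.
func (gw *APIGateway) wrap(h http.HandlerFunc) http.HandlerFunc {
    for i := len(gw.middlewares) - 1; i >= 0; i-- {
        h = gw.middlewares[i](h)
    }
    return h
}

RegisterService would then register gw.wrap(gw.createHandler(service)) instead of the bare handler.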
Speaking of metrics, I collect data on everything. How many requests per service? What's the response time? How many errors? I store these in simple counters and gauges. Every few seconds, I log them or send them to a monitoring system. This data tells me if a service is getting slow or if the error rate is climbing.
type GatewayMetrics struct {
    requests  map[string]uint64        // Count requests per service
    latencies map[string]time.Duration // Track slowest request
    errors    map[string]uint64
    mu        sync.RWMutex
}
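Feeding those fields is one locked update per request. This RecordRequest helper is a sketch of how the handler might report a completed call.

// RecordRequest updates the per-service counters for one completed request.
func (m *GatewayMetrics) RecordRequest(service string, latency time.Duration, failed bool) {
    m.mu.Lock()
    defer m.mu.Unlock()
    m.requests[service]++
    if latency > m.latencies[service] {
        m.latencies[service] = latency // remember the slowest request seen so far
    }
    if failed {
        m.errors[service]++
    }
}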
I also run background health checks. Every 30 seconds, my gateway sends a GET /health request to each registered backend service. If a service fails to respond with a success code, I mark it as unhealthy in my registry. I can then stop sending live traffic to it, or I can alert an operator. This is how the circuit breaker knows a service might be down.
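A single goroutine is enough for that loop. This is a sketch, assuming the Service type carries an Endpoint and a Healthy flag; it would be started with go gateway.runHealthChecks(30 * time.Second).

// runHealthChecks probes every registered backend on a fixed interval.
func (gw *APIGateway) runHealthChecks(interval time.Duration) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for range ticker.C {
        // A real registry would guard this map with its own lock.
        for _, svc := range gw.services.services {
            resp, err := http.Get(svc.Endpoint + "/health")
            healthy := err == nil && resp.StatusCode < 300
            if resp != nil {
                resp.Body.Close()
            }
            svc.Healthy = healthy // unhealthy services can be skipped or alerted on
        }
    }
}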
Putting it all together, the main function sets up the world. I create the gateway, register my services, add my middleware stack, and start the server.
func main() {
    gateway := NewAPIGateway()

    // Add my middleware chain
    gateway.Use(loggingMiddleware)
    gateway.Use(authenticationMiddleware)

    // Define my backend services
    gateway.RegisterService(&Service{
        Name:     "user-service",
        Endpoint: "http://localhost:8081",
        Routes:   []string{"/api/users"}, // prefix: everything under /api/users
        Timeout:  5 * time.Second,
    })

    log.Fatal(http.ListenAndServe(":8080", gateway.router))
}
When you run this, you have a working, production-style API gateway. It listens on port 8080. Clients talk only to this port. The gateway knows how to reach the user-service, and any other backend you register the same way, such as an order-service or a product-service. It protects them with rate limits, shields the system with circuit breakers, speeds up responses with a cache, and handles common tasks like authentication in one place.
The result is a system that is much easier to manage. You can change, scale, or replace a backend service without the clients ever knowing. You can add security or logging features in one spot instead of a dozen. And because it's written in Go, it handles high traffic with very little resource use, giving you a strong, reliable foundation for your microservices architecture.