
ZeroTrust Architect

Posted on • Originally published at cacheguard.com

Reverse Proxy Internals: Load Balancing Algorithms, Health Checks, and HA with VRRP

A reverse proxy's job is to route client requests to backend servers and return responses. The interesting technical decisions are around how it handles backend failures, how it distributes load, and how it stays available when it itself fails.

Upstream health checks

Before routing a request to a backend, the proxy must know whether the backend is healthy. Health checks run on a configurable interval and mark backends as up or down.

Passive health checks: The proxy observes real responses. If a backend returns 5xx errors or times out above a threshold, it is marked down.
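The bookkeeping behind a passive check can be sketched in a few lines of Python (a hedged illustration, not any particular proxy's implementation; the threshold of 3 consecutive failures is an assumed default):

```python
def observe(backend, status, timed_out=False, fail_threshold=3):
    """Update passive health state from one real response.

    backend is a dict with 'failures' and 'healthy' keys;
    fail_threshold consecutive 5xx/timeout responses mark it down,
    and any successful response resets the counter.
    """
    if timed_out or 500 <= status < 600:
        backend["failures"] += 1
        if backend["failures"] >= fail_threshold:
            backend["healthy"] = False
    else:
        backend["failures"] = 0
        backend["healthy"] = True
```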

Active health checks (Apache mod_proxy with mod_proxy_hcheck, which provides the hcmethod/hcuri/hcinterval parameters):

<Proxy balancer://mycluster>
    BalancerMember http://backend1:8080 hcmethod=GET hcuri=/health hcinterval=5
    BalancerMember http://backend2:8080 hcmethod=GET hcuri=/health hcinterval=5
</Proxy>

The proxy sends GET /health to each backend every 5 seconds. A 2xx response marks the backend healthy; anything else triggers a configurable retry before marking it down.

When a backend is marked down, new requests are routed to the remaining healthy backends. When it recovers, it re-enters rotation — optionally with a warmup period where it receives reduced traffic.
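One pass of the active checker can be sketched as follows (a hedged illustration; `probe` is a hypothetical helper that issues GET /health and returns the HTTP status code):

```python
def check_once(backends, probe, retries=2):
    """One active health-check pass over all backends.

    probe(backend) -> HTTP status code of GET /health (hypothetical
    helper, assumed here). A backend is healthy if any of
    (retries + 1) attempts returns a 2xx; otherwise it is marked down.
    any() short-circuits, so probing stops at the first success.
    """
    for b in backends:
        b["healthy"] = any(
            200 <= probe(b) < 300 for _ in range(retries + 1)
        )
```

A real checker runs this on a timer (every hcinterval seconds in the Apache config above).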

Load balancing algorithms

Round-robin: Requests distributed sequentially across backends. Simple, effective for stateless backends with uniform request cost.

<Proxy balancer://mycluster>
    BalancerMember http://backend1:8080
    BalancerMember http://backend2:8080
    ProxySet lbmethod=byrequests
</Proxy>

Least connections: Route new requests to the backend with fewest active connections. Better for backends with variable request processing time.

ProxySet lbmethod=bybusyness
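The selection step reduces to a minimum over active connection counts. A minimal sketch (ties here go to the earliest backend in the list; real proxies may break ties differently):

```python
def pick_least_connections(backends):
    """Select the healthy backend with the fewest active connections.

    backends: list of dicts with 'healthy' and 'active' keys.
    min() returns the first of several equal minima, giving a
    deterministic tiebreak.
    """
    healthy = [b for b in backends if b["healthy"]]
    return min(healthy, key=lambda b: b["active"])
```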

IP hash: Route requests from the same client IP to the same backend consistently. Provides session affinity without a session cookie, but distributes unevenly when many clients share an IP (e.g., behind corporate NAT). Note that Apache mod_proxy has no built-in IP-hash lbmethod (its values are byrequests, bytraffic, bybusyness, and heartbeat); in nginx the equivalent is ip_hash:

upstream mycluster {
    ip_hash;
    server backend1:8080;
    server backend2:8080;
}
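The core of the algorithm is hashing the client address into a backend index. A minimal sketch (hedged; the MD5 choice and modulo mapping are illustrative, not any particular proxy's scheme):

```python
import hashlib

def pick_by_ip(client_ip, backends):
    """Map a client IP to a backend via a stable hash.

    hashlib.md5 gives the same result across processes and restarts
    (unlike Python's built-in hash(), which is salted per process).
    Caveat: the modulo mapping reshuffles most clients whenever the
    backend count changes; consistent hashing avoids that at the
    cost of complexity.
    """
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```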

Session persistence: the sticky session problem

Many web applications store session state in memory on the backend server. If the proxy routes a user's subsequent requests to a different backend, the session is not found and the user must re-authenticate.

The standard solution is a session cookie that encodes the backend assignment:

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://mycluster>
    BalancerMember http://backend1:8080 route=backend1
    BalancerMember http://backend2:8080 route=backend2
    ProxySet stickysession=ROUTEID
</Proxy>

The proxy stamps the response with a ROUTEID cookie encoding which backend served it. On subsequent requests, it reads the cookie and routes to the same backend.
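The routing decision on each request can be sketched like this (a hedged illustration; the leading '.' matches the cookie format stamped by the config above, and `fallback` stands in for the normal balancing algorithm):

```python
def route_sticky(cookies, backends_by_route, fallback):
    """Route using a ROUTEID cookie when possible.

    cookies: dict of request cookies; the ROUTEID value carries a
    leading '.' as stamped in the Set-Cookie header. fallback() picks
    a backend by the normal balancing algorithm when the cookie is
    missing or the sticky backend is down.
    """
    route_id = cookies.get("ROUTEID", "").lstrip(".")
    backend = backends_by_route.get(route_id)
    if backend is not None and backend["healthy"]:
        return backend
    return fallback()
```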

Edge case: If the sticky backend goes down, the session cookie becomes invalid and the user hits a different backend — losing their session. The correct fix is server-side session sharing (Redis, database), not sticky sessions. Sticky sessions are a workaround for applications that cannot share state.

High availability: VRRP and virtual IP failover

The reverse proxy becomes a single point of failure. High availability requires at least two proxy instances sharing a virtual IP (VIP). VRRP (Virtual Router Redundancy Protocol, RFC 5798) implements this.

One instance is the MASTER, holding the VIP. The other is BACKUP. The MASTER sends VRRP advertisements every second (configurable). If the BACKUP stops receiving advertisements for (3 × advertisement_interval) + skew_time, it takes over the VIP.

# keepalived.conf on MASTER
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secretkey
    }
    virtual_ipaddress {
        203.0.113.10/24      # VIP — clients connect here
    }
}
# keepalived.conf on BACKUP
vrrp_instance VI_1 {
    state BACKUP
    priority 90              # Lower priority = stays backup
    ...
}

Failover happens in just over 3 seconds (3 × 1 s advertisement interval plus the skew time, (256 − priority)/256 of an interval). Client connections to the VIP are disrupted during failover — TCP connections in flight are dropped and must be retried by the client. New connections after failover go to the BACKUP (now MASTER).
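RFC 5798 gives the exact detection time the BACKUP waits before taking over; a small worked sketch:

```python
def master_down_interval(advert_int, backup_priority):
    """Failover detection time on the BACKUP, per RFC 5798.

    Skew_Time = ((256 - priority) * advert_int) / 256
    Master_Down_Interval = 3 * advert_int + Skew_Time
    Higher-priority backups wait less, so the best candidate wins.
    """
    skew_time = (256 - backup_priority) * advert_int / 256
    return 3 * advert_int + skew_time

# With advert_int = 1 s and a BACKUP at priority 90:
print(round(master_down_interval(1.0, 90), 3))  # → 3.648
```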

CacheGuard implements its reverse proxy (Apache mod_proxy) with load balancing and a native HA mode for multi-appliance deployments — VRRP-based failover is handled automatically through the web interface without manual keepalived configuration.

https://www.cacheguard.com/what-is-a-reverse-proxy/


Originally published on the CacheGuard Blog. CacheGuard is free and open source — GitHub.
