
Amir Keshavarz

Originally published at Medium

How to set up an NGINX reverse proxy cluster with a shared cache

What is NGINX?

NGINX is a web server, load balancer, and reverse proxy that uses an event-driven model to handle incoming requests instead of a costly thread-per-connection model.

Its rise in recent years has made NGINX the de facto standard for anyone who needs high performance and more.

What is the problem?

Scaling edge reverse proxies can be a little tricky if you're not familiar with them. Problems appear when you need lots of storage to cache files: there is a limit to how far you can scale your hardware vertically, and vertical scaling isn't an elegant solution anyway.

As you may know, NGINX instances don't play well together when pointed at shared storage: each instance keeps its own in-memory index of the cache, so they can't safely share the same cache directory (and shared storage brings latency issues of its own).

Since these are edge servers, there is no layer in front of them where we could apply consistent hashing to send each request to the server that already has its response cached.


Alternatively, we can use a two-stage cluster design recommended by the NGINX folks themselves. Look at the diagram below to understand it better.

Combining Load Balancers and Cache Servers in a two-stage NGINX cluster

In this architecture, each server runs two NGINX instances: one acting as a simple load balancer and the other as a cache server. When a client sends an HTTP request, it first hits a load balancer (it doesn't matter which one). The load balancer computes a consistent hash from a key (usually the URI combined with other variables) and uses that hash to proxy the request to the same cache server every time. The cache server then serves the response from local storage on a cache hit, or proxies the request to the origin server on a miss.
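The routing step on the load-balancer tier maps to NGINX's `hash ... consistent` upstream directive. A minimal sketch (the upstream addresses are hypothetical; the key here is the one suggested in the NGINX docs):

```nginx
upstream cache_servers {
    # Consistent (ketama) hashing: the same key always maps to the
    # same cache server, so each URI is cached in exactly one place
    # and adding/removing a server only remaps a fraction of keys.
    hash $scheme$proxy_host$request_uri consistent;

    server 10.0.0.1:8080;   # cache instance on edge server 1
    server 10.0.0.2:8080;   # cache instance on edge server 2
    server 10.0.0.3:8080;   # cache instance on edge server 3
}
```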

For people who understand code and config files better than a bunch of paragraphs, I have prepared a configuration for a cluster consisting of 3 servers.

Example Configuration:

You should modify the config file according to your cluster setup and run it in every edge server.
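A sketch of what such a configuration could look like for a three-server cluster. The IP addresses, ports, cache path, and origin hostname are assumptions; adjust them to your own environment. Both roles are shown together for brevity, but per the architecture above they would run as two separate NGINX instances on each edge server:

```nginx
# ---- Load balancer instance (listens on port 80) ----
upstream cache_servers {
    # Route each request to the same cache server every time.
    hash $scheme$proxy_host$request_uri consistent;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://cache_servers;
    }
}

# ---- Cache server instance (listens on port 8080) ----
# On-disk cache: 10 MB of keys in shared memory, up to 10 GB of data,
# entries evicted after 60 minutes without a hit.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge_cache:10m
                 max_size=10g inactive=60m use_temp_path=off;

server {
    listen 8080;
    location / {
        proxy_cache edge_cache;
        proxy_cache_valid 200 301 302 10m;   # cache successful responses for 10 min
        proxy_pass http://origin.example.com;  # hypothetical origin server
    }
}
```

On each edge server, only that server's own cache instance address changes role from "peer" to "local"; the upstream list itself stays identical everywhere, which is what keeps the hash ring consistent across the cluster.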


For a more detailed read on this topic, please refer to Shared Caches with NGINX Plus Cache Clusters, Part 1 and Shared Caches with NGINX Plus Cache Clusters, Part 2.
