Yes, you read the title right, and it is what you think. If you've got even minimal knowledge of Docker, you've certainly typed the famous command `docker-compose up`, tossed in a `.yaml` file grouping an Nginx image along with your application's image, and got yourself a running server.
Of course, there would be some configurations left to do for the machine to start receiving requests from the outside world, but we'll focus more on how to get the engine running internally.
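To make the intro concrete, here's a minimal sketch of the kind of compose file described above. The image names, service names, and ports are placeholders for illustration, not taken from any real project:

```yaml
# Hypothetical minimal docker-compose.yaml: one app container behind Nginx.
# Image names and ports are placeholders.
version: "3"
services:
  app:
    image: my-backend-app:latest
    expose:
      - "3000"   # reachable by other services on the internal network only
  nginx:
    image: nginx:latest
    ports:
      - "80:80"  # the only port exposed to the outside world
    depends_on:
      - app
```

With a file like this, `docker-compose up` starts both containers on a shared internal network, and Nginx is the single public entry point.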
What are we up to?
OK, so here's what we're up to: we've got a web application, and we want to deploy and serve the whole setup, hassle-free, with one click (disclaimer: not counting the clicks needed to configure the setup 😄).
As you might already know, to be able to access the app from the outside, we need a "web server". This can be any of (but is not limited to) the following:
- Apache Server
- IIS
- Nginx (we'll choose this one)
I'm going to start by giving you a brief introduction to Nginx and what it can actually do.
What is Nginx
"Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache." Igor Sysoev, the creator of the software, publicly released it in 2004, and thanks to its open-source license, you can certainly use it for free.
In the last paragraph, I introduced a couple of new terms to this peaceful article that you may not have heard of before, or may have heard of without knowing what they actually are. Let's discuss some of them.
Proxies
A Proxy is basically a server that makes requests on behalf of the client (you); in other words, the destination server (for instance, google.com) would receive the request not from you, but from the proxy server you're using.
I could think of multiple reasons why any network setup would benefit from this technology.
So what are the benefits of a Proxy?
- Off the top of my head, a proxy instantly hides your IP address, keeping you anonymous to the receiving end.
- A proxy could also cache the results of your requests, so instead of traveling all the way to the destination server, a proxy in the middle would go: "Hmmm, I think you've requested the weather 5 mins ago, here you go, it is still burning hot at 42°, no need to ask Google again."
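The caching behavior described in that last bullet can be sketched in a few lines of Python. This is a toy illustration of the idea only (the class and function names are made up, and a real proxy would speak HTTP, honor cache headers, and so on):

```python
import time

class CachingProxy:
    """Toy sketch of a caching proxy: serve repeated requests from a
    local cache instead of contacting the upstream server every time."""

    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch          # function that performs the real upstream request
        self.ttl = ttl_seconds      # how long a cached answer stays fresh
        self.cache = {}             # url -> (timestamp, response)

    def get(self, url):
        entry = self.cache.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]         # fresh enough: upstream is never contacted
        response = self.fetch(url)  # cache miss (or stale): ask the real server
        self.cache[url] = (time.time(), response)
        return response
```

Asking for the weather twice within the TTL hits the upstream only once; the second answer comes straight from the cache, just like in the example above.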
Keep in mind, Nginx is not this kind of proxy. Read on...
Reverse Proxies
A Reverse Proxy, on the other hand, is the opposite of a proxy.
In the case of a Proxy, the server doesn't know the client; in the case of a Reverse Proxy, the client doesn't know the server. It is basically the other way around.
So what are the benefits of a Reverse Proxy?
- Load Balancing, which can be configured in multiple ways to ensure that the running servers distribute tasks among themselves in order to serve more users. Suppose Google had two servers but only one responded: the other one could've been serving another user, or it might've been taking a nap. It doesn't matter, as long as the request has been served.
- Caching works the other way around too. Google's Reverse Proxy would go: "Hmmm, I think you've just asked about the weather, nope, still 42°." without even passing the request to the actual Google servers.
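The load-balancing idea from the first bullet can be sketched with the simplest strategy of all, round-robin: each incoming request goes to the next server in a fixed rotation. A tiny Python illustration (the server names are made up; real balancers like Nginx also handle health checks, weights, and more):

```python
from itertools import cycle

# Toy round-robin load balancer: rotate through the available servers,
# handing each incoming request to the next one in line.
servers = ["app1", "app2", "app3"]
rotation = cycle(servers)

def route(request):
    """Pick the server that should handle this request."""
    return next(rotation)

# Six requests get spread evenly: each server handles every third one.
assigned = [route(f"req-{i}") for i in range(6)]
```

Here `assigned` comes out as `["app1", "app2", "app3", "app1", "app2", "app3"]`, which is exactly the even spread we want.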
So the main catch here is that (in most scenarios) a Proxy is installed and utilized by the client, whereas a Reverse Proxy is installed and utilized by the server.
Before we continue, I want to point out that Hussein Nasser made an awesome video explaining the differences between Proxies and Reverse Proxies in an intuitive way, be sure to quickly watch it for an even better grasp.
The final architecture we're after
After that brief intro, I think we're now aware that Nginx will be our server of choice. And what comes free with Nginx, you ask? Yes, Load Balancing. So let me try to capture the architecture we'll be going after in the following diagram:
As you might have figured out already, the architecture basically contains 3 main components:
- The client (the end user)
- A Database (hosted somewhere we don't care about in our context)
- Our Backend Server:
  - It will be running 3 Docker containers
  - Each container holds an instance of our backend application
  - Each app instance is accessible through an internal port number
  - We'll also be running an Nginx server pointing at each of the 3 instances, so that we can access them through a load-balanced behavior (we'll discuss multiple approaches) from outside the Linux machine.
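As a teaser of how Nginx ties those three instances together, here's a hedged sketch of the relevant configuration fragment. The service names and ports are assumptions for illustration; the real configuration will be covered in detail in the next article:

```nginx
# Hypothetical nginx.conf fragment: load-balance across the three
# app containers. Names and ports are placeholders, not final values.
upstream backend {
    server app1:3000;
    server app2:3000;
    server app3:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;  # Nginx picks an upstream server per request
    }
}
```

By default Nginx distributes requests across the `upstream` servers in round-robin fashion, which is one of the load-balancing approaches we'll compare.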
In the upcoming article, I'll talk in detail about Nginx, and how to correctly configure it to our needs. See you on the other side of the world 🚀😄.