Welcome back to the series. In the last article we discussed several approaches to configuring Nginx for web applications and clusters. Now that we have a good grasp of that, let's take it one step further: containerizing a configured Nginx instance that points at our cluster of backend containers, and deploying the full setup with a single command.
Let's start with the Docker Compose file. I'll be using the published strm/helloworld-http image as the web application for our project:
```yaml
version: '3'
services:
  app1:
    image: strm/helloworld-http
    container_name: hello_1
  app2:
    image: strm/helloworld-http
    container_name: hello_2
  app3:
    image: strm/helloworld-http
    container_name: hello_3
  nginxserver:
    build: ./Server
    ports:
      - "8080:8080"
    container_name: server
```
In the file above, I created three containers from the image strm/helloworld-http and gave each one a friendly name, so that Nginx can reach them easily through Docker's built-in DNS lookup. I also added the server container, pointing its build at its own Dockerfile location and giving it a name as well.
Now, onto Nginx's Dockerfile:
```dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
```
The whole point of this file is to copy the configuration into the image automatically at build time: /etc/nginx/nginx.conf is the path where Nginx reads its main configuration, and the COPY instruction overwrites it with the file from the local machine.
Finally, let's see how the .conf file would look:
```nginx
events { }

http {
    upstream apps {
        server hello_1:80;
        server hello_2:80;
        server hello_3:80;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://apps/;
        }
    }
}
```
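By default, Nginx distributes requests across the servers in an upstream block in round-robin order. As a rough illustration of that selection logic (a sketch, not Nginx's actual implementation), here is what the rotation looks like in Python:

```python
from itertools import cycle

# The three backends named in the upstream block above.
backends = ["hello_1:80", "hello_2:80", "hello_3:80"]

# Round-robin: each request goes to the next server in the
# list, wrapping back to the start after the last one.
picker = cycle(backends)

def next_backend():
    return next(picker)

# Five consecutive requests cycle through the pool and wrap around.
print([next_backend() for _ in range(5)])
# → ['hello_1:80', 'hello_2:80', 'hello_3:80', 'hello_1:80', 'hello_2:80']
```

Nginx also supports other balancing strategies (such as least_conn or ip_hash), but with no directive specified, round-robin is what our upstream block gets.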
Important Docker 🐳 Notes:
We used the container names we set earlier in the Compose file to address the upstream servers. This works because Docker automatically creates a DNS record for each container, mapping its name to its IP address, so you never have to keep track of the IP addresses yourself:
| Container Name | IP Address |
| --- | --- |
| hello_1 | 172.24.0.3 |
| hello_2 | 172.24.0.4 |
| hello_3 | 172.24.0.6 |
Another thing to note is that we did not do any port mapping for our three apps. I skipped it on purpose, and here is why:
Reason #1
Each container already exposes port 80 internally. An exposed port is reachable from within the container's network, which is the network Docker Compose creates automatically when we bring the docker-compose file up.
Reason #2
Mapping a container's port means we can access it from the host machine, aka localhost. So technically, if we created a container this way: docker run -p 1111:80 strm/helloworld-http, we could access it through localhost:1111.
This is all fine if you are accessing the container from the host machine, namely your machine, the one with Docker installed, but accessing the container from another container is a whole other story.
If you opened a bash terminal inside the Linux container running Nginx and tried to reach localhost:1111 (with curl, for instance), you'd get a nasty connection error, because from within the container, localhost or 127.0.0.1 means the container itself, not the host machine. You'd have to figure out the host's IP address as seen from inside Nginx's container in order to reach the app on port 1111.
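To make the "localhost means the container itself" point concrete, here is a small self-contained Python sketch that involves no Docker at all. Port 1111 is just the example port from above; the point is that a TCP connection to a local port with no listener simply fails, which is exactly what happens when Nginx's container tries 127.0.0.1:1111 while the port mapping lives on the host:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True only if something is actually listening at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, or unreachable
        return False

# Inside the Nginx container, 127.0.0.1 is the container itself, so a port
# published only on the host (1111 in our example) has no listener there
# and this check fails, exactly like the curl attempt described above.
print(can_connect("127.0.0.1", 1111))
```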
We can skip all that hassle and just use the exposed port together with the container hostname that Docker's DNS resolves for us.
If you've made it this far, I've got some good news for you: we are finally about to run our cluster. Let's group our files in a single directory:
```
|-- HelloNginx
|   |-- docker-compose.yml
|   |-- Server
|   |   |-- nginx.conf
|   |   |-- Dockerfile
```
... fire up your terminal, head to the root folder of the project, /HelloNginx, where the compose file lives, and run this command:

```shell
foo@bar:~$ docker-compose up
```
... now head to the Nginx gateway URL, localhost:8080 (remember, we set port 8080 in the nginx.conf file), and you'll be welcomed by the first server in the cluster. Keep refreshing, and after each refresh the page shows the name of the container in the cluster that handled the request.
You can take this full setup anywhere, maybe to a production machine too. Just make sure to either copy your source code and build your images on that machine, or push your Docker images to a remote repository accessible from that machine's Docker.
Thanks a lot for taking the time to read through this series, I enjoyed writing and researching about this topic so much and I hope you enjoyed it too. See you soon π.