Docker Basics: How to connect containers using Docker networks

Introduction

Docker is a wonderful tool, easily extensible to replicate almost any environment across multiple setups. There are a lot of buzzwords about Docker and what it's capable of, but in this session we are going to review building a decentralized architecture with Docker and getting functional with it. A typical setup for this exercise would be separating two different modules of the same application so that they can communicate separately; a fun fact is that with Docker running the show, they could both be connected to the same data source using Docker networking.

Prerequisites

So here's what we'll presume in this article.

For the sake of clarity, we are still going to define some Docker concepts.

What is Docker?

Docker is a technology focused on building container-based architectures to improve the developer's workflow.

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows-based apps, containerized software will always run the same, regardless of the environment.

Setting up Docker and Docker Compose

Docker, being a widely used tool, has a lot of resources related to getting started. Without much ado, I'd highlight a few resources that can help you get started.

For the Linux developers in the house, apart from the docs on Docker's site, these resources ensure Debian-based users get the gist easily and quickly.

  • For Linux users (Debian-based distros in particular: Ubuntu, Debian, Kali, etc.), click here
  • For Windows users, we know you use installation files a lot, so the Docker docs provide good leverage: click here
  • For Mac users, the documentation also did justice to this, and here you go: click here

After installing Docker you'll need docker-compose. Docker for Mac and Docker for Windows already ship with it, so you are good to go; for the Linux users in the house, we have work to do.

  1. Run this command to download the latest version of docker-compose:
$ sudo curl -L https://github.com/docker/compose/releases/download/1.17.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

If you have problems installing with curl, click here.

  2. Apply executable permissions to the binary:
$ sudo chmod +x /usr/local/bin/docker-compose
  3. Test your installation:
$ docker-compose --version
docker-compose version 1.17.0, build 1719ceb

Architecture of a Container

Containers are not as complex as they sound; it turns out they are a pretty simple concept, and so is their architecture. A Docker container is simply a service running on a host setup.

Your containers run on the Docker architecture, using the configuration in the Dockerfile, the docker-compose.yml file, or the image specified in the docker run command. These containers usually have exposed ports if they are to connect to each other.

Your containers are services on their own and can work off each other, using resources from one another via the networks set up between them; these networks are created in the docker-compose file.

Your Dockerfile setup typically sets you up with an image, a profile based on which the container is created. To fully explain this, we'll dockerize a Node application.

Dockerizing a Node app

For this simple setup, we are going to dockerize a Node-based web app to show off the cool nature of Docker.
The code for the project can be found in the reference repository.

Step 1 - Setting up the base app

So first we set up the Express application:

$ npm i -g express-generator # set up the express-generator
$ express # scaffold an express app
$ touch processes.json # create the pm2 process file

The processes.json file is required by pm2, which helps manage the server's processes. Usually, we can run the application using:

$ node ./bin/www

And essentially get the app running, but with pm2 we go a step further, adding load balancing and basically preparing the app to scale.
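
The actual processes.json from the reference repository isn't reproduced in this article, but a minimal sketch of what pm2 expects could look like this (the app name and instance count here are assumptions; the script path is the Express generator's entry point):

{
  "apps": [
    {
      "name": "web",
      "script": "./bin/www",
      "exec_mode": "cluster",
      "instances": 2,
      "env": {
        "NODE_ENV": "production"
      }
    }
  ]
}

With a file like this, pm2 start processes.json launches ./bin/www in cluster mode, which is where the load balancing mentioned above comes from.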

Step 2 - Setting up the Docker image

Next, we set up the Docker image in the Dockerfile at the root of the project:

#Step 1.
FROM node:6.11-wheezy

#Step 2.
LABEL version="1.0"
LABEL description="This is our base docker image"
LABEL maintainer="mozart.osita@gmail.com"

#Step 3.
ENV appDir /var/www/app/current

#Step 4.
ENV NODE_ENV production

#Step 5.
# Set the work directory
RUN mkdir -p /var/www/app/current
WORKDIR ${appDir}

#Step 6.
ADD package.json ./
RUN yarn install --production

RUN yarn global add forever pm2

ADD . /var/www/app/current

EXPOSE 4500

CMD ["pm2", "start", "processes.json", "--no-daemon"]

In this setup, the first part specifies the base image we are building our own image from.

#Step 1.
FROM node:6.11-wheezy

Using the LABEL instructions, we specify other information about the image we are setting up.

#Step 2.
LABEL version="1.0"
LABEL description="This is our base Docker image"
LABEL maintainer="mozart.osita@gmail.com"

After this, we set environment variables: we set our environment to production and set up the work directory in the container using the appDir variable.

#Step 3.
ENV appDir /var/www/app/current

#Step 4.
ENV NODE_ENV production

#Step 5.
# Set the work directory
RUN mkdir -p /var/www/app/current
WORKDIR ${appDir}

Next, we add package.json to the work directory and run the yarn install command with the production flag; we also add our other directories and files into the working directory.

#Step 6.
ADD package.json ./
RUN yarn install --production

RUN yarn global add forever pm2

ADD . /var/www/app/current

After all this, we expose port 4500, which will be used to connect with the outside environment.

EXPOSE 4500

CMD ["pm2", "start", "processes.json", "--no-daemon"]

The CMD instruction after that starts up the server with pm2, a Node-based process manager; the --no-daemon flag keeps pm2 in the foreground so the container keeps running.

Step 3 - Build the image and deploy

After this, we build our image and run it.

$ docker build -t <image name> . #please remember the .(dot)

This runs a process to build your image; after this, you can push your image to Docker Hub to be able to pull it from anywhere.
If you have a Docker Hub account, proceed to log in on your terminal:

$ docker login --username=yourhubusername

Next, you get the image ID:

$ docker images

From the list output, get the ID of your image and tag the image with your repository's name:

$ docker tag bb38976d03cf yourhubusername/reponame:yourtag

Next, you can push this to Docker Hub:

$ docker push yourhubusername/reponame

After this, running your container is a breeze:

$ docker run --rm -it -p <host-port>:<container-port> <image-name>:latest

A container is launched and set.
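
For example, with the port exposed in the Dockerfile above and the tag pushed earlier (both placeholders you'd swap for your own values), the command could look like this:

$ docker run --rm -it -p 4000:4500 yourhubusername/reponame:yourtag

Here 4500 is the port exposed inside the container and 4000 is the port bound on the host machine, so the app becomes reachable at http://localhost:4000.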

Connecting Containers

To connect our container with another container, we can set this up using docker-compose; the fun part is that we can run multiple containers as decentralized parts of the same application. To accomplish this, we'll set up a docker-compose file and build containers from it as services. Using the docker-compose setup, we can define multiple containers as services and link them via each service's name.

Here's a sample docker-compose.yml file

version: '3'

services:
  application:
    image: mozartted/base-node:latest
    ports:
      - "4000:4500"

But we can also connect our container with another via the links tag; let's say we want to have our Node service run alongside a MongoDB service.

So we update the docker-compose configuration file.

version: '3'

services:
  application:
    image: mozartted/base-node:latest
    ports:
      - "4000:4500"
    links:
      - mongo
  mongo:
    image: mongo:latest
    ports:
      - "27018:27017"
    volumes:
      - ./data:/data/db

Using the links tag, we connected the application container (or service) to the mongo service, and with the volumes tag we set up a directory called data in our project folder as the data volume of the mongo container. Thanks to the link, the application's configuration can reach the mongo service using the name mongo as the host address and the exposed port 27017 as the port inside the container.
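
As a quick sketch of what that looks like from the application's side, assuming the app connects through mongoose (the driver and database name here are assumptions, since the article doesn't show the app's data layer), the connection string simply uses the service name as the hostname:

const mongoose = require('mongoose');

// "mongo" resolves to the mongo service's container on the shared network;
// 27017 is the port inside the container, not the 27018 published to the host
mongoose.connect('mongodb://mongo:27017/app')
  .then(() => console.log('connected to the mongo service'))
  .catch(err => console.error('mongo connection failed', err));

The 27018 mapping only matters when you connect from the host machine, for example with a local MongoDB client.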

But this method of connecting containers limits us to a single project set; therefore, we can't connect containers across two different projects.
Using the networks tag, we can set up a network that can be used across different containers and project bases.

version: '3'

services:
  application:
    image: mozartted/base-node:latest
    ports:
      - "4000:4500"
    links:
      - mongo
    networks:
      - backend
  mongo:
    image: mongo:latest
    ports:
      - "27018:27017"
    volumes:
      - ./data:/data/db
    networks:
      - backend
networks:
  backend:
    driver: "bridge"

With this setup, the containers are connected to the backend network; therefore, external containers can also join the backend network to access the services in it.
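
As a sketch, a service in a completely different project's docker-compose.yml could join this network by declaring it as external (note that Compose usually prefixes the network name with the project name, so check docker network ls for the actual name; the service and image names below are placeholders):

version: '3'

services:
  reporting:
    image: yourhubusername/other-service:latest
    networks:
      - backend

networks:
  backend:
    external:
      name: <project_name>_backend
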
To get a list of the available networks, simply run the command:

$ docker network ls

Using the network name, you can connect external containers to the network using the command:

$ docker network connect <network_name> <container_name>

To view the containers with access to the network, simply run the command:

$ docker inspect <network_name>
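
If you only want the names of the connected containers rather than the full JSON, docker inspect also accepts a Go-template --format flag; a quick sketch (the template ranges over the network's Containers map):

$ docker inspect <network_name> --format '{{range .Containers}}{{.Name}} {{end}}'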

If you're looking to follow up on this process, you can find a sample repo of this setup here.

Conclusion

With this, you can set up containers for different projects and connect them, making them use services found in the others; with a bit more configuration, you would be ready to pull off a microservice-based architecture in deployment. Docker is a really great tool, and taking full advantage of what it offers is worthwhile.

Top comments (8)

gsvolt

Greetings Osita,

Reading this in 2019 ;) and just followed steps, and found that prior to the line below:

$ node ./bin/www

We need to issue this line:

$ npm install

Can you update when you get a chance?

Cheers!


Douglas R Andreani

If I set up a mariadb docker and link it to my node application, wherever I need to specify the IP for the server I just add the name of the mariadb docker?

Osita Chibuike

Yeah, the docker setup automatically links it as the container's IP. In your instance the service name would do. For instance, mariadb could serve as the service name.

Douglas R Andreani

I see. pretty powerful indeed

Daniel Nord 

In docker-compose.yml you need to change appplication to application (one "p" too many).

Have a nice evening.

sorcererstone

Hello Osita,
In the above compose file, the "links" option was used. Wasn't the "links" option deprecated by Docker in 2017?

Ben Parish

Thank you. This post has been of enormous help.

Beth Tran

I have to spread my docker containers on to two machines (same network).
Can containers on one machine link to the containers on the other machine?
Thanks