DEV Community

Bradley Black


Fixing Your Dependency Issues w/ Docker

Great software relies on other programs to do much of the heavy lifting. Utilizing Node libraries, database management services, and APIs makes for apps that require interconnectivity between a number of very powerful components.

The modular nature of these services allows developers to build tech stacks with great speed and room for experimentation. This web of dependencies becomes as critical to an application's success as the code itself. However, dependencies pose a challenge when a program is passed between machines with different library versions, different hardware, different network settings, and so on.

All of these unique factors that make up a coding environment can lead to programs that run smoothly in some settings and crash and burn in others. Ultimately, there's no guarantee of how a program's dependencies will behave outside of the initial coding environment, and when issues do arise, the debugging process can be time-consuming and frustrating. One tool to combat this is known as containerization.

Getting to Know Containers

Containers are a means for lightweight software virtualization. Their popularity is an acknowledgement that dependency libraries are foundational building blocks of modern apps. Containers are airtight coding environments. They include code, dependencies, and all necessary configuration information to successfully launch a program. Since containers are self-contained, they're also super portable, and they offer a much needed solution to the complexity created by dependencies.

Docker Containers, Images, and Registries

Docker is one of the most popular container platforms. There are three keys to the Docker system:

  1. Containers- Containers are isolated, live development environments that virtualize the host operating system. While similar to traditional virtual machines, containers share the host's operating system kernel instead of launching a complete, heavyweight guest operating system for every instance.

  2. Images- An image is a container in a static, read-only state. Images are easily transferable, and can be uploaded to a registry, stored locally, or shared with others on a development team. These files are multi-layered and contain all the instructions required to launch a container.

  3. Registry- An archive of images. Docker offers a public library where programmers can publish their images and download the work of others.
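To make the image layers concrete, here is a minimal Dockerfile, the file Docker reads to build an image. This is a sketch: the Node.js base image and the `index.js` entry point are assumptions for the example, not from the article.

```dockerfile
# Each instruction below becomes one layer of the resulting image.
FROM node:20-alpine       # start from a public base image pulled from the registry
WORKDIR /app              # set the working directory inside the container
COPY package*.json ./     # copy dependency manifests first (better layer caching)
RUN npm install           # install dependencies into the image
COPY . .                  # copy the rest of the application code
CMD ["node", "index.js"]  # default command to run when a container starts
```

Building it with `docker build -t my-app .` produces an image; `docker run my-app` then launches a container from that image.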

Getting Started

Docker can be run directly from a terminal via the command line. There is also a stand-alone desktop application. Each gives programmers access to local and online libraries of images from which they can create new containers.

The Docker daemon is the workhorse of the Docker engine. Interactions with either the command line or the desktop application kick the daemon into action. The daemon handles packaging containers into images, along with image manipulation and management. A RESTful API sits between the client and the daemon, since the daemon is also responsible for pushing and pulling images to and from remote registries.
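A quick way to see this client/daemon split for yourself (a sketch, assuming Docker is installed and the daemon is running; the socket path is the default on Linux and macOS):

```shell
# The CLI is just a client; `docker version` reports both sides,
# printing a "Client" section and a "Server" (daemon) section.
docker version

# The same RESTful API the CLI uses can be queried directly
# over the daemon's Unix socket:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

The `curl` call returns a JSON array describing running containers, the same data the CLI formats for you.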

Once installed, running Docker is as simple as launching any other program in a bash terminal. For example, if you're working on a Mac, you can create a Linux container by running the following command:

docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
Digest: sha256:b4f9e18267eb98998f6130342baacaeb9553f136142d40959a1b46d6401f0f2b
Status: Downloaded newer image for ubuntu:latest

Welcome to the command line of your new Ubuntu container, which has been launched from a now locally stored Ubuntu image. In the terminal output above, you can see how this process begins with the daemon looking locally for the image you've requested; if it isn't there, the daemon pulls it from the Docker registry. The container's ID also appears as the hostname in the new shell prompt. Running multiple Ubuntu containers in multiple terminals is as simple as repeating the same command, but now they will launch from the image file stored locally on your machine. Each container gets a unique ID and a randomly generated name, as you can see using the list-container command:

docker container ls --all
CONTAINER ID   IMAGE     COMMAND       CREATED          STATUS                        PORTS     NAMES
fb7481161193   ubuntu    "bash"        5 minutes ago    Exited (0) 12 seconds ago               sweet_northcutt
40fefcb85c40   ubuntu    "/bin/bash"   5 minutes ago    Exited (0) 5 minutes ago                hardcore_thompson
f37912e87219   ubuntu    "bash"        12 minutes ago   Exited (130) 12 minutes ago             wizardly_agnesi
2000232fa0cf   ubuntu    "bash"        15 minutes ago   Exited (0) 15 minutes ago               recursing_shtern
127e12f73185   ubuntu    "bash"        16 minutes ago   Exited (0) 15 minutes ago               frosty_torvalds
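Those exited containers stick around until you remove them. A few housekeeping commands are worth knowing (a sketch; the container name below is one of the generated names from the listing above, and `mybox` is an arbitrary name chosen for the example):

```shell
docker rm sweet_northcutt                  # remove a single stopped container by name
docker container prune                     # remove all stopped containers at once
docker run -it --name mybox ubuntu bash    # pick your own name instead of a generated one
docker rmi ubuntu                          # remove the locally cached image itself
```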


The speed and ease at which a programmer can access and share fully realized development environments is why containerization has become a standard practice in professional development. Docker makes quick work of dependency management, and frees programmers from the tedium of resolving why a build works on one machine but not others.
