10 Myths About Docker That Stop Developers Cold
Derick Bailey Feb 10, 2017
I was discussing the growth of Docker recently, and I kept hearing bits of information that didn’t quite seem right to me.
“Docker is just inherently more enterprise”
“it’s only tentatively working on OS X, barely on Windows”
“I’m not confident I can get it running locally without a bunch of hassle”
… and more
There are tiny bits of truth in these statements (see #3 and #5, below, for example), but tiny bits of truth often make it easy to overlook what isn’t true, or is no longer true.
And with articles that do nothing more than toss around jargon, require inordinate numbers of frameworks, and discuss how to manage 10-thousand-billion requests per second with only 30-thousand containers, automating 5-thousand microservices hosted in 6-hundred cloud-based server instances…
Well, it’s easy to see why Docker has a grand mythology surrounding it.
It’s unfortunate that the myths and misinformation persist, though. They rarely do more than stop developers from trying Docker.
So, let’s look at the most common myths – some that I’ve seen, and some I’ve previously believed – and try to find the truth in them, as well as solutions if there are any to be found.
Myth #10: I can’t develop with Docker…
because I can’t edit the Dockerfile
As a developer, I have specific needs for tools and environment configuration when working. I’ve also been told (rightfully so) that I can’t edit the production Dockerfile to add the things I need.
The production Docker image should be configured for production purposes, only.
So, how do I handle my development needs, with Docker? If I can’t edit the Dockerfile to add my tools and configuration, how am I supposed to develop apps in Docker, at all?
I could copy & paste the production Dockerfile into my own, and then modify that file for my needs. But, we all know that duplication is the root of all evil. And, we all know that duplication is the root of all evil. Because duplication is the root of all evil.
Rather than duplicating the Dockerfile and potentially causing more problems, a better solution is to use the Docker model of building images from images.
I’m already building my production application image from a base like “node:6”. So, why not create a “dev.dockerfile” and have it build from my application’s production image as its base?
# Dockerfile
FROM node:6
# ... production configuration

$ docker build -t myapp .

# dev.dockerfile
FROM myapp
# ... development configuration

$ docker build -t myapp:dev -f dev.dockerfile .
Now I can modify the dev.dockerfile to suit my development needs, knowing that it will use the exact configuration from the production image.
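As a quick sketch of what that might look like – assuming, for example, that I want nodemon as a development-only tool, and that “app.js” is my app’s entry point (both are just examples, not requirements):

# dev.dockerfile
FROM myapp

# development-only tooling, layered on top of the production image
RUN npm install -g nodemon

# override the production command with a file-watching one
CMD ["nodemon", "app.js"]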
Want to See a Dev Image in Action?
Myth #9: I can’t see anything in this container
because I can’t see into my container, at all!
Docker is application virtualization (containerization), not a full virtual machine to be used for general computing purposes.
But a developer often needs to treat a container as if it were a virtual machine.
I need to get logs (beyond the simple console output of my app), examine debug output, and verify that the file and system changes I’ve put into the container are doing what I expect.
If a container isn’t a virtual machine, though, how do I know what’s going on? How do I see the files, the environment variables, and the other bits that I need, inside the container?
While a Docker container may not technically be a full virtual machine, it does run a Linux distribution under the hood.
Yes, this distribution may be a slimmed down, minimal distribution such as Alpine Linux, but it will still have basic shell access among other things. And having a Linux distribution as the base of a container gives me options for diving into the container.
There are two basic methods of doing this, depending on the circumstances.
Method 1: Shell Into A Running Container
If I have a container up and running already, I can use the “docker exec” command to enter that container, with full shell access.
$ docker exec -it mycontainer /bin/sh
Once I’ve done this, I’ll be inside the container as if I were shelled into any Linux distribution.
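Once inside, ordinary Linux commands work as expected. For example:

# look around the file system
$ ls -l

# check the environment variables
$ env

# see which distribution the image is built on
$ cat /etc/os-release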
Method 2: Run A Shell as the Container’s Command
If I don’t have a container up and running – and can’t get one running – I can run a new container from an image, with the Linux shell as the command to start.
$ docker run -it myapp /bin/sh
Now I have a new container that runs with a shell, allowing me to look around easily.
Want to See a Shell in Action?
Myth #8: I have to code inside the Docker container…
and I can’t use my favorite editor?!
When I first looked at a Docker container, running my Node.js code, I was excited about the possibilities.
But that excitement quickly diminished as I wondered how I was supposed to move edited code into the container, after building an image.
Was I supposed to re-build the image every time? That would be painfully slow… and not really an option.
Ok, should I shell into the container to edit the code with vim?
But, if I wanted to use a better IDE / editor, I wouldn’t be able to. I’d have to use something like vim all the time (and not my preferred version of vim).
If I only have command-line / shell access to my container, how can I use my favorite editor?
Docker allows me to mount a folder from my host system into a target container, using the “volume mount” options.
$ docker run -v /dev/my-app:/var/app myapp
With this, the container’s “/var/app” folder will point to the local “/dev/my-app” folder. Editing code in “/dev/my-app” – with my favorite editor, of course – will change the code that the container sees and uses.
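For example, combining the volume mount with the dev image from Myth #10:

$ docker run -it -v /dev/my-app:/var/app myapp:dev

My editor works against the local folder, and the container picks up every change as it happens.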
Want to See Editing in a Mounted Volume, in Action?
Myth #7: I have to use a command-line debugger…
and I vastly prefer my IDE’s debugger
With the ability to edit code and have it reflected in a container, plus the ability to shell into a container, debugging code is only a step away.
I only need to run the debugger in the container, after editing the code in question, right?
While this is certainly true – I can use the command-line debugger of my programming language from inside a Docker container – it is not the only option.
How is it possible, then, to use the debugger from my favorite IDE / editor, with code in a container?
The short answer is “remote debugging”.
The long answer, however, is very dependent on which language and runtime is used for development.
With Node.js, for example, I can do remote debugging over a TCP/IP port (5858). To debug through a Docker container, then, I only need to expose that port from my Docker image (the “dev.dockerfile” image, of course).
# ...
EXPOSE 5858
# ...
With this port exposed, I can shell into the container and use any of the typical methods of starting the Node.js debugging service before attaching my favorite debugger.
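As a rough sketch – publishing the exposed port with “-p”, and assuming “app.js” is the app’s entry point (the exact debug flags vary by Node.js version):

$ docker run -it -p 5858:5858 myapp:dev /bin/sh

# then, inside the container:
$ node --debug-brk=5858 app.js

With the debug service listening, I can point my IDE’s remote debugger at localhost:5858.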
Want to See Visual Studio Code Debug a Node.js Container?
Myth #6: I have to “docker run” every time
and I can’t remember all those “docker run” options…
There is no question that Docker has an enormous number of command-line options. Looking through the Docker help pages can be like reading an ancient tome of mythology from an extinct civilization.
When it comes time to “run” a container, then, it’s no surprise that I’m often confused or downright frustrated, never getting the options right the first time.
What’s more, every call to “docker run” creates a new container instance from an image.
If I need a new container, this is great.
If, however, I want to run a container that I had previously created, I’m not going to like the result of “docker run”… which is yet another new container instance.
I don’t need to “docker run” a new container every time I need one.
Instead, I can “stop” and “start” the container in question, and it behaves as expected.
This also persists the state of the container between runs, meaning I will be able to restart a container where it left off. If I’ve modified any files in the container, those changes will be intact when the container is started again.
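For example – giving the container a name makes it easy to reference later (the name here is just an example):

$ docker run -d --name myapp-dev myapp:dev
$ docker stop myapp-dev
$ docker start myapp-dev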
Want to See Start and Stop in Action?
If you’re new to the idea, however, I recommend watching the episode on basic image and container management, which covers stopping and re-starting a single container instance.
Myth #5: Docker hardly works on macOS and Windows
and I use a Mac / Windows
Until a few months ago, this was largely true.
In the past, Docker on Mac and Windows required the use of a full virtual machine with a “docker-machine” utility and a layer of additional software proxying the work into / out of the VM.
It worked… but it introduced a tremendous amount of overhead while limiting (or excluding) certain features.
Fortunately, Docker understands the need to support more than just Linux for a host operating system.
In the second half of 2016, Docker released the official Docker for Mac and Docker for Windows software packages.
This made it incredibly simple to install and use Docker on both of these operating systems. With regular updates, the features and functionality are nearly at parity with the Linux variant, as well. There’s hardly a difference anymore, and I can’t remember the last time I needed an option or feature that was not available in these versions.
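Once installed, a quick check from any terminal confirms that everything is working:

$ docker version
$ docker run hello-world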
Want to Install Docker for Mac or Windows?
WatchMeCode has free installation episodes for both (as well as Ubuntu Linux!)
Myth #4: Docker is command-line only
and I am significantly more efficient with visual tools
With its birthplace in Linux, it’s no surprise that Docker prefers command-line tooling.
The abundance of commands and options, however, can be overwhelming. And for a developer who doesn’t spend much time in a console / terminal window, this can be a source of frustration and lost productivity.
As the community around Docker grows, there are more and more tools that fit the preferences of more and more developers – including visual tools.
Docker for Mac and Windows include basic integration with Kitematic, for example – a GUI for managing Docker images and containers, on my machine.
With Kitematic, it’s easy to search for images in Docker repositories, create containers and manage the various options of my installed and running containers.
Want to See Kitematic in Action?
Myth #3: I can’t run my database in a container.
It won’t scale properly… and I’ll lose my data!
Containers are meant to be ephemeral – they should be destroyed and re-created as needed, without a moment’s hesitation. But if I’m storing data from a database in my container, deleting the container will delete my data.
Furthermore, database systems have very specific methods in which they can scale – both up (larger server) and out (more servers).
Docker, it seems, is specialized in scaling out – creating more instances of things when more processing power is required. Most database systems, on the other hand, require specific and specialized configuration and maintenance to scale out.
So… yes… it’s true. It’s not a good idea to run a production database in a Docker container.
However, my first real success with Docker was with a database.
Oracle, to be specific.
I had tried and failed to install Oracle into a virtual machine, for my development needs. I spent nearly 2 weeks (off and on) working on it, and never even came close.
Within 30 minutes of learning that there is an Oracle XE image for Docker, however, I had Oracle up and running and working.
In my development environment.
Docker may not be great for running a database in a production environment, but it works wonders for development.
I’ve been running MongoDB, MySQL, Oracle, Redis and other data / persistence systems for quite some time now, and I couldn’t be happier about it.
And, when it comes to the “ephemeral” nature of a Docker container? Volume mounts.
Like the code editing myth, a volume mount provides a convenient way of storing data on my local system and using it in a container.
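For example, the official MongoDB image keeps its data in “/data/db”, so I can mount a local folder over that path (the local path here is just an example):

$ docker run -d --name mydb -v /my/data/mongo:/data/db mongo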
Now I can destroy a container and re-create it, as needed, knowing I’ll pick up right where I left off.
Myth #2: I can’t use Docker on my project
because Docker is all-or-nothing
When I first looked at Docker, I thought this was true – you either develop, debug, deploy and “devops” everything with Docker (and two-hundred extra tools and frameworks, to make it all work automagically), or you don’t Docker at all.
My experience with installing and running a database, as my first success with Docker, showed me otherwise.
Any tool or technology that demands all-or-nothing should be re-evaluated with an extreme microscope. It’s rare (beyond rare) that this is true. And when it is, it may not be something into which time and money should be invested.
Docker, like most development tools, can be added piece by piece.
Run a development database in a container.
Then build a single library inside a Docker container and learn how it works.
Build the next microservice – the one that only needs a few lines of code – in a container, after that.
Move on to a larger project with multiple team members actively developing within it, from there.
There is no need to go all-or-nothing.
Myth #1: I won’t benefit from Docker… At all…
because Docker is “enterprise”, and “devops”
This was the single largest mental hurdle I had to remove, when I first looked at Docker.
Docker, in my mind, was this grand thing that only the most advanced teams – teams with scalability concerns I would never see – had to deal with.
It’s no surprise that I thought this way, either.
When I look around at all the buzz and hype in the blog world and conference talks, I see nothing but “How Big-Name-Company Automated 10,000,000 Microservices with Docker, Kubernetes, and Shiny-New-Netflix-Scale-Toolset”.
Docker may excel at “enterprise” and “devops”, but the average, everyday developer – like you and me – can take advantage of what Docker has to offer.
Give Docker a try.
Again, start small.
I run a single virtual machine with 12GB of RAM, to host 3 web projects for a single client. It’s a meager server, to say the least. But I’m looking at Docker – just plain old Docker, by itself – as a way to more effectively use that server.
I have a second client – with a total of 5 part-time developers, covering less than 1 full-time person’s worth of hours every week – that is already using Docker to automate their build and deployment process.
At this point, I build most of my open source libraries for Node.js apps with Docker.
I am finding new and better ways to manage the software and services that I need to install on my laptop, using Docker, every day.
And remember …
Don’t Buy The Hype or Believe The Myths
The mythology around Docker exists for good reason.
It has, historically, been difficult to play with outside of Linux. And it is, to this day and moving forward, a tremendous benefit to enterprise and devops work.
But the mythology, unfortunately, does little to help the developer that could benefit the most: You.
If you find yourself looking at this list of myths, truths and solutions, still saying, “Yeah, but …”, I ask you to take some time and re-evaluate what you think about Docker, and why.
If you still have questions or concerns about how a development environment can take advantage of Docker, get in touch. I’d love to hear your questions and see if there’s anything I can do to help.
And if you want to learn the basics of Docker or how to develop apps within it, but don’t know where to start, check out WatchMeCode’s Guide to Learning Docker (from the ground up) and the Guide to Building Node.js Apps in Docker.