Arpit Mohan

Who Ate Docker's Lunch?

Yesterday, Mirantis acquired Docker Enterprise, which includes the registry, the enterprise accounts, and essentially everything of value owned by Docker Inc. The company is now left with a shell of its former business. The sale amount is not public, but it is widely understood to be modest.

Docker was once a darling of the tech world. Today we are left wondering - Who ate their lunch?

What did Docker do well?

1. Remarkable developer UX

Solomon Hykes & co. took an older, little-known technology, Linux Containers (LXC), and created a beautiful developer experience around it.
It was old wine (LXC) in a new bottle. It let developers create reusable, redeployable binaries, and it was incredible. Once a container was built, you could run docker run on any Linux system and it would just work. This was the promise of Java jars in the past, just on a more generic and wider scale.

2. Faster REPL cycles

Creating a layered structure for Docker images (akin to Git) was another masterstroke. A developer could reuse pre-built layers across builds, which reduced incremental build times dramatically. In the developer world, faster REPL cycles lead to faster adoption; always. And that's exactly what happened.

On the downside, this design created bloated Docker images. Multiple workarounds were introduced to counter this, but image bloat remains one of the biggest challenges of the container world.
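The caching behavior described above can be sketched with a hypothetical multi-stage Dockerfile (the image names and paths are made up for illustration). Each instruction produces a layer, so ordering rarely-changing steps first lets incremental builds hit the cache, and copying only the final binary into a slim base image is one common workaround for the bloat problem:

```dockerfile
# Stage 1: build environment -- layers here are cached between builds.
FROM golang:1.21 AS build
WORKDIR /app
# Dependency layer: only invalidated when go.mod/go.sum change.
COPY go.mod go.sum ./
RUN go mod download
# Source layer: invalidated on every code change, but the layers
# above are reused from cache, so incremental builds stay fast.
COPY . .
RUN go build -o /bin/server .

# Stage 2: the final image carries only the binary, not the toolchain,
# countering the image bloat the layered design tends to create.
FROM debian:bookworm-slim
COPY --from=build /bin/server /usr/local/bin/server
CMD ["server"]
```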

3. Run Anything, Anywhere

For better or for worse, most developer machines are not replicas of their production environment. For example, while I code on a MacBook, our production environment is a cluster of Debian machines. If you work in an enterprise, you might even be required to use Windows as your primary dev environment.

This disparity creates a whole new set of headaches. It's hard to develop and debug for a system that you are not well versed in.

Letting developers run one OS's environment inside another was a huge accomplishment.

4. Rise of "Devops"

The meaning of the word "DevOps" is highly contentious; it means different things to different people. But Docker, more than anything else, got developers to stop throwing code over the wall to sysadmins who then had to run and maintain it in production. This led to hybrid teams where the dev and ops folks work closely with each other, which could only happen by making ops more approachable to devs (and vice versa).

As a dev, if I can run a Docker container on my local machine and trust that it will behave the same way in production, I have a lot more confidence in my ability to troubleshoot production issues.

Where did Docker go wrong?

Most developer-tool companies (JetBrains, HashiCorp, etc.) start out with a popular product that keeps them top-of-mind for developers. But it's hard to monetize and build a long-lasting company on a single product. As a company, you need to build second- and third-tier products that ride on the popularity of the first. This suite of products then comes together to create a force to be reckoned with.

Take, for instance, the successful developer-tools company HashiCorp. One of their early products to become popular was Terraform, a multi-cloud provisioning system: you write a simple config file and provision infrastructure across any cloud provider. They capitalized on its popularity and created a suite of products such as Consul and Vault, each designed with enterprise plans in mind. These allowed enterprise teams to collaborate, cluster, and monitor their production systems.

Docker, on the other hand, was never able to create a successful second-tier product. If you look at their website, the product offerings are limited. Docker Hub was necessary but not enough to sustain the company. Docker Swarm (which could have been that second act) was an inferior technology compared to Kubernetes, the big daddy of orchestration today.

While the initial promise of "build once, run anywhere" is great, managing production environments is a whole different beast. Running clusters of machines, managing security, handling network partitions, and building redundancy at every level are what keep sysadmins constantly on their toes. The experience of using Swarm in production is less than ideal; it just doesn't live up to these requirements.

In this sphere, Kubernetes did a much better job (even though its dev UX is rough) at running production workloads with little hassle.

Observability products such as Prometheus and New Relic capitalized on the fact that Docker containers, being isolated processes, were harder to monitor. Another missed opportunity for Docker Inc.

Being able to expose monitoring data out-of-the-box could have been a huge win. It could have also ensured that as a developer, I was tied into the ecosystem.

All of these missed opportunities are hard problems; they aren't solved overnight. But Docker had time to solve them. It was the highly valued darling of the tech world, after all. At its peak, Docker had investors willing to fund its future and developers dying to work for the company.

Docker Inc did introduce consulting services for enterprises. But the revenue from them was service revenue, and service revenue isn't regarded as highly as product revenue because services don't repeat or scale well.

Docker was great at building technology, but the fact remains that it always struggled with monetization. There is a lot to learn from Docker's pioneering vision as well as from its market struggles.

I wish to see the technology thrive, and I'm optimistic that Mirantis will do justice to Docker's legacy.

Top comments (6)

Tim Downey

This is all based on my first-hand experiences and things I've pieced together over the years, so take it with a grain of salt. But anyway, here are my thoughts on why.

Part of the problem is that Linux containers have become commoditized. Linux containers rely on a set of primitives provided by the Linux kernel that anyone can make use of: things like namespaces, cgroups, seccomp, etc. Projects like LXC for manipulating these constructs existed before Docker, and it has only gotten easier to work with them since.

As this post mentions, Docker's primary contribution was its excellent UX that brought Linux containers to the masses. The Dockerfile was an incredible abstraction when it was first introduced and the Docker container "image" format was very convenient.

However, other OSS projects and container standards emerged over time. Other containerization projects like rkt were developed, and this put pressure on Docker. They eventually worked to ensure that the Docker image format became a standard and the Open Container Initiative was formed. You can read more about that history here.

This ensured the continued relevance of Docker and was great for the developer community at large. For example, Cloud Foundry, a project that I've been a part of, had independently developed its own containerization engine back in 2011 -- prior to the rise of Docker. Once the dust had settled on OCI, we soon migrated to this standard, and so did many others.

At this point folks had an easy way of packaging up software. However the actual deployment and orchestration of these containers was a problem that remained unsolved. Docker tried to pivot toward solving this with Docker Swarm and intended on making money off of their enterprise offering.

Unfortunately, Kubernetes showed up. Kubernetes was free, open source, and worked great with Docker (OCI) images out of the box. Since OCI is a standard, if Docker did anything to make itself work less seamlessly with Kubernetes, it could be swapped out for another compatible container runtime. I think they just had a hard time competing with that. 😔

Arpit Mohan

This is an amazing comment. Thanks!

I agree that, in the end, monetizing an open source product and an open standard is really hard. While they help build an entire industry, and many other businesses rely on them, just surviving is a struggle. Unfortunately, Docker (like Netscape) will go down in the annals of history as a company that gave us great tech but didn't thrive as a great business.

Arpit Mohan

There are other container runtimes, such as rkt, but they aren't as widely adopted. The entire industry has rallied around Docker itself.

The technology is kick-ass, no doubt. But good technologies need not always translate into good businesses. All developers, engineers and hard-core techies learn this lesson the hard way. See Netscape & AMD as other examples in the same vein.

Tim Downey

Yep, Docker images are OCI-compatible images. These days there are actually a whole bunch of projects that can build these images. The images themselves are just glorified .tar files, so there's nothing really too special about them. This blog post goes into detail on how they work today (and what they may look like in the future):
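To make the "glorified tar file" point concrete, here's a minimal sketch using only Python's standard library (the file name and contents are made up purely for illustration). It builds one layer as a plain tar archive and content-addresses it by its sha256 digest, the same scheme OCI registries use for blobs:

```python
import hashlib
import io
import tarfile

# Build a single "layer" the way OCI layers are built: a plain tar
# archive of filesystem contents. (Real layers also carry ownership,
# permissions, and whiteout entries for deletions.)
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello from a container layer\n"
    info = tarfile.TarInfo(name="etc/greeting")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

layer_bytes = buf.getvalue()

# An OCI registry content-addresses every blob by its sha256 digest:
digest = "sha256:" + hashlib.sha256(layer_bytes).hexdigest()
print(digest)

# Reading it back needs nothing Docker-specific -- it's just tar:
with tarfile.open(fileobj=io.BytesIO(layer_bytes)) as tar:
    print(tar.getnames())  # ['etc/greeting']
```

A registry simply stores that blob under its digest; the image manifest is another small blob that lists the layer digests plus a config.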

So really, what Docker adds today is the Dockerfile/docker CLI UX and their container runtime. But even the runtime aspects have become commoditized with projects like containerd. (This post explains more about the relationship between containerd and Docker.)

It's really an interesting ecosystem and a great time to be a dev. :) It is just unfortunate smaller companies like Docker have had trouble monetizing it.

Nguyen Kim Son

Great write-up, Arpit! Another thing I've felt is that Docker has been stagnating for the last two years: no meaningful features added, and lots of major bugs remain unfixed, along with an unclear direction about what they want to focus on next. I hope the team that's left learns from this lesson and continues making Docker a developers' darling.

Arpit Mohan

I agree. I hope in the next few years, we see sustained innovation and development from the Docker dev team.