
Eric Smalling for Snyk

Originally published at snyk.io

The Docker project turns 10! Looking back at a decade of containers


March 15, 2023 marked the 10-year anniversary of Solomon Hykes' famous PyCon lightning talk, when he introduced the world to Docker.

Let’s look back at how much has changed and hear from some folks who have stories about blazing the trail toward the containerized world we live in today. I opened up my little black book of container and cloud-native contacts and asked them to share stories about their introduction to Docker, and any interesting tales from the trenches over the decade since we first met Moby and Molly.

Hello wowrld!


Solomon Hykes speaking at PyCon 2013

In 2013, the world was introduced to Docker. For many developers, it was our first experience with the concepts of cgroups, namespaces, and other Linux technologies used to “contain” processes. As the Docker folks are fond of saying, they “democratized” container technology, making it simple to use without a degree in Linux systems administration.

Mind-blowing and Magical

Some adjectives seemed to come up repeatedly when I asked people about their first experiences with Docker; words like magical, mind-blowing, and “ah-ha moment.”


Nirmal Mehta speaking at DockerCon 2015

Picture it! Portland, Oregon, 2013, at the O’Reilly Open Source Convention. This was a few months after Docker was open sourced, and it was already gaining popularity within various IT communities: open source, cloud, DevOps, etc. It was the last day of the conference and I attended a Friday early-morning session about this new thing called Docker! Solomon Hykes was the speaker and he dove right into the container layer concepts, explaining the capabilities and underlying technologies. He then finished his presentation with a demo, where, if my memory serves me, he proceeded to spin up 5, 10, 15, 20! apache/httpd containers within seconds! Even to the small, bleary-eyed crowd that morning, and especially to me, it was mind-blowing … I instantly had the feeling that I was witnessing the start of a paradigm shift and I wanted to know everything about it!

I know Docker wasn’t the first to pull the underlying technologies together to create isolated processes/containers, but the user experience demonstrated that day — and carried through to today — was magical.

- Nirmal Mehta, Principal Specialist SA at AWS

Brandon Mitchell bundled up with Docker swag

I remember using Docker for the first time, after going through the process of setting up a VM to run nginx with tools like Ansible and Chef. I’d spend an hour coming up with a playbook, configuring it for the VM that was spun up, and then waiting for it to run the install. Then I ran the Docker command to start nginx, and less than 30 seconds later, it had started the process and was running it in this magical isolated environment. That was the hook that started my adventure to discover what had just happened and how Docker works.

- Brandon Mitchell, Solutions Architect at BoxBoat, an IBM Company
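
That first docker run is still a good illustration of the speed Brandon describes. A minimal sketch (the port mapping and container name here are just illustrative, not from his setup):

```bash
# Pull the official nginx image (if it isn't cached) and start it in the background,
# publishing the container's port 80 on the host's port 8080.
docker run -d --name web -p 8080:80 nginx

# Seconds later, an isolated nginx process is answering requests.
curl http://localhost:8080
```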

Discovering Docker was like discovering magical powers. The same thing had happened when virtualization took the place of physical servers, giving me the power to pack VMs into physical infrastructure and get the most use out of it. Docker was like descending into another dream level — now I could pack applications into VMs and get even better use out of the hardware.

- Adrian Goins, Retired Developer Advocate at Rancher/SUSE


Andy Clemenko, early Docker adopter and field engineer

In early 2015 I was asked by my employer to become a “Docker” expert. Knowing very little, other than a few Orange site posts, I started playing with it. As with everyone, I started with the classic Docker hello-world example of nginx. For me, the “ah-ha” moment was instant. Having been a long-time system admin, I saw the process isolation as a massive leap forward. From that day forward I shifted my entire career focus towards containers. I was lucky to help the US Government start adopting containers across quite a number of agencies. After a year and a half, I was fortunate to be able to continue the Docker journey by joining the company.

- Andy Clemenko, Field Engineer at Rancher Government Solutions

Doomsday scenario

David Flanagan, a year-one Docker adopter, quickly saw how it could revolutionize deployments at his company.


David Flanagan, Founder Rawkode Academy and fun dad

I first heard of Docker from Solomon’s PyCon demo in 2013. At the time I was working for a UK radio and magazine company as the Director of Development, trying to help them bring their business into the 21st century… digital transformation, right? Their biggest problem was scale, and we actually had a “doomsday” scenario of “How do we handle Lemmy from Motorhead dying?” Our load was extremely predictable, until it wasn’t. You can’t predict the news, and you have to scale in real time as quickly as possible.

We had used Vagrant and VMs for a while, but scaling those quickly is painful. We had to over-provision early and dial back fast when the “event” was over in order to reduce costs. So when we saw Docker it was a light-bulb moment. Except… Docker didn’t have “docker build” back then. However, it followed shortly after, and they adopted the tagline still used today: “Build. Ship. Run.” They didn’t just solve the runtime problem, they made the DX of building container images incredibly simple AND provided distribution (ship). Kubernetes and Cloud Native owe the early dotCloud team a huge debt of gratitude, regardless of which container runtime we may be running today.

- David Flanagan, Founder, Rawkode Academy
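
As a rough illustration of the “Build. Ship. Run” workflow David mentions, here is a minimal sketch; the registry, image name, and tag are hypothetical placeholders:

```bash
# Build: package the app from the Dockerfile in the current directory.
docker build -t registry.example.com/myteam/news-site:1.0 .

# Ship: push the image to a registry so any host can pull it.
docker push registry.example.com/myteam/news-site:1.0

# Run: start a copy wherever the traffic spike demands it.
docker run -d -p 80:80 registry.example.com/myteam/news-site:1.0
```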

Stabilizing the grid

I, too, have been using Docker since the very early days and — if I may be so bold — here’s how it went for me.


James Spurin’s Twitter screenshot: himself, Eric Smalling, Bret Fisher, Chad Crowell, Kunal Kushwaha, and Ramesh Kumar at KubeCon 2022

I started playing with Docker in late 2013, using it in our CI pipelines, where we were tasked with running hundreds of functional end-to-end tests against every commit of a large e-commerce web application. We were often running upwards of 2,000 browser instances via VM-based Selenium 2 grids during the peak hours of the day, and the bane of our existence was flaky test results due to grid stability issues. We were constantly spinning instances up and down in the cloud and tried multiple different strategies over the years to improve start-up times, maintain stability, and minimize costs. We even dabbled with spot instances and got into bidding wars in various regions!

We had already been experimenting with using Docker for running the web app and test suite controllers, but the lightbulb moment for me was when I realized that I could build an image with the browser and Xvfb in it and start it instantly. This allowed us to run stable Selenium grids with as many browsers as we could fit onto a single node! (Our grid stability issues usually stemmed from a node’s inability to scale up to many browsers, thus requiring many small nodes with a small number of browsers each.) This was so successful that we were largely able to abandon the cloud-based instances and use just a few local VMs for these grids, saving thousands of dollars per week.

- Eric Smalling, Sr. Developer Advocate at Snyk
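
A minimal sketch of that pattern: packing many containerized browser nodes onto one host. The image name and HUB_URL environment variable below are hypothetical placeholders, not the actual setup:

```bash
# Start 20 containerized Firefox + Xvfb browser nodes on a single host.
# "my-ci/firefox-xvfb-node" and HUB_URL are illustrative names only.
for i in $(seq 1 20); do
  docker run -d --name "browser-node-$i" \
    -e HUB_URL=http://selenium-hub:4444/grid/register \
    my-ci/firefox-xvfb-node:latest
done
```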

Convincing the masses

To anyone new to enterprise software development, the advantages of containers and the ecosystem that evolved around them might seem a foregone conclusion, but for the first 5 or 6 years, Docker’s future was far from certain.

Results speak for themselves

Change is often hard for humans, but Docker’s demonstrable benefits sold it easily:

I had a history of bringing in shiny new tech, so there was always an air of “Dave’s got another toy” frustration, and the majority of my team were Mac users, so they definitely hated me!

But Docker was mostly server-side for us; there wasn’t boot2docker yet, so the team continued with Vagrant for dev. We brought Docker into that environment eventually, when the story kind of told itself. The team saw how it simplified our deployment pipeline, and it won them over.

It was hard to argue against when our deploys went from 40 minutes to about 3!

- David Flanagan


Sevi Karakulak making shift happen!

When I first encountered Docker, I had been working as a software developer long enough to know how hard it can be to get a new member onboarded to the team with all of the tools they needed. Also, whenever we tried to introduce new frameworks and languages into the development environment — to eventually make everything a bit lighter — it was often a pain, because the usual reaction from most developers was an eye-roll.

Docker made our lives so much easier because it was just this one magical tool to teach the team, and then they’d see the benefits immediately without having to deal with all the complexity of installing and configuring everything. It was such a big moment of developer liberation: not having to install every tool on your tightly monitored machine, not having to chase requests to get approved, and not having to think about licensing. If there was an image, you were free to go.

- Sevi Karakulak, Engineering Lead at Container Solutions

Not “enterprise ready”

The term “enterprise ready” is subjective and can mean something different at every company you deal with. Matt Bentley recalls a time when, as a Solutions Engineer for Docker, he had to address how “attractive” Docker was — or, in this case, wasn’t — in the early days.


Matt Bentley at DockerCon 2019

… the customer loved Docker as a technology, [and] they loved the Docker Trusted Registry product, but until we had something that wasn’t just API and command line based, their leadership would never see it as being enterprise ready.

They had a process where they needed to see products demonstrated to them internally, and if they were shown what I did with a CI/CD pipeline built in Jenkins to take code, build and test it in a container, deploy it, and promote the image, they wouldn’t understand it, because enterprise-ready solutions had pretty user interfaces.

- Matt Bentley, Manager, Solutions Engineering at VMware

A more common hesitation was the relative immaturity of the project. Even though the underlying technologies had been around for many years, hitching your wagon to an open-source project spun out of a startup, with cartoon mascots and a pet turtle doing deployments, was a bridge too far for many.


Rachel Leekin and Eric Smalling at KubeCon EU 2022

I started learning about Docker in 2015 and, based on the potential I saw, I proposed it might be worth looking into for modernization instead of an older, expensive vendor’s solution at the time. The customer didn’t go with it because they said they were not sure about container technology, or whether it would even be around for the 5 years of the contract.

- Rachel Leekin, Containers Specialist SA at AWS

Self-inflicted wounds

Sometimes getting a team to adopt containers was not the hardest task; it was coaching them to do it right.


Adrian Goins and the Iron Condor, his paramotor quad

In the beginning, it was hard to convince my clients to do it… Those who did get the idea didn’t always get the benefits or how to leverage a container. I remember one client who had containers that they would shell into, where they’d recompile their application or its dependencies and then commit the container like it was a code repository. Their operational containers were huge and probably more fragile than their original non-container infrastructure.

- Adrian Goins
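
For readers who haven’t seen it, the anti-pattern Adrian describes looks roughly like this (container and image names are purely illustrative):

```bash
# The anti-pattern: treat a running container like a pet and snapshot it.
docker exec -it myapp bash           # shell in and rebuild the app by hand...
docker commit myapp myapp:snapshot   # ...then save the mutated container as a new image

# Every commit stacks another layer; the image balloons and nothing is
# reproducible from source. The fix is a Dockerfile and an image built in CI.
```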

Interest grows

As the years went on, more and more people started to see the potential containers provided, especially as orchestrators matured — although they brought challenges of their own. Adrian Mouat (@adrianmouat | @adrianmouat@hachyderm.io), author and another OG Docker Captain, recalls the early DockerCon conferences and the flurry of activity as people started to dip their toes in the water and companies began offering solutions to address their needs.


Adrian Mouat with Betty Junod, Eric Smalling, and Matt Jarvis at KubeCon NA 2022

One thing I really remember is how speakers (myself included) would ask, “Who’s using Docker?” in 2014. A forest of hands would go up. “Who’s using Docker in production?” Pretty much all the hands would go down.

Having said that, the number of people using Docker in prod before it even went 1.0 (June 2014) and was declared production ready was astounding.

We used to use shipping metaphors and talk about how it solved the “but it works on my machine” problem — unfortunately, k8s came along and recreated that one.

When I was at Container Solutions, we helped organize the first DockerCon EU at the NEMO Science Center. The buzz was pretty amazing; everyone knew they were at the start of something big. CoreOS were there (launching Rocket!), Alexis Richardson was pushing Weave networking, Luke Marsden was running ClusterHQ and the Flocker data manager, Timo Derstappen was doing amazing stuff with Giant Swarm. Enterprise companies were everywhere just trying to figure out what the hell was going on.

- Adrian Mouat, Product Manager at Chainguard


Bret Fisher wins “Tip of the Captains Hat” at DockerCon 18 Europe

I saw Solomon’s PyCon 2013 talk and tried Docker to improve our Node.js testing in 2014, and I just didn’t get it. I kept trying to shove a whole server into a container image (and failing), and the networking was complete voodoo to me. It was such a huge stack of new automation magic and concepts that I gave up and returned six months later. This time, it clicked and hit me like a train. The 1–2–3 combo of building images, storing them in registries, and running containers from them blew my mind, and I never looked back.

- Bret Fisher, DevOps dude & creator of Docker Mastery

The orchestrators

If Docker’s announcement at PyCon 2013 was the spark that lit the fire, the advent of container orchestration platforms was the wind that fanned it into a blaze. Mesosphere, Rancher, Swarm, Nomad, and Kubernetes were the “killer apps” that really pushed containers into the mainstream. Volumes have been written about the pros and cons of each platform, but it can’t be overstated how important they have been to the mass adoption of containers.

Kubernetes is the dominant platform today, but there is still a very loyal community of Swarm users, and Nomad has its pockets of popularity too. Mesos predates Docker, and I’m sure it still has a fair number of users as well. For some, Kubernetes seemed overly complex, but as its market share bears out, most people came around to adopting it.

When Kubernetes came along, I hated it. It was another layer of abstraction that didn’t provide a lot of additional benefit…until it did. Then I loved it because now I could take those servers, with their VMs, and their containers, and make a little mini datacenter out of it. The fact that a process existed to babysit the thing 24x7 meant that I was free to go do something else.

- Adrian Goins

While Kubernetes is now the de facto standard way to deploy containers, it is interesting to see the evolution of the space and how many abstractions are being built around it.

When it became clear that orchestrators were going to be key to container adoption at scale, I thought that there would be room for multiple major orchestrators to be successful. Each was good as its own thing, and I saw each as having ideal use cases. The thing I loved about Swarm was that I could take someone who understood simple docker run commands, or who could put together an easy-to-understand Docker Compose file, and get a Swarm service deployed in minutes. The simplicity of the docker CLI and tools like Docker Compose was brilliant because, if someone knew how to run containers on a single host with either, it wasn’t a big lift to get them orchestrating containers. Now the industry is working to abstract Kubernetes away from developers because it took too much focus away from developers developing and put it on orchestration. Time spent by developers on orchestration means less focus on delivering business value through the apps they’re working on.

- Matt Bentley
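
To Matt’s point about the small lift from single-host Docker to Swarm, here is a minimal sketch, assuming a host that has already run docker swarm init:

```bash
# Single-host Docker: run one web server.
docker run -d -p 80:80 nginx

# Docker Swarm: nearly the same mental model, now replicated
# and scheduled across the cluster.
docker service create --name web --replicas 3 -p 80:80 nginx
```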

Career-changing and life-changing


Moby, the Whale at Docker’s former SFO office

The common thread among everyone I reached out to for this article was how the Docker project and the community that has grown up around it have changed the careers and, indeed, livelihoods of so many people.

I knew it was the next evolution in infrastructure. I was obsessed and shifted my whole career to nothing but containers.

- Bret Fisher

From that day forward I shifted my entire career focus towards containers… To this day I am still on the mission of educating and leading the US Government on their container journey. It is amazing to see such a transformative technology last for 10 years.

- Andy Clemenko

The more I learned, the more I got hooked by the possibilities containerization offered. Eventually, I started teaching Docker and decided to make a pivot in my career towards containers and then Kubernetes. Looking back now, I can safely say that Docker changed my life.

- Sevi Karakulak

Snyk loves open source

We applaud the Docker project for a decade of transformational work, as evidenced by the powerful testimonies of those quoted here as well as the millions of developers who build, ship, and run in containers around the world every day.

Snyk was founded in 2015 with the goal of empowering developers to code and use open-source software securely — including the containers you put them in. Because we believe so strongly in the power and importance of the open-source development model, we have offered our scanning tools to individual developers free of charge from day one.

Keep your open source dependencies secure

Snyk provides one-click fix PRs for vulnerable open source dependencies and their transitive dependencies.

[Start free with Github](https://app.snyk.io/auth/auth0/github) | [Start free with Google](https://app.snyk.io/auth/auth0/google-oauth2)

In addition, for open-source project maintainers we offer expanded capabilities. See https://snyk.io/open-source-projects/ for program details and to submit your projects for consideration.

Attribution

Thanks to all who contributed to this article!

Nirmal Mehta

Brandon Mitchell

Adrian Goins

David Flanagan

Andy Clemenko

Sevi Karakulak

Matt Bentley

Rachel Leekin

Adrian Mouat
