
How to actually make your life easier with Docker

Joe Sweeney on October 31, 2019

When Docker came out in 2013, the benefits being touted were pretty clear. "Full isolation from host machine and other apps", "perfectly-reproducib…"
 

Docker, or containerization, is one of those things that nobody really cared about ten years ago. But now... damn, I couldn't survive a single dev day without it. All the hassle of juggling different projects is simply gone, not least setting up different PHP versions correctly, which is a huge pain without Docker containers.
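As a rough sketch of what that looks like in practice (service names, ports, and mount paths here are made up; the `php` images are the official Docker Hub ones), two projects can pin different PHP versions side by side:

```yaml
# Minimal sketch: two containers pinned to different PHP versions.
# Service names, host ports, and mount paths are illustrative.
services:
  legacy-app:
    image: php:7.4-apache      # older project stays on PHP 7.4
    ports:
      - "8074:80"
    volumes:
      - ./legacy:/var/www/html
  new-app:
    image: php:8.2-apache      # newer project runs PHP 8.2
    ports:
      - "8082:80"
    volumes:
      - ./new:/var/www/html
```

Both versions run at once, and neither touches whatever PHP (if any) is installed on the host.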

 

Nobody cared about containerisation ten years ago, because it didn't exist ten years ago. :-)

 

Yes, but in 2008 you couldn't do much with it; only kernel developers were hacking on it. It was cgroups landing in the mainline kernel (2008) and LXC building on top of them that paved the way for Docker; LXC 1.0 itself wasn't finished until 2014, after Docker had already shipped.

 

"10 years" was not meant to specify the exact date when Docker was released, but in a more broader manner for container-solutions.

There were no container solutions worth mentioning before Docker. All we had was VMs. Well, there were jails in FreeBSD and zones in Solaris, and I guess z/OS also had a very enterprisey solution, but none of them were useful for everyday developers.

 

One of my favorite things about Docker is that there are so many images already available. Say I want to spin up Jenkins for a quick CI/CD setup: the image is already there. GitLab? It's there too, and without much configuration either!
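For example, a minimal sketch using the official `jenkins/jenkins` image from Docker Hub (the ports and home-directory volume are the commonly documented defaults):

```yaml
# Minimal sketch: spin up Jenkins from the official image.
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"    # web UI
      - "50000:50000"  # inbound agent connections
    volumes:
      - jenkins_home:/var/jenkins_home  # persist jobs and config
volumes:
  jenkins_home:
```

One `docker compose up` and you have a CI server on localhost:8080.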

 

"Full isolation from host machine and other apps"

I found this isn't actually true. I had to develop on some software of awful quality that communicated with many systems. It had never had a decent dev setup: endpoints, database connections, and config hard-coded all over the place.

This was also high stakes: the software was used to configure systems for communications that governments use to coordinate military operations.

At that time (I don't know about now), Docker would let anything connect to anything. It wasn't safe to run locally without the network cable pulled.

I ended up building a system around Docker and Docker Compose that would run the processes but also listen for network events and fully manage iptables, applying output rules.

In some circumstances it's dangerous to say "fully isolated" or "contained", because it's not true.

Technically, if you run with no network at all, then sure, I'm wrong. But the standard modes are messier: inbound is isolated by default, since you have to map or enable ports to let anything in, while outbound isn't isolated at all. And if you wanted outbound locked down except for a few allowed destinations, Docker wouldn't let you express that.

When you want a two-way network policy, one direction gives you allow/deny control and the other is just "allow everything", bruv.

I don't know how it is now, but this is an important consideration for legacy software. You want it truly isolated, but able to log when it tries to connect out, and then to be able to punch holes if need be. If you can get it set up safely like that, it's a life saver for software you can't trust.
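For what it's worth, Compose does now have one primitive that helps with the outbound side: a network marked `internal: true` gets no external connectivity, so containers can talk to each other but not out. A minimal sketch (the service and image names here are hypothetical stand-ins):

```yaml
# Minimal sketch: containers on an internal network can reach each
# other but have no route out to the host network or the internet.
services:
  untrusted-app:
    image: legacy/untrusted:latest   # hypothetical image of the untrusted software
    networks:
      - sandbox
  fake-endpoint:
    image: nginx:alpine              # stands in for a real remote system
    networks:
      - sandbox
networks:
  sandbox:
    internal: true   # blocks all external (outbound) connectivity
```

That's all-or-nothing, though; the log-outbound-attempts-and-punch-holes setup described above still takes iptables work outside of Docker.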

 

Yeah, full isolation might be a stretch, especially with outbound connections. Thanks for the input.

 

I have to agree. I'm finding more and more that Docker offers solutions to problems faster than I could set everything up manually.

The next problem is to stop spending so long trying out so many interesting images :)

 

I realised the power of Docker when I started using Hasura. Whenever I needed to add functionality to the server, all I had to do was add the relevant configuration to docker-compose.yml.

Docker allowed me to use a single DigitalOcean droplet to run Hasura and a Node.js authentication app side by side.
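Roughly what that looks like (a sketch under my assumptions, not the exact file: the `hasura/graphql-engine` image and `HASURA_GRAPHQL_DATABASE_URL` variable come from Hasura's docs, while the `./auth` Node.js app directory, ports, and password are illustrative):

```yaml
# Minimal sketch: Hasura and a Node.js auth app side by side on one droplet.
services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example   # illustrative; use a real secret in practice
    volumes:
      - db_data:/var/lib/postgresql/data
  hasura:
    image: hasura/graphql-engine:latest
    ports:
      - "8080:8080"   # GraphQL endpoint and console
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:example@postgres:5432/postgres
    depends_on:
      - postgres
  auth:
    build: ./auth     # hypothetical Node.js authentication app
    ports:
      - "3000:3000"
volumes:
  db_data:
```

Adding another service to the droplet is just another entry under `services:`.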

 