4 Ways Docker Changed the Way Software Engineers Work in the Past Half Decade

Geshan Manandhar on December 01, 2018

10 years back it was Git that transformed the way software engineers worked. Half a decade back it was Docker that brought the container to the m...
 

Nice post, but a few counterpoints.

Ship the whole stack, not just code

This comes with a bunch of caveats. It doesn't remove dependency management issues: if your software needs to talk to something else, both sides need to agree on a protocol. These days people use protobuf to define the message format, so when you build a new container, everything in production still needs to agree on the protobuf message formats, and Docker doesn't automatically solve that. There are many other examples, so containers being self-contained isn't exactly true; there are a lot of gotchas attached, like the sketch below.
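
A toy sketch of that failure mode, using JSON instead of protobuf to keep it short (the field names and the rename are made up for illustration):

```python
import json

# The *new* container image renames "user_id" to "uid" in the event payload.
def new_producer_event() -> str:
    return json.dumps({"uid": 42, "action": "login"})

# A consumer still running the *old* image expects the old field name.
def old_consumer(raw: str) -> int:
    event = json.loads(raw)
    return event["user_id"]  # KeyError: shipping a new "self-contained"
                             # image did nothing to keep this contract in sync

try:
    old_consumer(new_producer_event())
except KeyError as missing:
    print(f"contract broken across containers: missing field {missing}")
```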

Allocate only needed resources to the application and scale horizontally

How do you allocate only a slice of a database? The system is only as scalable as its least scalable component, and the database is usually that component. Even if you can scale your frontend from 10 containers to 100, if your database can't deal with the load, you've potentially made things worse.
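
Some back-of-envelope math; the pool size and connection limit are illustrative assumptions (e.g. a default-ish Postgres setup):

```python
POOL_PER_CONTAINER = 10    # connections each app container keeps open
DB_MAX_CONNECTIONS = 200   # hypothetical limit on the shared database

for containers in (10, 100):
    demand = containers * POOL_PER_CONTAINER
    verdict = "fine" if demand <= DB_MAX_CONNECTIONS else "database saturated"
    print(f"{containers:>3} containers -> {demand:>4} connections: {verdict}")
```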

Security is baked in

Containers share the host kernel, so they're less secure than virtual machines, which virtualize the kernel as well.
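
Easy to verify yourself: run the snippet below on the host and then inside any container on that host, and both report the same kernel release, because the container doesn't get its own kernel.

```python
import platform

# Inside a container this still reports the *host* kernel version,
# e.g. "5.15.0-97-generic" on both sides.
print(platform.release())
```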

Deploy faster with zero downtime

This isn't a property of containers. Zero-downtime deployment is an architectural property: you can have it with plain old processes not running in containers, and with VMs just as easily.
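
A minimal sketch of the idea with no containers at all: the "load balancer" is just an atomic pointer to whichever backend is live, and the backends here are plain functions standing in for plain OS processes.

```python
import threading

def blue(request: str) -> str:   # currently deployed version
    return f"v1 handled {request}"

def green(request: str) -> str:  # new version we want to roll out
    return f"v2 handled {request}"

class Balancer:
    def __init__(self, backend):
        self._backend = backend
        self._lock = threading.Lock()

    def handle(self, request: str) -> str:
        with self._lock:
            backend = self._backend
        return backend(request)   # in-flight requests keep completing

    def switch(self, backend) -> None:
        with self._lock:          # atomic cutover, no dropped requests
            self._backend = backend

lb = Balancer(blue)
print(lb.handle("req-1"))  # served by v1
lb.switch(green)           # flip traffic once v2 looks healthy
print(lb.handle("req-2"))  # served by v2: zero downtime, zero containers
```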

 

I think these objections really are based on strange premises.

Docker has become the deployment format for applications because packaging at the distribution level sucks arse. This has nothing to do with messaging, or am I missing the point here?

The point with scalability is that it becomes modular: scaling only parts of the application is possible, so you can use your resources more efficiently if you have to.

Security is baked in, which means you can restrict privileges at the application level.
Second: after a year of patches for the whole Spectre/Meltdown crap, how is "a virtual machine is more secure than x" an argument at all? Virtual machines weren't any better off. The only "secure" separation, if that makes any sense, is physical separation with only one application running.
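
For what I mean by restricting privileges at the application level, here's a minimal Unix-only sketch of a process dropping root after startup; container runtimes do the moral equivalent with user namespaces and capability sets. The UID/GID 65534 ("nobody") is an assumption, adjust for your system:

```python
import os

def drop_privileges(uid: int = 65534, gid: int = 65534) -> None:
    """Drop root privileges; uid/gid default to the usual 'nobody'."""
    if os.getuid() != 0:
        return                 # already unprivileged, nothing to drop
    os.setgroups([])           # shed supplementary groups first
    os.setgid(gid)             # group before user, or setuid locks us out
    os.setuid(uid)

drop_privileges()
print(f"running as uid={os.getuid()} gid={os.getgid()}")
```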

The only point I tend to agree with is the one about zero-downtime deployments.

 

This new technology, released as open source by Amazon, seems interesting: the Firecracker microVM.

 
 

Thanks for your insights. I agree with some of them.

 
 

Not sure that Git has transformed anything. Before Git I used SVN and it also worked for me. Regarding Docker, it's not a security tool at all, and zero downtime was always available with a couple of servers and a load balancer. Strange article, but OK.

 

I have used CVS, SVN, and Git. If Git hasn't transformed version control, what has?

Yes, zero-downtime deployment has been around for a decade, but Docker and k8s made it easier.

 

What this completely ignores is the fact that, without proper development of whatever you want to "contain", Docker, Kubernetes, and the like will be useless.

I'd rather see an article leaving out the marketing blurbs and including the harsh reality.

To this day, even companies that provide "the cloud" or container services do not have a clear picture.

Containers do not solve anything; they are just a technical approach intended to get people thinking about what the right approach could be.

 
 

It is like: get on the hype train or die under it.

Containers do indeed help, though; the adoption can be seen everywhere, from production applications to development tooling and automation.

 