Think of "containers" for shipping goods: if you don't have those, you need to care about what exactly you are shipping. Technical devices need to be handled differently than, say, books or food.
With containers, all of that pretty much disappears. You have a standard-sized box that can easily be stacked, carried around, lifted onto a truck, and shipped all around the world - and only the person filling the container and the person unpacking it actually need to care what's inside.
With software containers, things are much the same. Running a Java application is considerably different from running, for example, a Node.js or Ruby on Rails server. Likewise, a Red Hat Linux server is subtly different from an Ubuntu or Debian installation. There are a bunch of things an application (directly or indirectly) depends upon, and these dependencies lead to an almost traditional clash between developers (who build code) and operations teams (who keep code running in production): the application crashes on a production server, the developer says "works on my machine" - and everyone has a hard time figuring out what exactly went wrong.
Enter containers: they try to establish a standardized way of packaging an application, including most (best case: all) of its required dependencies, and to make sure the operations team needs only a limited set of operations (start a container, stop a container, update the container to a newer version) to fully run an application - without having to bother much about which technology was used to build it or which operating system the developer built it on.
So from that point of view, containers add a bit more standardization compared to running a virtual machine - and make the process actually usable. You could indeed achieve the same thing with a VM, but then, instead of handing your operations people an application to run on a VM, you would have to completely build and configure a VM template they can just import into whatever environment they use (VMware, ...) and start without a second thought.
There's a lot more to containers, of course, but that should be the essence, I guess...
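To make the packaging idea concrete, here is a minimal, hypothetical Dockerfile for a Node.js app (the base image, port, and file names are just assumptions for illustration):

```dockerfile
# Start from an official Node.js base image (pinned version)
FROM node:20-alpine

# Copy the application and install its dependencies inside the image
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .

# Document the port the app listens on and define how to start it
EXPOSE 3000
CMD ["node", "server.js"]
```

Operations then only needs `docker build` and `docker run`; which runtime or OS the developer used no longer matters to them.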
This is what we usually call "application containers": they run a minimal environment tailored to support a single application (say, Nginx, or your bug tracker that connects to an external database). There is another (somewhat less common) type of container that runs almost an entire operating system and behaves like a VM - but instead of running a full OS on virtualized hardware, it shares the kernel with the host machine. This way you can have, say, a Fedora server environment on an Ubuntu machine without the overhead of a VM. At one of my previous jobs, I ran the latest Ubuntu LTS inside a container on a recent non-LTS Ubuntu workstation.
Sometimes the host OS exposes a "fake" kernel interface to the container, allowing it to behave as a different OS. This is how Solaris machines used to host Linux containers (not sure they still do).
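The two flavours look roughly like this on the command line (container and image names here are made up, and the system-container example assumes LXD is installed):

```shell
# Application container: one process (Nginx) in a minimal environment
docker run -d --name web -p 8080:80 nginx

# System container: an almost-full OS sharing the host kernel (LXD)
lxc launch ubuntu:22.04 my-ubuntu
lxc exec my-ubuntu -- bash   # feels like a VM, but no virtualized hardware
```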
Well, yes, you are right. There are loads of different notions and understandings of containers. Personally, from quite an abstract level, I tend to understand containers as a mere "structural interface" between operations and development - one that needs to be just as flexible as the environment is heterogeneous. Back in Java EE days, everything was "all Java EE 1.6": devs were using a Glassfish application server locally, and the same was running on the server. In such an environment, the Java EE application packages (such as .war or .ear files) do pretty well as containers. This doesn't work anymore, however, as soon as you have to deal with Node.js or Python in your development and deployment tool chain. If you need that, your interface between development and operations needs to be capable of managing it. Recent container technologies seem pretty good at exactly that. ;)
Man, this is amazing!! Thank you so much for taking the time to reply. I think you killed it!
Do you prefer any container provider to start with? For learning purposes for now...
You're welcome. :) I'd recommend having a look at Docker and working your way through the DigitalOcean tutorial on it (digitalocean.com/community/tutoria...), which is extremely concise and should contain everything you need to get started. Good luck, and feel free to ask... ;)
Five years old? Okay.
A container is a box with all the pieces required to play with a single toy. You can have multiple containers and combine your toys to create even better toys. Some boxes only have some of the pieces, and you need multiple boxes to build a single toy.
A virtual machine is like having a box with the pieces for a toy, but you need to eat all your vegetables first every time you want to play with it.
Docker is the name of one possible kind of box your toy can come in. It's the most widely used one, and its logo looks like a whale.
Azure Container Service and similar options from AWS and GCP are the glue you can use to combine your Docker boxes and create better toys.
Wonderful description, thank you. Now I think I understand the difference better myself.
The idea of using containers is to avoid all the issues you may hit when you develop, for example, on Windows and then upload your work to a Linux-based system. A lot of things may go wrong (for example, symbolic links, paths, OS-specific functions). So you run all your code inside the container, and it runs exactly the same on your computer and in the cloud.
As I see it, it's a cross-platform solution for developers.
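One way to see the cross-platform point in action (assuming Docker is installed; the image tag is just an example): a Linux container reports the same platform no matter which host OS the Docker engine runs on.

```shell
# On Linux, macOS, or Windows (Docker Desktop), the container
# always runs against a Linux kernel, so the result is the same
docker run --rm node:20-alpine node -e "console.log(process.platform)"
# prints "linux" regardless of the host OS
```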
Thank you for your reply. So, isolated environments - but when you say cross-platform, do you mean that if I have a container running Node.js on a Linux VM, I can take the same container and run it on a Windows machine?
More or less, yes. In your example, on Windows the container would run inside a lightweight Linux VM under the hood, so the same container works there with little extra configuration.
So you can have your very own transport ship and fill it with all the nasty cargo you like, but you have to constantly worry about refueling it and keeping it from sinking (owning a server).
You can also ask someone to man that ship, but he's going to transport other people as well, so he'll let you have your own cargo truck on board to pack with whatever you like. You can pick any truck brand, and you no longer need to worry about the ship sinking - but you still have to worry about the truck breaking down. There will be other people on board driving their own trucks, and you'll have no idea how big the ship is, which can limit you even though you know your truck very well (having a VM).
Finally, you can have someone man the ship and get your own little box on board. You don't need to worry about sinking, fueling, or machinery in general, and your box is much easier to load than a full-sized truck - but you're also confined to the box, the ship will be packed full of other people's boxes, and you'll have no idea how big the ship is (having containers).
Very good explanation, Gergely. Thank you.
An explanation for a 5-year-old: containers are a kind of package that has everything in it to run your things, which are known as applications. So whenever you want to run your things in a completely isolated environment, you look for one of these packages that has everything in it.
Now you can take this package and run your things wherever you want. There's no need to recreate an environment just like yours everywhere to run your things in the same manner.
An explanation for a 10-year-old: a container is a simple process that guarantees isolation from other processes. It's more like a different machine you are working on, but running on the same machine (same host and kernel). I wrote two posts about Docker; you can go there and learn along with me. :)
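A quick way to see both halves of that claim (assuming Docker is installed; the small `alpine` image is just an example):

```shell
# The container has its own hostname, filesystem, and process tree...
docker run --rm alpine hostname   # a container ID, not your host's name
docker run --rm alpine ps aux     # only a couple of processes visible

# ...but it shares the host's kernel
docker run --rm alpine uname -r   # same kernel version as the host
```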
Thank you, Rushal, I'll check your blog. :)
I don't know how to explain it to a five-year-old, but I had the exact same question only a couple of months ago. I found docs.docker.com/get-started/ and started running through the tutorials. From there I dockerized everything, and it has paid off.
If I were to sum up what a container is: a container is a running instance of a process (or processes) that is designed to be destroyed. You heard correctly. Containers. Are. Immutable. They don't store data permanently and should never be treated that way. Imagine you wanted to set up a web server to start developing a web application. OK, you need Node.js, Git, GCC, Make, Nginx or similar, and maybe PHP. Now you have to configure them: git clone, npm install, set up nginx.conf, run your npm start, and hope it still works when you come back.
Now you're done. You've got to tear it all down and destroy the virtual machine, lest you start mixing dependencies from other projects, losing configs, etc. I guess you could bash-script or Ansible the whole thing, but it's 2018 and we aren't animals.
Enter Docker. Now you can set up an "image" with all your runtime dependencies, like Node.js and Nginx. You might be tempted to "inject" your project by running git clone during the image build, but again, this goes against the principle of immutable containers. Instead, declare a volume with `VOLUME ["/usr/src/app"]`, and now you can start the container, telling Docker to map your cloned project on the server to /usr/src/app inside the container:

```shell
docker run -v /home/name/project:/usr/src/app repo/image:version
```
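Put together, the image described above might look like this hypothetical Dockerfile (the base image and package names are assumptions for illustration):

```dockerfile
FROM node:20

# Runtime dependencies baked into the image
RUN apt-get update && apt-get install -y nginx git \
    && rm -rf /var/lib/apt/lists/*

# The project itself is NOT cloned here; it is mounted at run time
VOLUME ["/usr/src/app"]
WORKDIR /usr/src/app

CMD ["npm", "start"]
```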
Now you're done again. Want to set up a new server with a new Node.js project using the same dependencies? Literally just install Docker, clone your project, run docker with -v again, and that's it. Hope this helped :D
The Children's Illustrated Guide to Kubernetes (a container management system):
Why would you use it when you can run your apps on a virtual machine?
That's actually easy to answer: would you rather ship an 800MB VM image or a 4MB container image?