Lawrence Murithi
Folders, Apartments, and Fake Computers: A Guide to Virtual Environments, Docker, and VMs

Introduction

If you have been writing code for a substantial amount of time, you have probably run into a frustrating problem: "It works on my computer, but it doesn't work on yours."
This happens because computers are set up differently. You might have a different operating system, a different version of a programming language, or different background software running. When a website or app breaks because of this, developers can lose hours or even days trying to figure out what the problem is.
To solve this, developers came up with ways to isolate software. Instead of installing an app directly onto your main computer, you put it inside a protective bubble. This bubble tricks the software into thinking it has its own private space, with exactly what it needs to run, so it won't mess with the rest of your system.
There are three main tools we use to create these bubbles: Virtual Environments, Virtual Machines (VMs), and Docker. While they all aim to solve similar problems, they do it in completely different ways, using completely different layers of your computer.
Let's break down exactly what each one is, how they compare, and when you should use them.

1. Virtual Environments

A Virtual Environment is a localized directory that contains a specific version of a programming language and the specific software packages required for a project. It is the simplest and lightest way to isolate a project and is most commonly used in Python (using tools like venv or virtualenv), although similar concepts exist in other languages.

How Virtual Environments work

A Virtual Environment provides no system-level isolation. It does not virtualize hardware, nor does it isolate the OS. It simply changes the PATH variable in your terminal session so that when you install a package or run a script, your shell uses the isolated folder instead of the computer's global system files.
Imagine you are building two different websites on your laptop. Website A is older and needs version 2.0 of a web framework like Django. Website B is brand new and needs version 4.0 of that exact same framework. If you install these tools directly onto your main computer system, they will conflict and one of your websites will stop working.
A virtual environment fixes this by creating a dedicated, private folder for your project. When you activate the virtual environment, it temporarily rewrites your computer's internal GPS, known as the system PATH. Because of this, your computer temporarily ignores its main, global list of tools and instead looks only at the tools installed inside that specific project folder.
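The PATH rewrite described above is easy to see for yourself. Here is a minimal sketch, assuming Python 3 is installed and using `.venv` as the (arbitrary) folder name:

```shell
# Create an isolated environment inside a folder named .venv
python3 -m venv .venv

# Activate it: this prepends .venv/bin to PATH for the current shell session
. .venv/bin/activate

# "python" now resolves to the project-local interpreter, not the global one
which python
python -c 'import sys; print(sys.prefix)'   # a path inside .venv

# Deactivate to restore the original PATH
deactivate
```

On Windows the activation script lives at `.venv\Scripts\activate` instead of `.venv/bin/activate`, but the idea is identical.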

Pros

• Extremely fast - Creating and starting a virtual environment takes seconds because it is mostly just creating a folder and a few small files.
• Lightweight - It only takes up a few megabytes of space on your hard drive. There is no heavy software running in the background.
• Simple to use - Usually, it just takes one or two simple commands in your terminal to get started and shut down.
• No dependency conflicts - It solves the problem of dependency conflicts between projects, since each project installs packages into its own folder.

Cons

• Weak isolation - It only isolates programming packages (like Python libraries). It does not isolate the operating system, the system clock, or your hardware settings.
• "It works on my machine" can still happen - Because the isolation is weak, hidden problems can sometimes slip through. If your code secretly relies on a specific font or a hidden system tool installed on your Mac, and you send your virtual environment code to a friend on a Windows PC, the code might still break.

Virtual environments are used on a local machine for day-to-day coding, when you are working on multiple projects in the same programming language but want to keep their dependencies separate from one another.

2. Virtual Machines (VMs)

A Virtual Machine is a complete software emulation of a physical computer. It runs its own full Operating System (Guest OS) entirely separate from the host computer's Operating System. It is the heaviest, most complete, and oldest form of isolation. Software like VirtualBox, VMware, or Microsoft Hyper-V allows you to do this.

How Virtual Machines work

If a virtual environment is like putting your code in a separate folder, a Virtual Machine is like buying an entirely new physical computer, shrinking it down, and putting it inside your current computer.
It uses a piece of software called a Hypervisor (such as VMware, VirtualBox, or Hyper-V). The hypervisor carves out a specific amount of your physical computer's RAM, CPU, and storage and dedicates it to the VM. You then install a full Operating System (like Windows or Ubuntu) onto that carved-out space. This new system, called the Guest OS, behaves like a real computer, while the main computer is called the Host.

Pros

• Complete isolation - What happens inside a VM stays inside the VM. Because the hypervisor strictly partitions the hardware, if a VM gets infected with a severe virus, your main host computer is almost always safe.
• Run different operating systems - You can run a full Windows computer inside a Mac, or a Linux computer inside Windows, allowing you to use software made for different platforms.
• Highly secure - Because the hardware is strictly separated at a deep level, it is trusted by banks, governments, and massive corporations for highly sensitive tasks.

Cons

• Massive resource hog - Since you are running a second operating system on top of your current one, VMs eat up a lot of RAM, CPU power, and battery life. Even if the VM is just sitting idle, it is still running background updates, managing a clock, and keeping a digital desktop alive, all of which wastes power.
• Huge files - A VM can easily take up 20 to 100 gigabytes of storage space just to hold the basic operating system files.
• Slow - Booting up a VM takes just as long as turning on a physical computer, and moving files in and out of it can be tedious.

VMs are used in large corporate cloud servers, or on a local machine when strict security is needed. They are critical when you need to test software on a completely different operating system, or when a business is running older, legacy applications that require an outdated OS to survive.

3. Docker (Containers)

Docker is a platform that uses containerization to package an application and all its necessary dependencies (libraries, frameworks, etc.) into a single, standardized unit called a container. Containers are the clever middle ground between the lightness of a Virtual Environment and the strict, heavy isolation of a Virtual Machine.

How Docker works

Every operating system is made of two main parts: the core engine (the kernel), which physically tells your RAM and CPU what to do, and the user files and tools that make up the desktop experience you see on screen.
While a Virtual Machine duplicates both parts, which is what makes it so heavy, Docker only duplicates the user files and tools. All Docker containers share the host computer's kernel.
Think of it like an apartment building. A Virtual Machine is like giving everyone their own separate house with their own separate plumbing and electricity. Docker is like an apartment complex where everyone has their own locked, private room (container) and can decorate however they want, but they all share the building's central plumbing and electrical systems hidden in the walls (the Host OS kernel).

To use Docker, you write a simple text file called a Dockerfile. It reads like a recipe: start with a bare-bones version of Linux, set up some default database passwords, download the latest PostgreSQL, and start the database server. Docker reads this file and packages the result into an image, which runs as a container. This container can be handed to anyone, and it will run exactly the same way, regardless of what computer they have.
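That recipe can be sketched as an actual Dockerfile. This is a minimal illustration built on the official `postgres` image; the password and database name are placeholders, not values from any real setup:

```dockerfile
# Start from the official PostgreSQL image (a bare-bones Linux underneath)
FROM postgres:16

# Default database password (placeholder; real setups should use secrets)
ENV POSTGRES_PASSWORD=example-secret

# Hypothetical database created automatically on first start
ENV POSTGRES_DB=myapp

# Document the port the database server listens on
EXPOSE 5432

# The base image's entrypoint starts the database server automatically
```

With this file in a folder, `docker build -t my-postgres .` packages it into an image, and `docker run -p 5432:5432 my-postgres` starts the database as a container.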

Pros

• Consistent everywhere - It solves the "it works on my machine" problem perfectly. A Docker container behaves exactly the same on a Mac, a Windows PC, or a cloud server because the environment inside the container never changes.
• Fast and lightweight - Because they don't boot up a full operating system kernel, containers start in seconds and usually only take up a few hundred megabytes of space.
• Easy to share and scale - You can run dozens or even hundreds of containers on the same computer without them fighting over resources. This allows developers to build microservices. Instead of building one massive app, you put the shopping cart in one container, the user login in another, and the payment system in a third. If the payment container crashes, the rest of the website stays up.

Cons

• Steeper learning curve - You have to learn Docker-specific terminal commands, how to write Dockerfiles, and how networking works to let containers talk to each other.
• OS limitations - Because Docker shares the host's kernel, you generally run Linux containers on Linux machines. Although Docker can run on Mac and Windows, it usually installs a tiny, hidden Linux Virtual Machine in the background to provide the Linux kernel, making Docker slightly heavier on Mac and Windows than it is on native Linux.
• Less secure than VMs - Because containers share the host kernel, the wall between them is thinner, so a critical vulnerability in the host kernel could potentially affect all containers.

Docker is used almost everywhere: on a developer's laptop, in automated testing environments, and in production running live websites on the open internet. It is used when building modern web applications, working with a team of developers who all use different computers, or breaking a large app down into smaller microservices.
It gives developers an isolated, highly reliable environment that is identical across all machines, without wasting your computer's RAM and hard drive space.
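The microservices pattern described above, with the shopping cart, user login, and payment system each in its own container, can be sketched with a Docker Compose file. Every service and image name here is hypothetical:

```yaml
# A hypothetical docker-compose.yml: three independent containers,
# each failing or scaling on its own without taking the others down.
services:
  cart:
    image: shop/cart:latest      # placeholder shopping-cart service image
    ports:
      - "8001:8000"
  login:
    image: shop/login:latest     # placeholder user-login service image
    ports:
      - "8002:8000"
  payments:
    image: shop/payments:latest  # placeholder payment service image
    ports:
      - "8003:8000"
```

Running `docker compose up` starts all three containers at once; if the payments container crashes, cart and login keep serving traffic.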

Similarities between the tools

The core similarity between all three is the concept of isolation.
They all exist to create boundaries between projects and software.
They also all make it easier to delete a project without leaving junk files behind; you just delete the virtual environment folder, the VM file, or the container image, and everything associated with that project is instantly gone, leaving your main computer perfectly clean.
In the real world, they are often used together. A large company might run a giant Virtual Machine in the cloud to provide security, put Docker inside that Virtual Machine to manage different web apps easily, and a developer might use a Virtual Environment inside a Docker container to organize their Python code.

The Major Differences

The difference lies in how much they isolate and how heavy they are.
• Virtual Environment (Lightest) - Isolates only the language packages but relies entirely on your computer for everything else.
• Docker (Middle) - Isolates the application and the operating system files, but shares the core OS engine (the kernel) to save power and speed.
• Virtual Machine (Heaviest) - Isolates absolutely everything. It clones the physical hardware and runs a 100% separate operating system, taking up a lot of space and power to provide maximum security.

Conclusion

If you are just writing a quick Python script to scrape a website, analyze some data, and need to install a few libraries without breaking your computer, use a Virtual Environment.
If you are building a web app, working with a database, collaborating with other developers, and need to make sure your code runs exactly the same way on your laptop as it will on your company's live servers, use Docker.
If you are on a Mac but absolutely need to run a piece of Windows-only enterprise software, or you are testing dangerous malware and need maximum security to protect your real computer, use a Virtual Machine.
