Notes
Requires Docker and at least 4 GB RAM (see the comment section below for further details)
Not applicable if you only work on a single app
Why I use Docker images
Version managers like pyenv and nvm are obsolete.
For example, virtualenv/pyenv only encapsulates Python dependencies.
Docker containers encapsulate an entire OS with several dependencies.
Docker already lets you pull ready-to-use images such as python, node, golang, etc.
docker pull python:3.9.0
# Dockerfile
FROM python:3.9.0
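Once the image is pulled, you can use the interpreter without installing anything on the host. A quick sketch (the tag is the one pulled above; `--rm` deletes the container on exit so nothing lingers):

```shell
# Start an interactive Python 3.9 REPL inside a throwaway container
docker run --rm -it python:3.9.0 python

# Or run a one-off command and discard the container afterwards
docker run --rm python:3.9.0 python -c "print('hello from a container')"
```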
Docker images and containers
A Docker image includes the elements needed to run an application as a container -- such as code, config files, environment variables, libraries and runtime.
A Docker container can be seen as a running copy, or printout, of that image.
For example, you can have one python image and multiple containers related to this image.
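You can see this relationship on the command line. A sketch (the container names `app-a` and `app-b` are made up):

```shell
docker pull python:3.9.0                                  # one image ...

docker run -d --name app-a python:3.9.0 sleep infinity    # ... first container
docker run -d --name app-b python:3.9.0 sleep infinity    # ... second container

# List all containers created from that single image
docker ps --filter ancestor=python:3.9.0
```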
Isolation with the OS (Clean Install)
You may not have Python installed on your OS; in that case you can pull the python 3.9 Docker image.
Then you can use Python, but only inside a Docker container.
docker pull python:3.9.0
You may have Python 2.7 installed on your OS and pull the python 3.9 Docker image.
They will not conflict, but you can only use Python 3.9 inside a Docker container.
If you don't want Python 3.9 anymore, just delete the containers created from the python 3.9 image and then the image itself.
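That cleanup is just as simple. A sketch (replace the container ID with your own):

```shell
# Find containers created from the image
docker ps -a --filter ancestor=python:3.9.0

# Delete the container(s) first, then the image itself
docker rm <container-id>
docker rmi python:3.9.0
```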
Multiple Versions
You can pull the python 2.7 docker image and the python 3.9 docker image.
They will not conflict with each other.
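Both versions live side by side, each in its own container:

```shell
docker pull python:2.7
docker pull python:3.9.0

docker run --rm python:2.7   python --version   # Python 2.7.x
docker run --rm python:3.9.0 python --version   # Python 3.9.0
```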
Multiple Layers
You can pull the python 3.9 docker image and install additional dependencies on top of it.
# Dockerfile
FROM python:3.9.0
...
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
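A complete version of that Dockerfile might look like this (a sketch; the `/app` paths, the contents of requirements.txt, and `main.py` are assumptions):

```dockerfile
# Dockerfile
FROM python:3.9.0

WORKDIR /app

# Copy and install dependencies first, so this layer is cached
# between code changes
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt

# Then copy the application code
COPY . /app

CMD ["python", "main.py"]
```

Build and run it with `docker build -t my-app .` followed by `docker run --rm my-app`.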
Upgrade - Switch versions
You can pull the python 3.8 docker image and then switch to the 3.9 image.
# Dockerfile
- FROM python:3.8.0
+ FROM python:3.9.0
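After editing the FROM line, a rebuild picks up the new version; nothing on the host changes (the `my-app` tag is an assumption):

```shell
docker build -t my-app .                  # rebuild on the new base image
docker run --rm my-app python --version   # Python 3.9.0
```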
Example : python
# Dockerfile
FROM python:3.9.0
# Dockerfile
FROM python:3.9.0-alpine
Example : node
# Dockerfile
FROM node:14.16.0
# Dockerfile
FROM node:14.16.0-alpine
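As with Python, you can use node entirely inside containers. A sketch:

```shell
# Interactive Node.js REPL in a throwaway container
docker run --rm -it node:14.16.0 node

# One-off command against the smaller alpine variant
docker run --rm node:14.16.0-alpine node -e "console.log(process.version)"
```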
Latest comments (41)
Stop using Docker images and Use Linux Containers, lol jk.
Another alternative is nix with nix-shell, which can result in just-as-reliable reproducible development environments (especially when combined with direnv and lorri).
I couldn't imagine running a container for everything going on day to day. I think my mac would actually explode and burn my house down.
I'd be curious to know what your IDE integration experience is like.
Hi, I just used docker volumes, so it is easy to integrate with the IDE (ex: node_modules)
I don't agree with your post. I use the Python from my system when developing, and I use images for testing (sometimes) and for external services and software that I need (ElasticSearch, Mongo, PostgreSQL).
Docker is painful, slow and remote debugging is tricky.
I use nvm and pyenv with huge success most of the time, but I also know that it can be a huge struggle when these tools fail to deliver what they promise.
That's why I don't use Docker for these tasks but I often use it for starting a local test database with one command and throwing it away when I don't need it anymore.
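That throwaway-database pattern can be a single command. A sketch (the container name, password and port are made up):

```shell
# Start a disposable PostgreSQL instance; --rm deletes it when stopped
docker run --rm -d --name dev-db \
  -e POSTGRES_PASSWORD=devpassword \
  -p 5432:5432 \
  postgres:13

# ... develop against localhost:5432 ...

# Stop it; the container and its data are gone
docker stop dev-db
```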
But I also use Docker when I don't want to install a certain tool in development. Then I quickly write a docker-compose.yml and mount the development folder into the container. Then there is no need to rebuild the container for every change, and I can run the tool with a shell inside the container.

Or... perhaps just use what works best for your workflow, no matter how many articles say you should "Stop using that one thing that literally does the job for you".
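A minimal docker-compose.yml for that mount-the-dev-folder workflow might look like this (a sketch; the image, service name and paths are assumptions):

```yaml
# docker-compose.yml
version: "3.8"
services:
  app:
    image: node:14.16.0
    working_dir: /app
    volumes:
      - .:/app            # mount the development folder; no rebuild on change
    command: sleep infinity
```

Start it with `docker-compose up -d`, then `docker-compose exec app sh` gives you a shell inside the container.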
Docker is a regular part of my workflow, but no, I'm not going to spin up a Docker container every single time I want to run a Python file with particular packages. I'm quite proficient with virtualenv, and it works very well.
Meanwhile, nvm saved my hide when testing a PR in an Electron app in an actual user environment, which Docker is not (without a lot of wasted time building it, at least).

To say "only use Docker images for building" is like saying "always rent a food truck when you need to cook supper, rather than wasting your time cleaning and setting up your home kitchen." It's overkill. There's a place for both, and one cannot just replace the other.
But then, "don't use X, use Y" is virtually always fad-chasing, rather than fact-based.
I'd prefer mixing Docker images and package managers. For daily-use languages/applications such as golang, python3.8 (creating venvs when needed), node, etc., I install them directly in the OS; for all other temporary components, used for specific purposes, I use docker and expose their ports to the OS for communication with the app I'm developing.
The benefit of this approach is that it's easy for me to quickly run a command, a Python script, etc., with no need to remember different docker syntax or to turn docker on. Temporary docker containers and data can easily be destroyed after use as well, which keeps my machine clean.
I'm just still not comfortable using Docker for development, especially when coding and debugging, because remote debugging is not the right choice for me.
BTW I always use a Docker image to install things like a database or a service broker.
Man, I think I must be misunderstanding this docker craze. Like I once tried to implement a Vue.js development environment on docker, and it was such a mission to get it to run locally on docker. It took me half a day to put together from several different articles, and then it still required all sorts of hacks and tricks to run locally vs building for production, for instance. In the end I gave up, and installed node, npm and vue to my pc and carried on working like I always have, with zero headaches.
So I suppose, I'd like to know what it really means to replace dev environments with docker, if it doesn't mean that you're setting up a fresh project to run locally on a docker container, and then ultimately building to a docker container?
Nothing in our workflow requires docker for dev, but it is handy.
For example, we have a separate repository with branches that are environment names. Throw a war (or jar) into the right directory, and then docker-compose up ....
That way, I can check out a different branch and get the config for a different environment, spin up my container with my version of the application connected to that environment, and triage issues or test that my new build doesn't break something there.
Sure, I could manage that in other ways, but some of our stuff runs in Tomcat, and I'd rather not have to worry about configuring my machine to match the config of some other environment, just so I can test stuff, when it's less than 10% of my time to test stuff like that anyway.
I did know a guy, that ran his whole IDE in a container, and used it to produce images. I think he was a little bonkers though.
Coz it also seems to me that if you're successfully able to set up a containerized dev environment, you're willingly adding another vector for bugs and problems. Like if you run into a complicated bug on your containerized dev project, how do you know it's a coding bug and not an issue with your docker setup? And how do you explain your setup to the kind folks over on stackoverflow? I don't want to first have to type a 6-page essay on the configuration of my project every time I'm over there asking a question.