
Sumeet Dugg

My Local Python Setup Was Quietly Destroying Our Team's Productivity. Docker Fixed It.

How I moved our Python interpreter into a Docker container, wired it into VS Code, and never looked back.

Let me be upfront with you: I didn't want to write this article.

Not because the topic isn't worth it — it absolutely is — but because writing it means admitting that I let a fixable problem drag on for almost five months before I actually fixed it. Five months of "works on my machine." Five months of onboarding friction. Five months of my team losing hours to an issue that, once solved, took an afternoon.

So consider this both a technical walkthrough and a cautionary tale from someone who waited too long.

The Environment That Slowly Ate Our Standup

Picture this: three developers, one Python codebase, and a Monday morning that starts with a Slack message at 9:04 AM.

"Hey, the pipeline script is throwing a ModuleNotFoundError. Did something change?"

Nothing changed. Nothing ever changed. That was the whole problem.

We had a requirements.txt. We had virtual environments. We had a well-intentioned README with setup instructions that were already six weeks out of date. What we did not have was any guarantee whatsoever that the Python interpreter running on my Windows machine was looking at the same world as the one running on my colleague's MacBook or our third teammate's Ubuntu workstation.

Every developer's local machine is an ecosystem — years of installs, PATH entries, conflicting system packages, half-removed tools that left ghosts behind. You don't notice the weight of it until it starts affecting other people.

The final straw came on a Wednesday afternoon. We were two days from a demo. I'd spent the morning writing a data transformation function that worked perfectly. I pushed it. Twenty minutes later I got a message: "This is crashing for me immediately."

Same code. Different machine. Different Python minor version — 3.10.4 vs 3.10.11 — and a single library that handled a deprecation differently between them.

I spent three hours on that. Three hours, two days before a demo, on an environment problem.

That evening I opened a new branch and set up Docker.

Why Docker and Why VS Code Dev Containers Specifically

The idea is not complicated once you accept it: stop treating your interpreter as something that lives on your machine, and start treating it as something that lives in your project.

Docker lets you write a Dockerfile — essentially a recipe — that describes an environment precisely. Operating system, Python version, every dependency, every configuration detail. Docker builds that recipe into a container: an isolated, reproducible box. Your code runs inside that box. It runs the same way on every machine. It runs the same way in six months when someone dusts off the repo.

VS Code's Dev Containers extension is what makes this actually pleasant to work in. Instead of running Docker separately and juggling a local editor alongside it, VS Code connects directly into the container. Your terminal, your debugger, your IntelliSense, your interpreter — everything operates from inside the container. From the outside it feels exactly like working locally. From the inside, every single person on the team is running identical environments.

That's the shift that mattered to us: the environment stops being invisible infrastructure that lives on individual laptops and becomes an explicit, version-controlled part of the project. When the environment breaks, you fix the Dockerfile and you commit the fix. Everyone gets it.

The Setup, Done Properly

Let me walk through exactly what I built. I'll explain the reasoning behind each piece rather than just dropping files at you.

Project structure when we were done:

my-project/
├── .devcontainer/
│   └── devcontainer.json
├── Dockerfile
├── requirements.txt
└── main.py


The Dockerfile

# Use an official Python runtime as a base image
FROM python:3.14.3-slim

# Optional: tools you may want during development
RUN apt-get update \
    && apt-get install -y --no-install-recommends git curl \
    && rm -rf /var/lib/apt/lists/*

# Set the working directory in the container
WORKDIR /app

# Copy the requirements file into the container (if you have one)
COPY requirements.txt ./

# Install any needed Python packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy code last (better layer caching)
COPY . .

A few deliberate choices here worth explaining.

I used python:3.14.3-slim rather than python:3.14.3. The slim variant strips out things like documentation files and build tools that we don't need at runtime. It keeps the image smaller and faster to pull, which matters when someone is spinning this up for the first time on a slow connection.

The WORKDIR /app line sets the working directory inside the container to a clean, predictable path. When VS Code mounts your project into the container, it maps to this location. No path confusion, no surprises.

Copying requirements.txt before anything else is intentional. Docker builds in layers, and it caches each layer. If your requirements haven't changed but your code has, Docker doesn't re-run pip install on every rebuild — it uses the cached layer. This makes iteration fast.

Requirements File

I added the libraries we expected to use:

fastapi
uvicorn
pydantic
numpy
pandas
scikit-learn
langchain
openai
requests
python-dotenv
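One caveat worth flagging: an unpinned list like this still leaves room for version drift between image builds, because each rebuild pulls whatever is latest. Running `pip freeze > requirements.txt` inside the container locks things down. The version numbers below are purely illustrative, not the ones we shipped:

```
# Illustrative pins only — generate real ones with `pip freeze` inside the container
fastapi==0.115.0
uvicorn==0.30.6
numpy==2.1.1
pandas==2.2.2
```

With pins in place, rebuilding the image on any machine, at any point in the future, installs exactly the same dependency set.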

Building docker image

  1. Open cmd or visual code terminal .

    1. Go to project files where it have Dockerfile from cmd for example cd .
  2. Write command show below:

docker build -t my-python-interpreter:3.14.3 . 


What This Command Does

It builds a Docker image from your project files and gives it a name.

Step-by-Step Explanation

  1. docker build — creates an image using the Dockerfile in the current directory (the trailing . is the build context).

  2. -t — lets you name (tag) your image.

In our case, my-python-interpreter is the image name and 3.14.3 is the version (tag).

Running the Docker Python Interpreter Image from VS Code

Let's add the Docker image interpreter to VS Code:

  1. Open the project folder that contains the Dockerfile in VS Code.

  2. Press Ctrl + Shift + P and type Dev Containers: Add Dev Container Configuration Files... in the command palette.

  3. Select Add configuration to workspace and press Enter.

  4. This creates the Dev Container configuration file in your project.
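For reference, here's a minimal sketch of what that configuration file (.devcontainer/devcontainer.json) can look like when the Dockerfile sits at the project root. The name and the extension list are assumptions you'd adapt to your own project; the build paths are relative to the devcontainer.json file itself:

```json
{
  // Shown in the VS Code window title
  "name": "my-project",

  // Build from the Dockerfile at the repo root (paths are relative to this file)
  "build": {
    "context": "..",
    "dockerfile": "../Dockerfile"
  },

  // Extensions installed inside the container, not on the host
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```

The file accepts comments (it's parsed as JSON with comments), so you can annotate decisions right where teammates will read them.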

Next Step: Run the Container

Running the container from the VS Code terminal

  1. Open a terminal in VS Code from Menu → Terminal → New Terminal.

  2. Run the command below:

docker run -it my-python-interpreter:3.14.3
  • docker run — creates and starts a new container from an image

  • -i — interactive; keeps STDIN open so you can type commands inside the container

  • -t — allocates a pseudo-terminal so the session behaves like a normal shell

  • my-python-interpreter — image name

  • 3.14.3 — version (tag)

3. Press Ctrl + Shift + P.

4. Type Dev Containers: Attach to Running Container....

5. Press Enter.

6. A new window opens, connected to the running container. Open a terminal there to inspect the interpreter.

7. Run which python and which pip — both should resolve to paths inside the container, such as /usr/local/bin/python.

Congratulations! Interpreter added successfully.
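Beyond which python, a quick check from inside the container's Python itself confirms you're on the pinned interpreter. The expected (3, 14) tuple below assumes the python:3.14.3-slim base image from the Dockerfile; adjust it if you pin a different version:

```python
import sys

# (3, 14) assumes the python:3.14.3-slim base image from the Dockerfile above
EXPECTED = (3, 14)

version = sys.version_info[:2]
print(f"Python {sys.version.split()[0]} at {sys.executable}")
print("Matches the container's pinned interpreter:", version == EXPECTED)
```

If that last line prints False, you're talking to a host interpreter, not the container's — usually a sign the window isn't actually attached.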

If you want to understand about docker images and containers you can follow article: Understand docker images and containers

What Changed for the Team

I rolled this out on a Thursday. By Friday morning, every developer had pulled the branch, reopened in the Dev Container, and was running code.

Here's the before and after that actually mattered:

Onboarding a new developer, before: roughly four hours. Install Python, set up a virtual environment, install dependencies, hit a conflict, debug the conflict, realise a system package was interfering, fix that, update the README with the step we'd missed.

Onboarding a new developer, after: clone the repo, open in VS Code, click "Reopen in Container," wait three minutes for the image to build, done. The fourth developer we brought on set herself up entirely independently without asking anyone a single question.

Debugging environment issues, before: a ritual of sharing Python versions, pip list outputs, PATH variables, and educated guesses.

Debugging environment issues, after: there are no environment issues. If the code runs in the container on my machine, it runs in the container on yours. When we do find a configuration problem, we fix the Dockerfile, push it, and everyone rebuilds. One fix, universally applied.

The most underrated change was what disappeared from our standups. We used to start every session with at least one throwaway exchange about someone's environment. That's gone. We talk about actual work now.

The Honest Takeaway

The environment is now a first-class part of the project. It lives in the repository. It's reviewed in pull requests. When someone joins the project in eight months, they don't inherit our laptop histories — they get a clean, defined, reproducible box. That's what professional software development should look like.

If you're still fighting your local Python setup — if your standup has a regular guest appearance from "it works on my machine" — this is the fix. It costs an afternoon. It pays back immediately.

If this article saved you debugging time or helped your team, let me know. Questions about the setup? Leave a comment and I'll answer every one.
