DEV Community



Installing private Python packages in Docker images

In this tutorial, we’ll look at how to install a Python package stored in a private repo in a Docker image

The problem

Let’s say you have a set of common Python utilities that you use across a number of different projects.

For example, suppose you have some scripts that generate random user data for testing various applications, and that you share across multiple projects.

Copying these scripts across multiple code bases is a nightmare from the perspective of maintenance, but these utilities may contain proprietary code that cannot be put somewhere public. So where can you put them?

The solution

One option is to package your utilities into a Python package, store the package in a private Python repository, then have the Docker image that contains your projects install these utilities from your repository. That way, you can easily control versioning of these utilities across all projects that have a dependency on them.

In this tutorial, we’ll look at how to create a Python package, store it securely in a private repository (in this case, Packagr), and write a Docker image that authenticates against your private Python repository, then pulls and installs your package.

For good measure, we’ll also store our built Docker image in Packagr’s private Docker registry.

A practical example

For the purposes of this tutorial, we are going to build a simple tool that generates random user data, which we can use to populate APIs, perform tests, etc. Your utilities might do something totally different, but you can use the same approach to create a distributable package to do whatever you need, and store it privately.

Creating your project

Let’s start by creating our package. This tutorial assumes that you already have python3 and virtualenv installed. Let’s create a folder called data-generator and a virtual environment:

# create a directory and cd into it
mkdir data-generator && cd data-generator

# create a virtual environment, and activate it
python3 -m venv env
source env/bin/activate

Next, let’s create a subdirectory called package that will store our code:

mkdir package

Let’s install Faker, a Python library that generates random data:

pip install Faker

And let’s create a file called process.py in the package folder (along with an empty __init__.py, so that package is importable as a Python package), which generates a random user every time it is called:

from faker import Faker
import json

fake = Faker()

def generate_person():
    # build a dummy person and print it as a JSON string
    print(json.dumps({"name": fake.name(), "email": fake.email(),
                      "password": fake.password(), "address": fake.address()}))

The above function simply outputs a dummy person in JSON format, which looks like this:

{
  "name": "Samuel Mendez",
  "email": "",
  "password": "*hMRsQBbK1",
  "address": "78287 Morgan Summit\nPhillipsstad, WV 38051"
}
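Since each person is emitted as a single JSON object, downstream test code can parse it straight back into a dict. A minimal sketch, using a hard-coded sample line rather than live Faker output (the values are made up):

```python
import json

# a sample line in the shape emitted by generate_person (values are fake data)
line = '{"name": "Samuel Mendez", "email": "sm@example.com", "password": "*hMRsQBbK1", "address": "78287 Morgan Summit"}'

person = json.loads(line)
print(sorted(person))  # → ['address', 'email', 'name', 'password']
```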

Next, let’s create a folder called bin in the root directory, and add a file called make-person. This is a simple script that will let you call your function from the shell:

#!/usr/bin/env python
from package import process

process.generate_person()

Finally, let’s create a file called setup.py in the root data-generator directory:

from setuptools import setup

setup(
    name='data-generator', # the name of the package
    version='0.1.0', # the package version
    packages=['package'], # contains our actual code
    description='a random person generator',
    scripts=['bin/make-person'], # the launcher script
    install_requires=['faker==2.0.2'] # our external dependencies
)

Our folder structure should now look something like this:

data-generator/
├── bin/
│   └── make-person
├── env/
├── package/
│   ├── __init__.py
│   └── process.py
└── setup.py
We’re now ready to build our package! Enter the following lines into the command line:

# install wheel (to build packages in the bdist_wheel format)
pip install wheel
# create the package
python setup.py bdist_wheel

The process will run, and a few additional folders will be created. The one we’re most interested in is the dist folder, which contains our built package.


You can check that our package works as expected by installing and running it as follows:

# install the local package
pip install dist/*
# call the script
make-person
{"name": "Alexandra Nelson", "email": "", "password": "*c706Hvc+H", "address": "625 Powers Orchard\nNorth Bonnietown, IN 04475"}

Uploading the package to our private repository

Now that we’ve built the package, we can upload it to our private repository. If you don’t already have a private repository to upload to, you can create one for free at Packagr by following the simple sign-up process. Once you’ve logged into your account, click on Create new package in the left hand menu to see your private repository URL — make a note of this, as you’ll need it later.

Now that you’ve got a private repository and a package, we can upload files to it. We do this using a tool called twine:

# install twine
pip install twine
# upload your built package to your repository (update the URL as necessary)
twine upload --repository-url <your-repository-url> dist/*

You’ll be prompted to enter credentials — just use the username and password you signed up for Packagr with. Once you’ve done that successfully, you’ll see your new package in your Packagr dashboard.

Installing your private package from inside a Docker container

Now that our package is securely stored in our repository, let’s create a Docker image capable of installing it. It’s important to remember that you should never store raw credentials in your Dockerfile, or anywhere else in your code for that matter. Instead, you should pass your credentials as environment variables at build time. With that in mind, let’s start by creating a file called requirements.txt in an empty folder, remembering to change the repository URL to your own:
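To make the "credentials as build-time variables" point concrete, here is a sketch of keeping the username and password in shell variables and interpolating them into the build command, rather than writing them into any file (the variable names and values here are hypothetical):

```shell
# hypothetical credentials, exported only in the current shell session
export PACKAGR_USER="myuser"
export PACKAGR_PASS="changeme"

# compose the build command without ever writing credentials to disk
BUILD_CMD="docker build -t dg-image --build-arg USER=${PACKAGR_USER} --build-arg PASS=${PACKAGR_PASS} ."
echo "${BUILD_CMD}"
```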

--extra-index-url <your-repository-url>
data-generator
faker==2.0.2
A requirements file is just a list of packages for pip to install. In this case, we are adding our private package, data-generator, and the public dependency, faker. Adding the --extra-index-url tells pip to look in our private repo, in addition to the public one.

Next, let’s create our Dockerfile:

FROM python:3.7-alpine
ARG USER
ARG PASS
RUN echo -e "machine <your-repository-host>\n    login ${USER}\n    password ${PASS}" > /root/.netrc
RUN chown root ~/.netrc
RUN chmod 0600 ~/.netrc
COPY requirements.txt /requirements.txt
RUN pip install -r requirements.txt
CMD make-person

Let’s take a detailed look at this, one line at a time:

  • FROM python:3.7-alpine tells Docker the base image to use — in this case we are using the lightweight alpine python distro, mostly to save time/space
  • ARG USER and ARG PASS defines variables that we will provide at build time — specifically, our Packagr username and password
  • RUN echo... creates a file called .netrc, which tells pip to use the Packagr username and password when connecting to your repository host. The two following lines set the permissions of this file correctly
  • COPY requirements... copies the requirements file to the Docker image
  • RUN pip... installs the dependencies in our requirements file.
  • CMD make-person just calls the script defined in our setup.py at run time
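For reference, the .netrc generated by the RUN echo step should end up looking like this (the host and credentials are placeholders for your own values):

```
machine <your-repository-host>
    login myuser
    password changeme
```

pip picks this file up automatically when it needs credentials for that host, which is why no password ever appears in requirements.txt.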

Now that we have our Dockerfile, let’s build the Docker image, substituting in your Packagr username and password:

docker build -t dg-image --build-arg USER=<your-username> --build-arg PASS=changeme .

If all goes well, your Docker image should build correctly! We can now upload it to our Docker registry. Go back to the Packagr interface, click on Docker registry, and make a note of your Docker registry URL - it should be similar to your Python package repository URL, but with a different domain, and with the hash ID in lowercase characters.

The first step is to login to docker — as usual, you’ll need to update the URL, username and password to your own

docker login <your-registry-url> -u <your-username> -p changeme

Next, you’ll need to tag the image you just built (again, change the URL):

docker tag dg-image <your-registry-url>/dg-image

Finally, you can push your tags:

docker push <your-registry-url>/dg-image

You should now see your image in your docker registry in Packagr. You can now pull this docker image from any other machine you are logged into with this command:

docker pull <your-registry-url>/dg-image

Top comments (1)

laurenttrk

Hi @chris,
Thanks for your post, I was wondering if you are aware that your credentials are accessible in your built image, both in the build history and in the .netrc file in your image.