Batel Zohar for JFrog

Docker's New Download Rate Limits

The new Docker announcement could be a bit confusing, but in this blog post, I’ll try to summarize it and make it simpler to understand. On November 1st, Docker is planning to add a new subscription level, and here’s how this may affect us.

The Problem

There are two main issues Docker users will now be facing: new pull rate limits and an image retention policy.

Pull rate limits

Under the new limits, anonymous users get 100 pulls and authenticated (free) users get 200 pulls every 6 hours.

Why were rate limits introduced? Docker found that most users pull images at a rate you would expect for normal workflows, but a small number of anonymous users have an outsized impact: roughly 30% of all downloads on Docker Hub come from only 1% of its anonymous users.

The challenge is that pulling an image you already have locally still counts: even if no layers are actually downloaded, the pull is charged against your limit.
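To see where you stand against the limit, Docker documents a special ratelimitpreview/test image whose manifest endpoint returns your current quota in response headers. Here is a small Python sketch of that check; it assumes the third-party requests package, and the header names follow Docker's documentation at the time of writing, so they may change:

```python
# Sketch: check the current Docker Hub pull quota for an anonymous client.
# Not an official Docker tool; based on the endpoint Docker documents for
# inspecting rate limits. Requires the third-party "requests" package.
import requests

# 1. Request an anonymous pull token for the special ratelimitpreview/test repo.
token = requests.get(
    "https://auth.docker.io/token",
    params={
        "service": "registry.docker.io",
        "scope": "repository:ratelimitpreview/test:pull",
    },
).json()["token"]

# 2. Ask for the manifest and read the rate-limit headers from the response.
resp = requests.head(
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest",
    headers={"Authorization": f"Bearer {token}"},
)

# Values look like "100;w=21600", i.e. 100 pulls per 21600-second (6-hour) window.
print("Limit:    ", resp.headers.get("ratelimit-limit"))
print("Remaining:", resp.headers.get("ratelimit-remaining"))
```

Authenticating the token request with your Docker Hub credentials should show the 200-pull quota instead.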

From Scaling Docker to Serve Millions More Developers: Network Egress

Image retention policy

Images stored in free Docker Hub repositories whose manifest has not been pushed or pulled in the last 6 months will be removed starting in mid-2021. This policy does not apply to images stored by paid Docker Hub subscription accounts, Docker Verified Publishers, or Official Images.

Let's take the example of a free-subscription user who pushed a tagged image called "batelt/froggy:v1" to Docker Hub on Oct 21, 2019. If this tagged image is never pulled after it was pushed, it will be considered inactive when the new policy takes effect in mid-2021, and the image and any tag pointing to it will be subject to deletion.

According to Docker's documentation, Docker will also provide tooling, in the form of a UI and APIs, that will let users easily manage their images.
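Until that tooling is available, you can get a rough picture from the Docker Hub v2 API yourself. The sketch below lists a repository's tags with their last-pulled and last-pushed dates so you can spot candidates for deletion; the namespace and repository reuse the hypothetical example above, and the tag_last_pulled / tag_last_pushed field names are my assumption about the Hub API response, so verify them before relying on this:

```python
# Sketch: list the tags of a Docker Hub repository with their last pull/push
# dates, to spot images that may be considered inactive. The field names
# "tag_last_pulled" and "tag_last_pushed" are assumptions about the Hub v2 API.
import requests

NAMESPACE = "batelt"    # hypothetical user from the example above
REPOSITORY = "froggy"   # hypothetical repository from the example above

url = f"https://hub.docker.com/v2/repositories/{NAMESPACE}/{REPOSITORY}/tags"
data = requests.get(url, params={"page_size": 100}).json()

for tag in data.get("results", []):
    print(
        f"{NAMESPACE}/{REPOSITORY}:{tag['name']}",
        "last pulled:", tag.get("tag_last_pulled"),
        "last pushed:", tag.get("tag_last_pushed"),
    )
```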

The Solution

My favorite solution is to use JFrog Artifactory, an artifact repository manager. It lets us store and protect our Docker images in a private Docker registry and cache images from Docker Hub in a remote repository, so most pulls are served from our local cache and far fewer requests reach Docker Hub, as shown in the following diagram.


Why use Artifactory and how it works

JFrog Artifactory lets us host our own secure, private Docker registries and proxy external Docker registries. It also provides smart, checksum-based storage, which deduplicates identical binaries and makes the most of the available storage.
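To illustrate the idea behind checksum-based storage (a simplified sketch of the general concept, not Artifactory's actual implementation): every binary is stored once, keyed by its checksum, and repositories only hold references to it, so the same Docker layer pushed to several repositories occupies the disk space of one copy.

```python
# Simplified illustration of checksum-based (content-addressable) storage:
# identical blobs are stored once, keyed by their SHA-256 digest, while
# repositories only keep lightweight references to that digest.
import hashlib

blob_store = {}     # digest -> bytes (the single physical copy)
references = {}     # (repository, path) -> digest

def deploy(repository: str, path: str, data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    blob_store.setdefault(digest, data)       # stored only if not already present
    references[(repository, path)] = digest   # the repo just points at the blob
    return digest

layer = b"\x1f\x8b fake compressed layer bytes"
deploy("docker-local", "myapp/1.0/layer.tar.gz", layer)
deploy("docker-dev-local", "myapp/dev/layer.tar.gz", layer)

print(len(blob_store), "physical blob(s) serving", len(references), "references")
# -> 1 physical blob(s) serving 2 references
```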

  1. Choose an Artifactory version (Cloud or on-prem)
  2. Create Docker repositories (local, remote, and virtual)
  3. Configure advanced repository options

Artifactory version

If you don’t want to manage Artifactory yourself, you can use the SaaS version, which has a free (but limited) tier. Alternatively, you can use JFrog Container Registry, which supports Docker and Helm repositories.

Create a Docker repository

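If you prefer automation over the UI, the same repositories can be created through Artifactory's REST API (a PUT to /api/repositories/<repoKey> with a JSON configuration). The sketch below creates a local, a remote (proxying Docker Hub), and a virtual Docker repository; the instance URL, credentials, and repository keys are placeholders, and the configuration fields should be checked against the REST API documentation for your Artifactory version:

```python
# Sketch: create local, remote, and virtual Docker repositories via the
# Artifactory REST API. URL, credentials, and repository keys are placeholders.
import requests

ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"  # placeholder
AUTH = ("admin", "password")                                # placeholder

repositories = {
    # Local repository for images we build and push ourselves.
    "docker-local": {"rclass": "local", "packageType": "docker"},
    # Remote repository that proxies and caches Docker Hub.
    "docker-remote": {
        "rclass": "remote",
        "packageType": "docker",
        "url": "https://registry-1.docker.io",
    },
    # Virtual repository that exposes the local and remote ones
    # behind a single URL for the Docker client.
    "docker": {
        "rclass": "virtual",
        "packageType": "docker",
        "repositories": ["docker-local", "docker-remote"],
    },
}

for key, config in repositories.items():
    resp = requests.put(
        f"{ARTIFACTORY_URL}/api/repositories/{key}",
        json=config,
        auth=AUTH,
    )
    print(key, resp.status_code)
```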

Configure advanced repository options

Now we can easily configure advanced repository options, such as how long Artifactory waits before checking the remote repository for a newer version of a requested artifact. With local caching answering most requests, we save our Docker Hub pulls:

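These settings can also be scripted. In the sketch below, retrievalCachePeriodSecs and missedRetrievalCachePeriodSecs are my assumption for the REST-API names of the cache settings shown in the UI, so verify them against the documentation for your version; the instance URL, credentials, and repository key are placeholders:

```python
# Sketch: tune caching on the remote Docker repository via the REST API so
# that most pulls are answered from the local cache. Field names are assumed.
import requests

ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"  # placeholder
AUTH = ("admin", "password")                                # placeholder

config_update = {
    # Serve cached metadata for 6 hours before asking Docker Hub again.
    "retrievalCachePeriodSecs": 21600,
    # After a miss, don't re-query Docker Hub for 30 minutes.
    "missedRetrievalCachePeriodSecs": 1800,
}

resp = requests.post(
    f"{ARTIFACTORY_URL}/api/repositories/docker-remote",
    json=config_update,
    auth=AUTH,
)
print(resp.status_code)
```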

Learn more about Artifactory Pro, which supports 27 different package types, including Maven, npm, and much more.
