Managing Docker containers locally can sometimes feel like a juggling act, especially when your workstation's resources are stretched thin. The good news? You can host your own Docker provider in the cloud (AWS or GCP) and leverage Terraform to manage Docker images and containers remotely, all without the need for Docker Desktop on your local machine.
In this guide, I’ll walk you through hosting a Docker provider on AWS or GCP and demonstrate how to use Terraform to deploy and manage containers effortlessly. By the end, you'll have a hosted Docker provider acting as your container hub, giving you centralized control and a lightweight local setup. This setup is especially useful when data science teams need to share a single container runtime. In a follow-up article, I'll show how to set up a shared visualization platform on top of it.
Why Host a Docker Provider in the Cloud?
- Offload Resource Usage: Free up your local machine’s CPU and memory by hosting Docker remotely.
- Centralized Management: Use Terraform to manage containers declaratively from anywhere.
- Scalability: Seamlessly scale up resources (e.g., adding more CPUs or memory to your cloud instance).
- No Local Docker Installation: Manage Docker containers entirely through Terraform without requiring Docker Desktop.
- Better Collaboration: Share your hosted Docker environment with teammates.
Step 1: Set Up a Docker Host
We’ll create a virtual machine (VM) in either AWS or GCP and install Docker on it. Let’s get started:
1.1 On GCP
Run the following command to create a VM instance with Docker installed:
gcloud compute instances create docker-host \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --machine-type=e2-medium \
    --tags=docker-server \
    --metadata=startup-script='#! /bin/bash
    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io
    sudo systemctl start docker
    sudo systemctl enable docker'
This will:
- Spin up a VM.
- Install Docker.
- Start the Docker daemon.
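The startup script runs asynchronously after the VM boots, so give it a minute or two to finish. A quick sanity check over SSH (assuming the instance name docker-host from the command above):

gcloud compute ssh docker-host --command='docker --version && sudo systemctl is-active docker'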
1.2 On AWS
Run this command to launch an EC2 instance with Docker pre-installed:
# ami-0c55b159cbfafe1f0 is an Amazon Linux 2 AMI in us-east-1; AMI IDs are
# region-specific, so substitute the equivalent ID for your region.
aws ec2 run-instances \
    --image-id ami-0c55b159cbfafe1f0 \
    --count 1 \
    --instance-type t2.micro \
    --key-name YourKeyPair \
    --security-groups default \
    --user-data '#!/bin/bash
    yum update -y
    amazon-linux-extras install docker -y
    service docker start
    usermod -a -G docker ec2-user
    chkconfig docker on'
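run-instances prints the new instance's metadata as JSON, but the public IP may not be assigned yet. Once the instance is running, you can look it up; filtering by key name is just one way to find the instance, assuming the YourKeyPair name from the command above:

aws ec2 describe-instances \
    --filters "Name=key-name,Values=YourKeyPair" "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].PublicIpAddress" \
    --output text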
Step 2: Enable Remote Docker API
To allow Terraform to communicate with Docker, you need to enable Docker’s remote API by binding the Docker daemon to a TCP socket.
On your VM, edit the Docker service configuration:
# Append a TCP listener to the ExecStart line of the Docker unit
sudo sed -i 's|-H fd://|-H fd:// -H tcp://0.0.0.0:2375|' /lib/systemd/system/docker.service
sudo systemctl daemon-reload
sudo systemctl restart docker
Important:
- This configuration exposes the Docker API without authentication. For production setups, enable TLS or restrict access with a firewall.
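If you'd rather not edit the packaged unit file in place (sed edits there can be silently overwritten by package upgrades), a systemd drop-in achieves the same result. This is a standard systemd pattern, not anything Docker-specific:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

The empty ExecStart= line clears the packaged command before defining the new one; without it, systemd refuses to start the service with two ExecStart entries.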
Step 3: Configure Firewall Rules
3.1 On GCP
Allow access to Docker’s API (port 2375):
gcloud compute firewall-rules create allow-docker-tcp \
    --allow tcp:2375 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=docker-server
3.2 On AWS
Allow traffic on port 2375 for your EC2 instance:
aws ec2 authorize-security-group-ingress \
    --group-name default \
    --protocol tcp \
    --port 2375 \
    --cidr 0.0.0.0/0
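On both clouds, 0.0.0.0/0 exposes the unauthenticated Docker API to the entire internet, which is acceptable for a short-lived experiment but not much else. A safer default is to scope the rule to your own public IP. The checkip endpoint below is one option (any "what is my IP" service works), and the same /32 value can be passed to GCP's --source-ranges:

MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
    --group-name default \
    --protocol tcp \
    --port 2375 \
    --cidr "${MY_IP}/32"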
Step 4: Write a Terraform Script
Create a main.tf file to manage Docker resources via Terraform:
provider "docker" {
host = "tcp://<VM_PUBLIC_IP>:2375"
}
resource "docker_image" "nginx" {
name = "nginx:latest"
}
resource "docker_container" "nginx" {
image = docker_image.nginx.latest
name = "example-nginx"
ports {
internal = 80
external = 8080
}
}
resource "docker_volume" "shared_data" {
name = "shared_data"
}
resource "docker_container" "data_container" {
image = "busybox"
name = "data-container"
volumes {
volume_name = docker_volume.shared_data.name
container_path = "/data"
}
command = ["/bin/sh", "-c", "while true; do sleep 3600; done"]
}
Replace <VM_PUBLIC_IP> with the public IP of your VM.
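Before running Terraform, it's worth confirming the API is reachable from your workstation. Since the whole point is avoiding a local Docker install, plain curl against the Engine API's version endpoint is enough:

curl http://<VM_PUBLIC_IP>:2375/version

A JSON payload with the engine version means you're good; a timeout usually points back at the daemon configuration in Step 2 or the firewall rule in Step 3.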
Step 5: Deploy Docker Resources with Terraform
- Initialize Terraform:
terraform init
- Preview Changes:
terraform plan
- Apply Changes:
terraform apply
Terraform will:
- Pull the nginx:latest image from Docker Hub.
- Start an NGINX container, with port 8080 on your VM mapped to port 80 in the container.
- Create a shared Docker volume, mounted at /data in a long-running busybox container.
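Note that Step 3 only opened port 2375, so the NGINX container isn't reachable from outside the VM yet. If you want to see it in a browser, open port 8080 as well. Here's the GCP version; on AWS it's the analogous authorize-security-group-ingress call:

gcloud compute firewall-rules create allow-nginx-8080 \
    --allow tcp:8080 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=docker-server

curl http://<VM_PUBLIC_IP>:8080   # should return the NGINX welcome page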
Benefits of Using Terraform Without Local Docker
- No Local Setup Required: Manage everything remotely using Terraform and a cloud-hosted Docker provider. No need for Docker Desktop or a local Docker installation.
- Consistent Environments: Avoid the "works on my machine" problem by centralizing Docker management on the cloud.
- Declarative Management: Use Terraform to define your desired state, ensuring reproducibility and collaboration.
- Persistent Data: Shared volumes provide a way to persist data across containers and sessions.
- Simplified Collaboration: Share Terraform scripts and configurations with your team for consistent setups.
When to Use This Approach
- Team Environments: For collaborative development where team members need consistent Docker environments.
- Lightweight Local Setup: If you want to avoid installing Docker or managing images locally.
- Integrated Infrastructure Management: If you’re already using Terraform to manage other cloud resources.
Summary
Hosting your Docker provider on AWS or GCP gives you the freedom to manage containers declaratively, offload resource usage, and ensure consistency across environments. Combined with Terraform, this setup provides a powerful, scalable, and collaborative approach to Docker resource management.
By avoiding a local Docker setup, you simplify development workflows and reduce dependencies on local resources. For data science teams and full-stack developers alike, this approach offers a modern, cloud-first way to work with Docker efficiently.
Try it out and see how it simplifies your development workflow!