Alain Airom

Comparing HashiCorp Packer with Docker! Does it even make sense?

Some clarifications on Packer (and beyond), following a discussion I had… 🤔

Recently, in a discussion with a business partner, a question came up: why did IBM spend the money and bother to buy HashiCorp when there are lots of free tools out there? And why are we even talking about Packer? I do everything with Docker and it’s more than sufficient!

Let’s see the bigger picture first… 👨🏻‍🏫

📦 TL;DR — What is HashiCorp Packer?

➡️ Standardized Image Creation

Packer is an open-source tool that allows for the creation of uniform, ready-to-use machine images (Golden Images) for various platforms (AWS AMIs, Docker images, OVF/VMDK, etc.) from a single source configuration file.

🔑 Key Utility
Packer’s primary utility is to ensure standardization and immutability of the infrastructure.

  • Infrastructure Immutability: Instead of modifying an existing server, the old instance is destroyed and replaced with a new one, built from the latest image. This simplifies updates and reduces the risk of configuration drift.
  • Deployment Speed: Server provisioning (bootstrap) is instantaneous because the base configuration, monitoring agents, and runtime are already “baked” into the image.
  • Built-in Security: Security patches and base configurations (system hardening) can be applied and tested at the image build time, ensuring that every launched instance is secure from the start.
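
To make this concrete, here is a minimal sketch of the day-to-day Packer CLI workflow, assuming an HCL template sits in the current directory; the exact file layout is illustrative.

  • Conceptual CLI sketch (shell)
# Install the plugins declared in the template's required_plugins block
packer init .

# Check formatting and validate the template before building
packer fmt .
packer validate .

# Build every source defined in the template; the same file can target
# AWS, Azure, Docker, etc. depending on the builders it declares
packer build .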

❗OK, having established the concept of immutable infrastructure with Packer, let’s now examine how it acts as the crucial upstream component in a modern, automated ecosystem, integrating seamlessly with complementary tools like Ansible and Vault.

🔒 HashiCorp Vault: Centralized Secret Management

Vault is a system for identity, secrets, and encryption management designed to handle sensitive data (passwords, API keys, certificates, tokens, etc.) securely, dynamically, and centrally.

🔑 Key Utility
Vault’s objective is to solve the problem of secret sprawl and enable the creation of dynamic secrets.

  • Secure Storage: Secrets are stored in an encrypted manner, protected by strict and audited access mechanisms. The seal/unseal system ensures they are not accessible without the appropriate keys.
  • Dynamic Secrets: Vault can generate temporary credentials and passwords (e.g., for a database or a cloud service) on demand, which automatically expire after use. This significantly reduces the attack surface in case of compromise.
  • Encryption as a Service: It can provide an application data encryption service without the application itself having to manage the encryption keys.
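
As a quick, hedged CLI illustration of the difference between static and dynamic secrets (the paths and the 'my-app-role' role name are placeholders, and the database secrets engine must already be configured):

  • Conceptual CLI sketch (shell)
# Static secret: write and read a key/value pair in the KV v2 engine mounted at 'secret/'
vault kv put secret/monitoring/production api_key="example-key"
vault kv get secret/monitoring/production

# Dynamic secret: ask Vault to generate short-lived database credentials on demand
# (assumes a database secrets engine with a role named 'my-app-role' exists)
vault read database/creds/my-app-role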

🤝 Complementarity with Ansible

The complementarity with Ansible is strong in both cases:

  • Packer + Ansible: Ansible is commonly used as a provisioner within the Packer image construction process.
  1. Packer launches a temporary VM.
  2. Ansible executes playbooks to install and configure software, the operating system, and dependencies.
  3. Packer captures the configured machine as an image (AMI, VHD, etc.) and destroys the temporary VM.
  • Vault + Ansible: Ansible needs access to secrets (SSH keys, database passwords) to execute its configuration tasks. Vault plugins exist for Ansible, allowing it to retrieve secrets dynamically and securely directly from Vault at runtime, without ever storing them in plain text in configuration files or the version control system.
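
A minimal sketch of the controller-side setup for the Vault + Ansible case, assuming the community.hashi_vault collection is used; the playbook name and token value are placeholders.

  • Conceptual CLI sketch (shell)
# Install the Vault collection and its Python dependency on the Ansible controller
ansible-galaxy collection install community.hashi_vault
pip install hvac

# Point Ansible at Vault; in practice the token comes from a short-lived login,
# never from a file committed to version control
export VAULT_ADDR="https://vault.mycorp.com:8200"
export ANSIBLE_HASHI_VAULT_TOKEN="<short-lived token>"

ansible-playbook site.yml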

💰 Strategic Value of IBM’s Acquisition of HashiCorp (Packer & Vault)
The acquisition of Packer and Vault strengthens IBM’s position on two essential strategic pillars: cloud-native security and end-to-end automation.

  • Security and Compliance (Vault): Vault is the leading tool for Secret Management in Cloud Native environments. Its integration with IBM’s security offerings (notably QRadar and consulting services) allows for the delivery of an end-to-end security solution, which is indispensable for clients in regulated sectors.
  • DevSecOps Pipeline (Packer): Packer complements the automation ecosystem (Terraform for provisioning, Ansible for configuration). It enables IBM to offer DevSecOps pipelines where images are secured and standardized from the start, facilitating the adoption of the Red Hat OpenShift architecture and AI by improving the quality of deployment environments.
  • Identity Management (Vault): Vault naturally integrates with IBM’s Identity and Access Management (IAM) solutions, providing a unified platform for authentication and authorization in modern and hybrid environments.

The goal of integrating Packer, Ansible, and HashiCorp Vault is to eliminate storing any sensitive data (passwords, API keys, database credentials) in plaintext or even in files encrypted at rest (like Ansible Vault files) within your code repository. Instead, the secrets are retrieved dynamically and are short-lived.

Secure Workflow: Dynamic Secrets During the Image Build (All Three Tools Together)

The process of building a secured “Golden Image” with dynamic secrets involves three main stages:

  1. Authentication: The Packer pipeline needs a secure method to authenticate with Vault. The best practice is using a machine-friendly method like Vault AppRole or Cloud Auth (e.g., AWS IAM, Azure MSI) to get a temporary Vault token.
  2. Provisioning: Packer launches the temporary VM (Builder) and passes the Vault token/credentials to the Ansible provisioner.
  3. Consumption: The Ansible playbook uses a dedicated module (like community.hashi_vault.vault_kv2_get) to retrieve the necessary secret from Vault using the temporary token, and then immediately uses that secret to configure the system.
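
For stage 1, here is a hedged sketch of how a CI/CD job might obtain the short-lived token via AppRole and hand it to Packer; ROLE_ID and SECRET_ID are assumed to be injected by the CI system.

  • Conceptual CLI sketch (shell)
export VAULT_ADDR="https://vault.mycorp.com:8200"

# 1. Authentication: exchange AppRole credentials for a short-lived Vault token
VAULT_TOKEN=$(vault write -field=token auth/approle/login \
  role_id="$ROLE_ID" secret_id="$SECRET_ID")

# 2. Hand the token to Packer, which exposes it to Ansible via the 'vault_token' variable below
packer build -var "vault_token=$VAULT_TOKEN" .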

Packer Configuration (Passing the Token)

In the Packer configuration, you ensure the required Vault token or AppRole credentials are provided to the Ansible provisioner as an environment variable (ANSIBLE_HASHI_VAULT_TOKEN) or as extra variables.

  • Conceptual Packer Configuration Snippet (HCL)
build {
  # ... (Builder configuration such as 'source.amazon-ebs.web-server', shown later in this article)

  provisioner "ansible" {
    playbook_file = "./ansible/image_provisioning.yml"

    # 1. Provide Vault Auth Token securely
    # NOTE: The Vault token (e.g., from an AppRole login) should be passed via 
    # a secure environment variable or a secret injection tool in your CI/CD system.
    # We use 'vault_token' here as a Packer variable.

    # Passing the Vault Token to Ansible as an environment variable
    ansible_env_vars = [
      "VAULT_ADDR=https://vault.mycorp.com:8200",
      "ANSIBLE_HASHI_VAULT_TOKEN=${var.vault_token}" 
    ]

    # Optional: skip tasks tagged 'debug' so secret values are not printed in the build log
    extra_arguments = ["--skip-tags", "debug"]
  }
}

variable "vault_token" {
  type    = string
  default = "" # Token is injected by the CI/CD pipeline, not stored here
  # In production, this token is typically generated just before the Packer run.
}

Ansible Playbook (Retrieving and Using the Secret)

The Ansible playbook now uses the token provided by Packer to connect to Vault and retrieve the secret dynamically.

  • Conceptual Ansible Playbook Snippet (image_provisioning.yml)
---
- hosts: all
  become: yes
  tasks:
    - name: Ensure the Python 'hvac' library is installed
      # 'hvac' must be available wherever the vault_kv2_get module executes.
      # Since the retrieval task below is delegated to localhost, the Ansible
      # controller (the machine running Packer) needs it as well.
      apt:
        name: python3-hvac
        state: present
      when: ansible_os_family == 'Debian'

    - name: Retrieve the monitoring agent API key from Vault
      community.hashi_vault.vault_kv2_get:
        engine_mount_point: secret   # KV v2 mount; the 'data/' prefix is added automatically
        path: monitoring/production
      register: monitoring_secret
      delegate_to: localhost # The module runs on the Ansible controller (the Packer machine)
      run_once: true

    - name: "DEBUG - show the retrieved secret (testing only! Use 'no_log: true' in production)"
      debug:
        msg: "Retrieved Key: {{ monitoring_secret.secret.api_key }}"


    - name: Configure the monitoring agent with the dynamic API key
      template:
        src: files/monitoring_agent.conf.j2
        dest: /etc/monitoring/agent.conf
        mode: '0600'
      vars:
        monitoring_api_key: "{{ monitoring_secret.secret.api_key }}"

    # Once the provisioning finishes, the temporary Vault token expires 
    # (or is revoked), and the key is baked into the immutable image's configuration file.

Security Benefit

  • Secrets Isolation: The sensitive API key is never checked into Git.
  • Dynamic Access: Ansible only has access to the secret for the few minutes needed to build the image (due to the token’s short TTL or AppRole policy).
  • Immutability: The final image contains the necessary configuration, but the raw secret is never exposed to the person running the build, nor is it stored in source control.

HashiCorp Packer vs. Docker

The comparison between HashiCorp Packer and Docker highlights two tools focused on image creation, but targeting different objectives and levels of abstraction in the infrastructure lifecycle.

| Characteristic          | HashiCorp Packer                                             | Docker                                                       |
| ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| **Primary Goal**        | Automated creation of **machine images** (Golden Images) for **multiple platforms** (VMs, Cloud, Containers). | Creation and execution of **isolated containers** from **Dockerfiles**. |
| **Artifact Type**       | Virtual machine images (AMI, VHD, OVA, etc.), **including Docker images**. | Container images (based on minimal OS layers) stored in a registry (Docker Hub, etc.). |
| **Abstraction Level**   | **OS and Infrastructure** (from the kernel to the base application). | **Process and Application** (shares the host kernel).        |
| **Definition Language** | **HCL** (HashiCorp Configuration Language) or JSON.          | **Dockerfile** (Docker-specific format).                     |

✨ Strengths and Complementarities

HashiCorp Packer

Packer is the tool of choice for Infrastructure Immutability and Multi-Cloud Portability of base images.

Strengths (Advantages)

  • Native Multi-Platform (Portability): Packer is designed to create identical images from a single source for varied targets (AWS, Azure, GCP, VMware, Docker, etc.). Changing the target only requires modifying the builder, not the provisioner.
  • VM Immutability: Allows for the creation of completely pre-configured virtual machine images (with OS, monitoring agents, runtime) for ultra-fast and reproducible deployment (the “Golden Image” concept).
  • Security and Consistency: Security updates and system hardening are applied at the image build time, ensuring that every launched instance is compliant from the start.
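
For instance, assuming a single template that declares both an AWS and a Docker source, the target can be selected at build time; the -only filters below are illustrative and depend on how the builds and sources are named.

  • Conceptual CLI sketch (shell)
# Build only the AWS AMI from the shared template (glob matches the amazon-ebs source)
packer build -only='*.amazon-ebs.*' .

# Build only the Docker image from the same template
packer build -only='*.docker.*' .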

Where Packer can be overkill or a poor fit

  • Complexity for Containers Only: If the sole need is to build a Docker image, directly using a Dockerfile (with docker build) is often simpler and more native than introducing Packer (see the sketch after this list).
  • Artifact Management: Packer only builds the image. It does not manage the image lifecycle or its deployment (that is the role of Terraform or Kubernetes).
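
As a point of comparison, the container-only path is just the native Docker workflow; the image name and tag are placeholders.

  • Conceptual CLI sketch (shell)
# Build the image straight from a Dockerfile in the current directory
docker build -t myregistry/backend-app:1.0.0 .

# Push it to the registry
docker push myregistry/backend-app:1.0.0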

🚛 Docker (Engine and Images)

Docker is the standard tool for Containerization, offering lightness, isolation, and speed.

Strengths (Advantages)

  • Lightweight and Fast: Docker containers start in a few seconds because they share the kernel of the host operating system. Their small size allows for high application density per server.
  • Application Isolation: Offers sufficient isolation for most applications, encapsulating the application and all its dependencies in a consistent environment (hence the “it works on my machine” aspect).
  • Ecosystem and Community: Docker benefits from a vast community, a large number of resources, and is the de facto standard for orchestrators like Kubernetes.
  • Layered Management: The layered structure of Docker images allows for efficient reuse and optimized downloads (only modified layers are transferred).
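
A small sketch of the layer model in practice, using a public base image:

  • Conceptual CLI sketch (shell)
# Pull a base image; only layers not already present locally are downloaded
docker pull ubuntu:20.04

# Inspect the individual layers that make up the image
docker history ubuntu:20.04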

Where Docker might fall short

  • Imperfect Isolation: Isolation is not as strong as with a complete virtual machine. The shared kernel represents a potential attack surface if a critical vulnerability is discovered.
  • Container Limited: Docker only builds container images. It cannot directly create VM images (AMI, VHD) which remain essential for the base infrastructure (e.g., the Kubernetes nodes themselves).
  • Orchestration Complexity: Although containers are simple to use locally, managing them at scale in production requires a powerful orchestrator like Kubernetes or Docker Swarm.

Practical Packer Illustrations

🖼️ VM Golden Image Creation (e.g., AWS AMI)

This process uses a builder specific to a virtualization or cloud platform (like Amazon EC2) and provisioners to install all the necessary base software onto the OS. This creates a fully pre-configured “Golden Image” for VMs.

Goal: Create a secure, hardened Amazon Machine Image (AMI) for a web server.

  • Conceptual Packer Configuration (HCL)
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1.0"
    }
  }
}

source "amazon-ebs" "web-server" {
  # 1. Builder: Defines the source OS and target platform
  region        = "us-east-1"
  source_ami    = "ami-0abcdef1234567890" # Base Linux OS image (e.g., Ubuntu LTS)
  instance_type = "t2.micro"
  ssh_username  = "ubuntu"
  ami_name      = "web-server-golden-image-{{timestamp}}"
}

build {
  name = "web-vm-build"
  sources = [
    "source.amazon-ebs.web-server"
  ]

  # 2. Provisioner: Installs and configures software on the temporary VM
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
      "echo 'NGINX installed and secured.' | sudo tee /var/www/html/index.html",
      # Example of security hardening provisioner:
      "sudo usermod -s /sbin/nologin defaultuser"
    ]
  }

  # 3. Provisioner: Optionally, use Ansible for more complex configuration
  provisioner "ansible" {
    playbook_file = "./playbooks/security_harden.yml"
  }
}

Workflow:

  1. Packer launches a temporary EC2 instance (VM) using the source_ami.
  2. The shell and ansible provisioners execute commands to install NGINX and apply security hardening on the running VM.
  3. Packer shuts down the VM and creates a reusable AMI artifact from its final state.
  4. Packer terminates the temporary VM.
  5. This AMI is now the standardized “Golden Image” ready for quick deployment.
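
As a follow-on, deploying from the golden image is then a single launch call (or a Terraform resource); the AMI ID below is a placeholder for the one Packer just produced.

  • Conceptual CLI sketch (shell)
# Launch an instance directly from the freshly baked golden image
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --count 1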

🐳 Application Image Creation (e.g. Docker Image)

Packer can also use a Docker builder to create a container image. This is often done to maintain a consistent build process and use the same provisioning tools (like Ansible) that are used for VMs, ensuring automation consistency across the organization.

Goal: Create a standard Docker application image, potentially bypassing a Dockerfile to use a standard provisioner.

  • Conceptual Packer Configuration (HCL)
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = "~> 1.0"
    }
  }
}

source "docker" "app-image" {
  # 1. Builder: Defines the base Docker image (like 'FROM' in a Dockerfile)
  image  = "ubuntu:20.04"
  commit = true # Commit the changes made by the provisioners
}

build {
  name = "backend-app-build"
  sources = [
    "source.docker.app-image"
  ]

  # 2. Provisioner: Uses Ansible to install the application and dependencies
  provisioner "ansible-local" {
    playbook_file = "./playbooks/install_app.yml"
    role_paths = [
      "./ansible-roles/backend-app-role"
    ]
  }

  # 3. Post-Processor: Tags the final image (a 'docker-push' post-processor would publish it)
  post-processor "docker-tag" {
    repository = "myregistry/backend-app"
    tags       = ["1.0.0-{{timestamp}}"]
  }
}

Workflow:

  • Packer pulls the base Docker image (ubuntu:20.04).
  • The ansible-local provisioner runs the playbooks inside the temporary container to install the application code and its Python/Node dependencies.
  • Packer commits the container state, creating the final Docker image artifact.
  • The docker-tag post-processor tags the image; a separate docker-push post-processor could then push it to a container registry.
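
To close the loop, the resulting artifact is an ordinary Docker image that can be listed and run locally; the port mapping is illustrative, and the tag is whatever the docker-tag post-processor produced.

  • Conceptual CLI sketch (shell)
# The artifact is a regular Docker image in the local image store
docker images myregistry/backend-app

# Run it like any other container (replace <timestamp> with the generated tag)
docker run --rm -p 8080:8080 myregistry/backend-app:1.0.0-<timestamp>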

Key Difference: In the Docker example, you are managing the application layer on a base OS, whereas in the VM example, you are managing the OS and infrastructure layer itself.

🎯 Conclusion: Packer and Docker are not mutually exclusive

In reality, Packer and Docker are often used together in a modern delivery pipeline:

  • Packer can be used to create the base VM image (host OS) on which the Docker engine will be installed, ensuring that all container host servers are standardized.
  • Packer can also be used as a wrapper (builder) for Docker images themselves, using the same build process and provisioning tools (like Ansible) as those used for VMs, which ensures consistency in automation practices.

They simply target different layers: Packer manages the lower layer (the OS and the VM), while Docker manages the upper layer (the containerized application).

The choice between the two depends on the desired level of virtualization. Do you want a full VM (historical choice, often for security or legacy) or a lightweight container (modern choice, for density and speed)? If you need both, use them together.
