We've now gotten to a topic I probably have the most experience with, and that is Packer. After Terraform, Packer was the next HashiCorp tool that I picked up. Thankfully, the syntax moved from JSON to HCL a couple of years ago at this point.
What is a Multipurpose Packer Template?
Before I get into the weeds, making hyper-generalized, multipurpose templates might be overkill. Really, my goal here is to write a base template which will work with multiple clouds and on-premises hosting solutions. Think of it as a way to jumpstart your image building process.
Really, what I would like to do here is build just a few templates which can handle all my Packer builds - much like a module in Terraform, they should be reusable, dynamic, and flexible.
My goal is two templates:
- One for base builds, which will run from a source cloud image or ISO, then register with HCP Packer.
- One for app builds, which will leverage HCP Packer to pick up the parent image and build upon it (see the sketch below).
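As a rough sketch of the app-build side, the hcp-packer-iteration and hcp-packer-image data sources can look up the parent image; the bucket name, channel, cloud, and region below are hypothetical placeholders, not values from my actual repo:

data "hcp-packer-iteration" "parent" {
  # Hypothetical bucket and channel names.
  bucket_name = "base-debian"
  channel     = "production"
}

data "hcp-packer-image" "parent" {
  bucket_name    = data.hcp-packer-iteration.parent.bucket_name
  iteration_id   = data.hcp-packer-iteration.parent.id
  cloud_provider = "aws"
  region         = "us-east-1"
}

source "amazon-ebs" "this" {
  # Build on top of the parent image registered in HCP Packer.
  source_ami = data.hcp-packer-image.parent.id
  # ...
}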
Additionally, I will only want a single template that will run both Windows and Linux builds. The easiest way to do this would be Ansible, since then you can use the same provisioner for both operating systems.
Limitations
The downside with Packer is some of its limitations, specifically with meta-arguments.
As much as I would like to have a count on a provisioner, that is not an option.
Workarounds
So there are a couple of workarounds.
Dynamic
Dynamic blocks are similar to Terraform's: you can run a for_each against the block, and it will loop through the input.
Here is an example of a vsphere-iso source block where I loop over a map variable to add disks to the image I am building. For example, if you want one disk just for the OS and another disk for application data, you can now loop them in a dynamic block.
variable "vsphere_storage_config" {
description = "Configuration for vSphere storage."
type = map(object({
disk_size = number
disk_thin_provisioned = bool
}))
default = {
0 = {
disk_size = 50000
disk_thin_provisioned = true
}
1 = {
disk_size = 20000
disk_thin_provisioned = true
}
}
}
The variable gets injected into the dynamic block below. If you leave the default, this creates two disks; if you need more or fewer, you can specify them in the respective var-file.
disk_controller_type = var.vsphere_disk_controller_type

dynamic "storage" {
  for_each = var.vsphere_storage_config
  content {
    disk_size             = storage.value.disk_size
    disk_thin_provisioned = storage.value.disk_thin_provisioned
  }
}
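If a build needs a different disk layout, the var-file just overrides the map. A hypothetical example adding a third disk:

vsphere_storage_config = {
  "0" = {
    disk_size             = 50000
    disk_thin_provisioned = true
  }
  "1" = {
    disk_size             = 20000
    disk_thin_provisioned = true
  }
  "2" = {
    # Hypothetical extra data disk.
    disk_size             = 100000
    disk_thin_provisioned = false
  }
}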
Additionally, you can use a dynamic block as a toggle. For example, depending on whether or not you are using HCP Packer, you can enable or disable the hcp_packer_registry block.
dynamic "hcp_packer_registry" {
for_each = var.hcp_packer_enabled ? [1] : []
content {
bucket_name = var.vsphere_bucket_name
description = var.vsphere_bucket_description
bucket_labels = merge(var.vsphere_hcp_bucket_labels, {
"role" = var.role
"os" = var.os
"os_version" = var.os_version
"os_type" = var.os_type
}
)
build_labels = merge(var.vsphere_hcp_build_labels, {
"packer_version" = packer.version
}
)
}
}
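The toggle itself is just a boolean variable. A minimal sketch of the declaration, with the default being my assumption:

variable "hcp_packer_enabled" {
  description = "Toggle registering this build with HCP Packer."
  type        = bool
  default     = false # Assumed default; flip it in a var-file.
}

When the variable is false, for_each iterates over an empty list, so the hcp_packer_registry block is never rendered and the build skips HCP Packer entirely.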
Dynamic blocks are useful where they are supported, but for provisioners you will need to rely on other solutions.
Only
So for my HashiConf talk, I decided to use Ansible, since it can run against both Windows and Linux, but using powershell and shell provisioners in the same build can be tricky.
This is the solution I came up with, and it has seemed to work so far: point only at the real build name when the OS matches, and at a nonexistent build name when it does not, so the provisioner is skipped.
# Probably not the best method to get this to work, but it works for now.
provisioner "shell" {
  only              = var.os_type == "linux" ? ["vsphere-iso.this"] : ["foo.this"]
  scripts           = var.build_shell_scripts
  environment_vars  = var.build_shell_script_environment_vars
  valid_exit_codes  = var.build_shell_script_exit_codes
  execute_command   = var.build_shell_script_execute_command
  expect_disconnect = var.build_shell_script_expect_disconnect
}
provisioner "powershell" {
only = var.os_type == "windows" ? ["vsphere.this"] : ["foo.this"]
scripts = var.build_powershell_scripts
environment_vars = var.build_powershell_script_environment_vars
use_pwsh = var.build_powershell_script_use_pwsh
valid_exit_codes = var.build_powershell_script_exit_codes
execute_command = var.build_powershell_script_execute_command
elevated_user = build.User
elevated_password = build.Password
execution_policy = var.build_powershell_script_execution_policy
}
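Since Ansible can target both operating systems, a single ansible provisioner could replace the split above entirely. A minimal sketch, assuming a playbook that branches internally on the discovered OS (the playbook path is hypothetical):

provisioner "ansible" {
  # One playbook for Linux and Windows; the playbook branches
  # on facts like ansible_os_family.
  playbook_file = "../../shared/playbooks/base.yml"
  use_proxy     = false
}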
Additional Support
There are a couple of other ways I keep my templates generalized.
Really, the biggest thing is the structure and layout of the repository I build my images from.
I have a packer directory which has three subdirectories:
- builds - where my Packer templates live - base and app.
- pkrvars - where my var-files live, which make the templates plug-and-playable.
- shared - anything which can be used across multiple Packer builds, no matter the source (AWS, Azure, GCP, vSphere) or the OS type and distribution (Linux or Windows, RHEL or Debian).
The tree below is a work in progress, which I will later reference, but it is close to the final product.
└── packer
    ├── builds
    │   ├── app
    │   │   ├── build.pkr.hcl
    │   │   ├── hcp.pkr.hcl
    │   │   ├── locals.pkr.hcl
    │   │   ├── plugins.pkr.hcl
    │   │   ├── source_aws.pkr.hcl
    │   │   ├── source_azure.pkr.hcl
    │   │   ├── source_gce.pkr.hcl
    │   │   ├── variables_aws.pkr.hcl
    │   │   ├── variables_azure.pkr.hcl
    │   │   ├── variables_common.pkr.hcl
    │   │   └── variables_hcp.pkr.hcl
    │   └── base
    │       ├── build.pkr.hcl
    │       ├── locals.pkr.hcl
    │       ├── plugins.pkr.hcl
    │       ├── source_aws.pkr.hcl
    │       ├── source_azure.pkr.hcl
    │       ├── source_gce.pkr.hcl
    │       ├── variables_aws.pkr.hcl
    │       ├── variables_azure.pkr.hcl
    │       ├── variables_common.pkr.hcl
    │       └── variables_hcp.pkr.hcl
    ├── pkrvars
    │   ├── os
    │   │   ├── linux
    │   │   │   ├── debian
    │   │   │   │   ├── base.pkrvars.hcl
    │   │   │   │   ├── nomad.pkrvars.hcl
    │   │   │   │   ├── packer.pkrvars.hcl
    │   │   │   │   ├── terraform.pkrvars.hcl
    │   │   │   │   └── vault.pkrvars.hcl
    │   │   │   └── rhel
    │   │   └── windows
    │   │       └── base.pkrvars.hcl
    │   └── sources
    │       └── sources.pkrvars.hcl
    └── shared
        ├── README.MD
        ├── bootstrap
        │   └── bootstrap_win.txt
        ├── files
        ├── playbooks
        │   └── readme.md
        └── scripts
            ├── debian
            │   ├── base.sh
            │   ├── deprovision-aws.sh
            │   ├── deprovision-azure.sh
            │   ├── deprovision-google.sh
            │   ├── docker.sh
            │   └── nomad.sh
            ├── rhel
            └── windows
                ├── base.ps1
                ├── deprovision-aws.ps1
                ├── deprovision-azure.ps1
                └── deprovision-google.ps1
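To show how the layout is meant to be used, here is roughly how I would kick off a base Debian build from inside builds/base; the exact invocation is illustrative:

# Run from inside packer/builds/base.
packer init .
packer build \
  -var-file=../../pkrvars/sources/sources.pkrvars.hcl \
  -var-file=../../pkrvars/os/linux/debian/base.pkrvars.hcl \
  .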
Shared Resources
When using shared scripts or files, I would recommend using environment_vars on the provisioners and making them specific to the build or source platform you are building on.
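As a sketch, a shared script can branch on a variable passed in per platform; the CLOUD variable name and var.cloud below are my own placeholders:

provisioner "shell" {
  # Pass the target platform into the shared script so one script
  # can serve every source. CLOUD is a hypothetical variable name.
  environment_vars = ["CLOUD=${var.cloud}"]
  scripts          = ["../../shared/scripts/debian/base.sh"]
}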
For files, use the templatefile function to make a more dynamic file based on the image. So instead of maintaining multiple user-data or autounattend files, you use the templatefile function to customize a single one.
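A minimal sketch, with the template path and its variables being hypothetical:

locals {
  # Render one shared template instead of keeping one file per image.
  user_data = templatefile("${path.root}/../../shared/bootstrap/user_data.pkrtpl.hcl", {
    hostname = var.vm_name
  })
}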
I am going to call it a night here. My little one is teething and deciding to keep us up at night, so I might try to get to bed.
My newest Packer project is here: plug-and-play-packer.
In it, I have all three major clouds - AWS, Azure, and GCP. I will also try to get vSphere in as well.
But more on that later; my wife will get on me for being up late soon.
cheers, lykins.