Creating a LAMP AMI using Packer and Salt

This is the first in a series of posts to create an Infrastructure as Code powered deployment of WordPress running on Amazon Web Services.

This one's going to be pretty short, partly down to the great tools on offer!

Why create an image? Why Packer? Why Salt?

Creating images with as much of your installation and configuration baked in is vital in a DevOps environment, where predictability and agility are key. For example, if you have an autoscaling group creating a pool of WordPress application servers, installing Apache, PHP and the MySQL client with a deployment script would mean a node takes too long to enter service. Preparing a customised AMI instead means the time to enter service is limited only by how long the EC2 instance takes to start.

Packer is part of the wider HashiCorp toolset for controlling the cloud via IaC. Packer can create images for a wide range of cloud and on-premise platforms. Being part of the same family of tools, there is a degree of similarity in how they work and how your infrastructure code is written.

Salt is one of several configuration management tools on the market. Having learned Puppet, Ansible and Salt, I have no real allegiance to any one tool. Greenfield environments are pretty rare, so you may well be restricted to your current toolset. The concepts used here are portable to other configuration tools, of which Packer supports many!

Packer and Salt Quickstart

Packer is extremely easy to get started with; like the rest of the HashiCorp products, it is simply a case of download, extract and run. Head over to my YouTube Channel to watch my how-to video.
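
For reference, on Linux the download, extract and run dance looks roughly like this (substitute the current version and URL from the Packer downloads page);

# Illustrative only - grab the current release from packer.io/downloads
wget https://releases.hashicorp.com/packer/1.4.1/packer_1.4.1_linux_amd64.zip
unzip packer_1.4.1_linux_amd64.zip
sudo mv packer /usr/local/bin/
packer version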

Clone the tutorial repo from https://gitlab.com/fluffy-clouds-and-lines/packer-and-salt-lamp-ami. You should have a structure that looks like this;

├── aws_vars.json
├── README.md
├── salt_tree
│   └── srv
│       ├── pillar
│       │   ├── apache.sls
│       │   ├── mysql.sls
│       │   └── top.sls
│       └── salt
│           ├── apache
│           ├── mysql
│           ├── php
│           └── top.sls
└── template.json

aws_vars.json

This is used to provide variable data, to avoid having to specify it on each invocation.
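
The variable names need to match those declared in template.json (aws_access_key and aws_secret_key), so a minimal aws_vars.json looks something like this, with your own keys substituted;

{
  "aws_access_key": "YOUR_ACCESS_KEY_ID",
  "aws_secret_key": "YOUR_SECRET_ACCESS_KEY"
}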

salt_tree

Contains the Salt declarations to install and configure our LAMP stack components. This is copied by Packer to the remote host during the build, then executed by Salt.

template.json

The Packer build declaration that specifies the base AMI to use, and how Salt should be invoked during the build process.

Packer Run

Running Packer is as simple as it gets. After cloning the repo, all you need to decide is whether or not to put your credentials into a file.

To be prompted for credentials (or to fall back to your AWS CLI credentials, if configured), run the following from the project root;

packer build template.json

or, if you have created a variables file;

packer build -var-file=./aws_vars.json template.json

The build should take around 5 minutes to create the base instance, apply the Salt configuration and generate the final AMI.

Inside template.json

Packer templates have 3 main sections (excluding Post-Processors, which are optional and for specific use cases);

Variables

Used to abstract sensitive or dynamic information out of your code, so it can be provided to the build at run time.

Builders

The heavy lifting of creating a machine from an appropriate base image to build upon and, post-provisioning, wrapping it up ready for use.

Provisioners

The way to actually apply changes to your build, using scripts or configuration management tools (Chef, Puppet, Ansible, Salt etc.).

Check out the Packer documentation for the latest list of available Builders and Provisioners.

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  }
...
}

Our variables are declared so they can be used in subsequent parts of the template. They can be provided at runtime via the command line, a JSON file, environment variables, Consul or Vault (cool eh?).
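
As an example of the environment variable route (not used in this repo), Packer's env function can be used in a variable's default value, so the variables block could pick up the standard AWS CLI environment variables instead;

{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  }
}

Next up in the template are the Builders;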

{
...
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "eu-west-2",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "*ubuntu-bionic-18.04*",
          "root-device-type": "ebs"
        },
        "owners": [
          "099720109477"
        ],
        "most_recent": true
      },
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "wordpress-ha-node {{timestamp}}"
    }
  ],
 ...
}

Our builder specifies that we want an EBS-backed AMI, based upon the latest Ubuntu 18.04 image.

{
...
  "provisioners": [
    {
      "type": "salt-masterless",
      "local_state_tree": "./salt_tree/srv/salt",
      "local_pillar_roots": "./salt_tree/srv/pillar",
      "salt_call_args": "pillar='{\"role\":\"builder\"}'"
    },
    {
      "type": "shell",
      "inline": [
        "rm -rf /srv/salt",
        "rm -rf /srv/pillar"
      ],
      "execute_command": "sudo sh -c '{{ .Vars }} {{ .Path }}'"
    }
  ]
}

We use two provisioners here (you can have multiple per build), and they run in sequence. The first uses Salt in masterless mode (no central server); it uploads the pre-defined Salt tree to the host and applies it. The second works around Terraform bug #20323, which stops Terraform re-running the salt-masterless provisioner against this image.

Inside the Salt Tree

I have found Salt a strange beast to learn; in some cases it is very easy to understand, but some of the terminology takes time to get used to. This is not designed to be a full Salt intro, but more to explain the decisions taken in the Salt definition this project uses.

Salt is a configuration management tool that takes configuration files and uses them to apply a desired state to a system. At a high level, Salt uses States to define how to apply the configuration, and Pillars to provide variable data; it is similar to Packer having a template and a separate variables file.
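
As a toy example (not taken from this repo, which uses the pre-built formulas described below), a State that installs Apache and pulls its package name from Pillar data might look like this;

# illustrative apache/init.sls - the real apache-formula is far more complete
apache_package:
  pkg.installed:
    - name: {{ salt['pillar.get']('apache:package', 'apache2') }}

apache_service:
  service.running:
    - name: apache2
    - enable: True
    - require:
      - pkg: apache_package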

/srv/salt/top.sls is the star of the show here. It defines which States apply to any given Salt minion.

base:
  'role:builder': 
    - match: pillar # Match on 'role' passed in as additional Pillar data via salt_call_args
    - php
    - php.mysql
    - php.mysqlnd
    - apache
    - apache.config
    - apache.vhosts.standard
    - mysql # We don't need MySQL Server (using RDS instead), but can't be removed presently due to bug
    - mysql.config
    - mysql.client

The top.sls used in our tree;

  • Will apply the PHP, Apache and MySQL states to the host. Each corresponding folder is a pre-built set of Salt states called a Salt Formula. All were sourced from https://github.com/saltstack-formulas.
  • 'role:builder' is a filter to decide which hosts to apply state to. In our next tutorial you will see how we use the same tree for different purposes based on role.
  • Each list item, e.g. php or mysql.client, maps to a folder or file in the Salt tree; periods mark subfolders. It is quite common for larger formulas to be split out like this.

/srv/pillar/top.sls is our Pillar configuration root. This;

  • Defines configuration for the states to be applied by /srv/salt/top.sls.
  • Some Salt Formulas have defaults that are sensible and therefore will not have a corresponding Pillar entry.

You will notice that in our Packer provisioner block we set salt_call_args to "pillar='{"role":"builder"}'". This provides supplementary Pillar information that can then be used to decide which parts of /srv/salt/top.sls are applied. There is a lot of flexibility around this, and there are other methods to filter this file.
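
For completeness, a Pillar top file generally just assigns the Pillar data files to minions; a minimal sketch of the shape of /srv/pillar/top.sls (the actual file in the repo may differ slightly) is;

base:
  '*':
    - apache
    - mysql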

I am certainly no Salt expert; I have simply taken pre-built Salt Formulas and wired them together to create my desired setup.

Wrapping Up

If you've got this far, you should hopefully have had a very quick intro to Packer and Salt, and successfully managed to build an image, like so;

    amazon-ebs: -------------
    amazon-ebs: Succeeded: 27 (changed=17)
    amazon-ebs: Failed: 0
    amazon-ebs: -------------
    amazon-ebs: Total states run: 27
    amazon-ebs: Total run time: 69.343 s
==> amazon-ebs: Provisioning with shell script: /tmp/packer-shell598436168
==> amazon-ebs: Stopping the source instance...
    amazon-ebs: Stopping instance
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating AMI wordpress-ha-node 1558710754 from instance i-08f3e01d313901737
    amazon-ebs: AMI: ami-0477fc2kjw982c28e81
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
eu-west-2: ami-0477fc2kjw982c28e81

Happy infrastructure coding! Comments and questions welcome below!
