Terraform Project Design — A parallel with Puppet

Introduction

Like Puppet, Terraform provides low-level objects, written in a general-purpose programming language, that let you manage individual resources.

With Puppet, these low-level objects, the Puppet Types and Providers, written in Ruby, allow you to apply the CRUD paradigm to resources located on managed nodes.

With Terraform, these low-level objects, the Terraform Resources, written in Go, allow you to apply the CRUD paradigm to resources in an API.

On top of this, both solutions provide a DSL: the Puppet DSL for Puppet and the HashiCorp Configuration Language (HCL) for Terraform. They are both declarative languages, allowing you to organize your code for higher re-usability and maintainability.

It's all about abstraction and convergence

As in Puppet, HCL is only a wrapper around the low-level resources, the only objects that actually impact your infrastructure.

These resources ensure the convergence of your infrastructure. All other objects only allow you to organize your code in multiple abstraction layers: classes and defined types in Puppet; modules in Terraform.

In Puppet, you could do something like this:

node 'mynode' {
  user { 'foo':
    ensure => present,
  }
}

However, if you want code re-usability and easier maintenance, you should add abstraction layers between the node and the user resource.

Since 2012 and Craig Dunn's famous blog post about Designing Puppet, the most described (and probably most used) pattern in Puppet code is to use public modules from the Puppet Forge, then add two levels of abstraction on top with Roles and Profiles.

In Terraform, you could also do very simple things without any level of abstraction:

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "HelloWorld"
  }
}

However, if you want to maximize re-usability, you probably want to add some abstraction layers, similar to Roles and Profiles in Puppet. Terraform allows you to group your resources into modules that can then be instantiated multiple times in your code.

The Roles and Profiles pattern in Puppet usually implements the following layers:

node → role → profiles → component module → resources

In Terraform, we do not manage nodes, so the abstraction stack starts with the workspace instead:

workspace → root module → composition module → resource module → resource
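To make that stack concrete, here is one possible repository layout (the directory names are purely illustrative, not prescriptive):

```text
# Hypothetical layout mapping the Terraform abstraction stack
envs/prod/             # root module, selected per workspace
  main.tf              # instantiates composition modules
  backend.tf
modules/
  k8s-cluster/         # composition module (business logic)
    main.tf
    variables.tf
# resource modules (e.g. terraform-aws-modules/vpc/aws) are
# typically pulled from the registry rather than vendored here
```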

Analogies

Puppet vs Terraform abstractions

Puppet Types vs Terraform Resources

Terraform resources are like Puppet Types: they are the only low-level objects that really do something in your API (or on your Puppet node).

Puppet nodes vs Terraform workspaces

With Puppet, the entry point of your code is the node object which automatically includes the Main class.

Terraform implicitly creates a default workspace for your stack, which instantiates the root module where you can declare your infrastructure.
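You can see this implicit default workspace with the CLI:

```shell
$ terraform workspace list
* default

# Additional workspaces let you run parallel instances of the same stack
$ terraform workspace new staging
$ terraform workspace select default
```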

Puppet Roles vs Terraform root modules

The Roles and Profiles design pattern in Puppet suggests assigning one and only one Role class to your node.

In Terraform, you don't really have a choice: you automatically get one and only one root module per workspace.

Puppet Profiles vs Terraform composition modules

In Puppet, you would code your business logic into Profile classes so that you can reuse it, probably with different parameters, in your various Roles.

In Terraform, you could code your business logic into composition modules that you will instantiate in your root module.

Puppet Forge vs Terraform registry

Just like Puppet has its Forge, Terraform provides a registry for anyone to share their resource modules, which represent the first level of abstraction.

Examples

Puppet

Let's provision a node running a Docker engine and a Traefik reverse proxy. The role will include both profile classes.

Code Entry Point

The code entry point in Puppet is the manifests/site.pp file. There are multiple ways to achieve node classification in Puppet, one of which is to store the node's role in its certificate. It will then be exposed as a trusted fact.
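With this approach, the role is set on the agent in csr_attributes.yaml before the certificate request is generated; pp_role is one of Puppet's registered extension OID shorthands (the role value shown matches this example):

```yaml
# /etc/puppetlabs/puppet/csr_attributes.yaml (on the agent)
extension_requests:
  pp_role: docker_traefik_dev
```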

# manifests/site.pp
$role = $trusted['extensions']['pp_role']
node default {
  include "roles::${role}"
}

Role class

# modules/roles/manifests/docker_traefik_dev.pp
class roles::docker_traefik_dev {
  include profiles::docker
  include profiles::traefik
}

Profile classes

# modules/profiles/manifests/docker.pp
class profiles::docker {
  class { 'docker': }
  class { 'docker_compose': } # hyphens are not valid in Puppet class names
}

# modules/profiles/manifests/traefik.pp
class profiles::traefik (
  Boolean $enable_docker = true,
) {
  class { 'traefik':
    enable_docker => $enable_docker,
  }
}

Component modules

Here, we are using 3 component modules that could be published on the Puppet Forge:

  • docker
  • docker_compose
  • traefik

Terraform

In this Terraform example, we create an AWS VPC and an EKS cluster.

Root module

The Terraform root module is the entry point of your code, which declares your stack. Here, it only includes a composition module.

# main.tf
module "cluster" {
  source = "registry.example.com/example-corp/k8s-cluster/aws"

  vpc = {
    name            = "my-vpc"
    cidr            = "10.0.0.0/16"
    azs             = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
    private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
    public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
  }

  eks = {
    cluster_name    = "my-cluster"
    cluster_version = "1.18"
    worker_groups = [
      {
        instance_type = "m5a.large"
        asg_max_size  = 5
      }
    ]
  }
}

Composition module

The composition module is a reusable piece of code that contains your business logic. Here, it creates a VPC and deploys an EKS cluster on every instantiation.

# variables.tf
variable "vpc" {
  type = object({
    name            = string
    cidr            = string
    azs             = list(string)
    private_subnets = list(string)
    public_subnets  = list(string)
  })
}

variable "eks" {
  type = object({
    cluster_name    = string
    cluster_version = string
    worker_groups   = list(any)
  })
}


# main.tf
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = var.vpc.name
  cidr = var.vpc.cidr

  azs             = var.vpc.azs
  private_subnets = var.vpc.private_subnets
  public_subnets  = var.vpc.public_subnets

  enable_nat_gateway     = true
  single_nat_gateway     = false
  one_nat_gateway_per_az = true

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }

  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = var.eks.cluster_name
  cluster_version = var.eks.cluster_version
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id
  worker_groups   = var.eks.worker_groups
}

You probably want to test this composition module as thoroughly as possible: linting, syntax validation, integration tests…
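A minimal validation step in CI could look like this (tflint is one of several third-party linters you could substitute):

```shell
terraform fmt -check -recursive   # style check, fails on unformatted files
terraform init -backend=false     # fetch providers/modules without touching state
terraform validate                # syntax and internal consistency
tflint                            # optional third-party linting
```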

Resource Module

Here, we are using two public modules available on the Terraform registry (but of course you could load any module from any supported source):

  • terraform-aws-modules/vpc/aws
  • terraform-aws-modules/eks/aws
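When sourcing modules from the registry, it's good practice to pin a version constraint so upstream releases don't change your stack unexpectedly (the constraint shown is illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0" # illustrative constraint; pick one deliberately
  # ...
}
```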

Rules

Common rules for the Puppet's Roles and Profiles design pattern are:

  • A node includes one role, and one only,
  • A role includes one or more profiles to define the type of server,
  • A profile includes and manages component modules to define a logical technical stack,
  • Component modules manage low-level resources,
  • Modules should only be responsible for managing aspects of the component they are written for.

In Terraform, we can draw analogies to these rules:

  • A workspace includes one root module, and one only. Here you don't really have a choice because Terraform imposes this on you.
  • A root module includes one or more composition modules to define the type of infrastructure,
  • A composition module includes and manages resource modules to define a logical technical stack,
  • Resource modules manage low-level resources,
  • Modules should only be responsible for managing aspects of the component they are written for.

Conclusion

Even though Puppet and Terraform have different purposes, they share similarities and can benefit from the same architecture best practices.

Top comments (1)

Bryan Woolsey

Nice! Solid article.

I personally find Puppet's classes/defined types to be more flexible than Terraform's modules. But that may just be limitations of the cloud provider APIs, or the fact that Puppet has been around for 15+ years rather than Terraform's ~5 years.