<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David J Eddy</title>
    <description>The latest articles on DEV Community by David J Eddy (@david_j_eddy).</description>
    <link>https://dev.to/david_j_eddy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F32692%2F0eb7a1f2-c7ec-4631-9d83-3c5bd6530240.jpeg</url>
      <title>DEV Community: David J Eddy</title>
      <link>https://dev.to/david_j_eddy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/david_j_eddy"/>
    <language>en</language>
    <item>
      <title>My thoughts on the HashiCorp Infrastructure Automation Certification</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Thu, 28 Oct 2021 16:17:30 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/my-thoughts-on-the-hashicorp-infrastructure-automation-certification-3ngd</link>
      <guid>https://dev.to/david_j_eddy/my-thoughts-on-the-hashicorp-infrastructure-automation-certification-3ngd</guid>
      <description>&lt;p&gt;As the landscape technologies that keep the internet running has changes over the past 60 years so have the tools that manage the technology. Now in 2021 Terraform is one of the leaders for managing cloud resources as code, commonly called &lt;code&gt;Infrastructure as Code&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I started using Terraform (TF) around version 0.11 back in 2017. The first project I published with TF code was in 2018 (&lt;a href="https://github.com/davidjeddy/wordpress-terraform" rel="noopener noreferrer"&gt;https://github.com/davidjeddy/wordpress-terraform&lt;/a&gt; if you are curious). As my skills with Terraform have matured, so has the tool. But the core life cycle remains the same: write, plan, apply. Even as the tool passed the 1.x &lt;code&gt;production ready&lt;/code&gt; release milestone, the core workflow remained unchanged.&lt;/p&gt;

&lt;p&gt;Recently I found that &lt;a href="https://www.hashicorp.com/" rel="noopener noreferrer"&gt;HashiCorp&lt;/a&gt; has started providing certifications related to their tools; of course I jumped on the Terraform study track. The study plan was to go through the ACG resources, complete the &lt;a href="https://www.hashicorp.com/" rel="noopener noreferrer"&gt;HashiCorp&lt;/a&gt; resources, review my knowledge, and sit the exam. Having taken a number of certifications before, I figured the last step was going to be the easy one.&lt;/p&gt;

&lt;p&gt;Here is the list of resources I used to study.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.acloud.guru/course/using-terraform-to-manage-applications-and-infrastructure/dashboard" rel="noopener noreferrer"&gt;https://learn.acloud.guru/course/using-terraform-to-manage-applications-and-infrastructure/dashboard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.acloud.guru/course/hashicorp-certified-terraform-associate-1/dashboard%0A" rel="noopener noreferrer"&gt;https://learn.acloud.guru/course/hashicorp-certified-terraform-associate-1/dashboard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.hashicorp.com/certification/terraform-associate%0A" rel="noopener noreferrer"&gt;https://www.hashicorp.com/certification/terraform-associate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.hashicorp.com/collections/terraform/certification" rel="noopener noreferrer"&gt;https://learn.hashicorp.com/collections/terraform/certification&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The exam topics cover everything in the Terraform lifecycle, from the subcommands to state manipulation, plus a working knowledge of the Cloud / Enterprise offerings from &lt;a href="https://www.hashicorp.com/" rel="noopener noreferrer"&gt;HashiCorp&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Due to world events, the exam was online proctored. Finding a place to take the exam was a non-issue for me due to my current living arrangements. The monitoring personnel are &lt;em&gt;very&lt;/em&gt; strict about adhering to the guidelines. You have been warned.&lt;/p&gt;

&lt;p&gt;With all that said, I passed with a score in the mid 80s. Not amazing, but not bad either. Two weeks after the exam I was watching &lt;a href="https://hashiconf.com/global/" rel="noopener noreferrer"&gt;HashiConf Global 2021&lt;/a&gt; and saw that only about 12,000 certs had been issued globally so far. That means I am among the first 12,000 or so people to hold a certification from &lt;a href="https://www.hashicorp.com/" rel="noopener noreferrer"&gt;HashiCorp&lt;/a&gt;. Woot woot!&lt;/p&gt;

&lt;p&gt;Would I recommend this certification? If you like to validate your knowledge and increase your salary: yes, 100% yes. Especially if you work in the cloud, IT infrastructure, or application development.&lt;/p&gt;

</description>
      <category>hashicorp</category>
      <category>terraform</category>
      <category>sre</category>
      <category>certifications</category>
    </item>
    <item>
      <title>How to: AWS Service Endpoints via Terraform for fun and profit</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Wed, 01 Apr 2020 15:35:18 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/how-to-aws-service-endpoints-via-terraform-for-fun-and-profit-ba1</link>
      <guid>https://dev.to/david_j_eddy/how-to-aws-service-endpoints-via-terraform-for-fun-and-profit-ba1</guid>
      <description>&lt;p&gt;&lt;a href="https://blog.davidjeddy.com/2020/04/01/how-to-aws-service-endpoints-via-terraform-for-fun-and-profit/" rel="noopener noreferrer"&gt;Originally posted on my blog.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recently I found myself designing a system that had AWS Lambda functions inside a private VPC. But I needed to pass a payload from the output of the Lambda function to an AWS service that had to be publicly routable (specifically to &lt;a href="https://aws.amazon.com/sqs/" rel="noopener noreferrer"&gt;SQS&lt;/a&gt;). I found there are really only three options to solve this situation:&lt;/p&gt;

&lt;h2&gt;The Options:&lt;/h2&gt;

&lt;h4&gt;1) &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html" rel="noopener noreferrer"&gt;NAT Instance (Good)&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This solution involves operating a compute instance to act as a network address translation (NAT) resource. When resources inside the private subnet need to access a public DNS name, the traffic is routed through the NAT instance. This has the obvious disadvantage of needing to run a compute instance and being limited to that instance's hardware. This adds management and cost overhead that I really did not want to deal with. While the AMI Marketplace has pre-configured images available, I still did not want to manage additional hardware for one Lambda that is invoked sporadically.&lt;/p&gt;

&lt;p&gt;Here is what a NAT Instance network configuration looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.aws.amazon.com%2Fvpc%2Flatest%2Fuserguide%2Fimages%2Fnat-instance-diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.aws.amazon.com%2Fvpc%2Flatest%2Fuserguide%2Fimages%2Fnat-instance-diagram.png" alt="&amp;lt;br&amp;gt;
        NAT instance setup&amp;lt;br&amp;gt;
      "&gt;&lt;/a&gt;Credit: &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;2) &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html" rel="noopener noreferrer"&gt;NAT Gateway (Better)&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;The better option is to leverage the NAT Gateway service. Imagine letting AWS operate a NAT instance super-cluster, with the additional benefits of lower operating cost, easier setup, and higher network throughput. The downside is the inability to use Security Groups with it. This is the solution AWS recommends for current NAT requirements going forward.&lt;/p&gt;

&lt;p&gt;Here is what a NAT Gateway network configuration looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.aws.amazon.com%2Fvpc%2Flatest%2Fuserguide%2Fimages%2Fnat-gateway-diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.aws.amazon.com%2Fvpc%2Flatest%2Fuserguide%2Fimages%2Fnat-gateway-diagram.png" alt="&amp;lt;br&amp;gt;
          A VPC with public and private subnets and a NAT gateway&amp;lt;br&amp;gt;
        "&gt;&lt;/a&gt;Credit: &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;3) &lt;a href="https://docs.aws.amazon.com/general/latest/gr/rande.html" rel="noopener noreferrer"&gt;Service Endpoint (Best)&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;The new kid on the block, Service Endpoints enable access to supported services from within a private subnet, with major benefits over NAT implementations. Imagine connecting a network cable from your private subnet directly to the publicly routed resource. AWS does this by attaching an &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html" rel="noopener noreferrer"&gt;Elastic Network Interface (ENI)&lt;/a&gt; resource to the private subnet. The ENI even takes up an IP address in the CIDR range of the private subnet.&lt;/p&gt;

&lt;p&gt;This solution has three big benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Traffic stays inside your VPC, never traversing the public internet. This makes it faster, cheaper, and more secure.&lt;/li&gt;
&lt;li&gt;Similar to a NAT instance, Service Endpoints can have Security Groups applied to them.&lt;/li&gt;
&lt;li&gt;The infrastructure to operate and manage a Service Endpoint is incredibly minimal. Saving time, money, and operational effort.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the solution I wanted! Service Endpoints check all the requirement boxes I had.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Side note: Service Endpoint Interfaces are AWS service implementations of the &lt;a href="https://aws.amazon.com/privatelink/" rel="noopener noreferrer"&gt;Private Link&lt;/a&gt; feature. Service Endpoint Gateways are only available for S3 and DynamoDB. The Terraform configuration is minimally different between the two.&lt;/em&gt;&lt;/p&gt;
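&lt;p&gt;To illustrate how minimal that difference is, here is a hypothetical sketch of a Gateway-type endpoint for S3 (resource names here are illustrative, not part of this project). Note it attaches to route tables instead of subnets and does not take Security Groups:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource aws_vpc_endpoint s3 {
  service_name      = join(".", ["com.amazonaws", var.region, "s3"])
  vpc_endpoint_type = "Gateway"
  vpc_id            = aws_vpc.this.id

  # Gateway types route via route tables rather than ENIs in subnets
  route_table_ids = [
    aws_route_table.private_0.id
  ]
}&lt;/code&gt;&lt;/pre&gt;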

&lt;p&gt;Here is what a Service Endpoint network configuration looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.aws.amazon.com%2Fglue%2Flatest%2Fdg%2Fimages%2FPopulateCatalog-vpc-endpoint.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.aws.amazon.com%2Fglue%2Flatest%2Fdg%2Fimages%2FPopulateCatalog-vpc-endpoint.png" alt="Amazon VPC Endpoints for Amazon S3 - AWS Glue"&gt;&lt;/a&gt;Credit: &lt;a href="https://docs.aws.amazon.com/glue/latest/dg/vpc-endpoints-s3.html" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Let's Terraform This Bad Boy!&lt;/h2&gt;

&lt;h4&gt;&lt;a href="https://aws.amazon.com/vpc/" rel="noopener noreferrer"&gt;VPC&lt;/a&gt;&lt;/h4&gt;

&lt;p&gt;Leveraging &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; (0.12.24 at time of writing) I configured a basic VPC, a single AZ with a private subnet, and a wide open Security Group. Very basic networking here; nothing special, the core building blocks of any VPC. Note the VPC does not have any NAT resources nor an Internet Gateway.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Networking

## VPC

resource aws_vpc this {
  assign_generated_ipv6_cidr_block = false
  cidr_block                       = var.vpc_private_cidr
  enable_dns_hostnames             = true
  enable_dns_support               = true

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "vpc", random_string.this.result])
      Tech = "VPC"
      Srv  = "VPC"
    },
    var.tags
  )
}

## Route Table &amp;lt;-&amp;gt; Subnet associations

resource aws_route_table_association private_0 {
  subnet_id      = aws_subnet.private_0.id
  route_table_id = aws_route_table.private_0.id
}

## Route Tables

resource aws_route_table private_0 {
  vpc_id = aws_vpc.this.id

  depends_on = [
    aws_vpc.this
  ]

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "private-route", random_string.this.result])
      Tech = "Route"
      Srv  = "VPC"
    },
    var.tags
  )
}

## Subnets

resource aws_subnet private_0 {
  availability_zone               = var.availability_zone[0]
  vpc_id                          = aws_vpc.this.id
  cidr_block                      = var.vpc_private_cidr
  assign_ipv6_address_on_creation = false

  depends_on = [
    aws_vpc.this
  ]

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "subnet-a", random_string.this.result])
      Tech = "Subnet"
      Srv  = "VPC"
      Note = "Private"
    },
    var.tags
  )
}

 
resource aws_security_group private_lambda_0 {
  description = "Private Lambda SG"
  name        = join(var.delimiter, [var.name, var.stage, "private-subnet-lambda-0", random_string.this.id])
  vpc_id      = aws_vpc.this.id

  ingress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = [
      var.vpc_private_cidr
    ]
  }

  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = [
      var.vpc_private_cidr
    ]
  }

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "private-subnet-lambda-0", random_string.this.id])
      Tech = "Security Group"
      Srv  = "EC2"
    },
    var.tags
  )
}&lt;/code&gt;&lt;/pre&gt;

&lt;h4&gt;&lt;a href="https://aws.amazon.com/sqs/" rel="noopener noreferrer"&gt;SQS Queue&lt;/a&gt;&lt;/h4&gt;

&lt;p&gt;The first resource I needed to create after the base VPC resources was the SQS queue. Like many other services offered by AWS, the queue has a routable FQDN. Leverage proper security and IAM configuration! Apply the principle of least privilege to secure &lt;em&gt;all&lt;/em&gt; your resources. Remember: security first.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource aws_sqs_queue dead_letter_queue {
  name = join(var.delimiter, [var.name, var.stage, "sqs-dead-letter", var.random_string.id])

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "sqs-dead-letter", var.random_string.id])
      Tech = "SQS"
      Srv  = "SQS"
    },
    var.tags
  )
}

resource aws_sqs_queue this {
  name                      = join(var.delimiter, [var.name, var.stage, "sqs", var.random_string.id])

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dead_letter_queue.arn
    maxReceiveCount     = 4
  })

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "sqs", var.random_string.id])
      Tech = "SQS"
      Srv  = "SQS"
    },
    var.tags
  )
}&lt;/code&gt;&lt;/pre&gt;

&lt;h4&gt;Private &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;Lambda&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Next I created a Lambda function, assigning it to the private subnet and the security group contained inside the VPC. The &lt;a href="https://github.com/davidjeddy/aws_terraform_lambda_vpc_endpoint/blob/master/terraform/lambda/private_lambda_0/src/index.py" rel="noopener noreferrer"&gt;Lambda code&lt;/a&gt; is Python-based, and as such I used &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html" rel="noopener noreferrer"&gt;Boto3&lt;/a&gt; to handle creating the HTTPS request that will place the message in the queue. This will not work initially since we have not created the Service Endpoint.&lt;/p&gt;
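The linked repository holds the actual handler; as a rough, hypothetical sketch of the idea (the `queue_url_from_arn` helper is mine, not from the project), a handler along these lines uses the `REGION` and `QUEUE_ARN` environment variables set by the Terraform below to publish the event:

```python
import os


def queue_url_from_arn(arn: str) -> str:
    """Derive the SQS queue URL from its ARN, e.g.
    arn:aws:sqs:us-west-2:123456789012:my-queue ->
    https://sqs.us-west-2.amazonaws.com/123456789012/my-queue
    """
    _, _, _, region, account_id, name = arn.split(":")
    return f"https://sqs.{region}.amazonaws.com/{account_id}/{name}"


def lambda_handler(event, context):
    # Imported lazily so the pure URL helper above is usable without boto3
    import boto3

    sqs = boto3.client("sqs", region_name=os.environ["REGION"])
    resp = sqs.send_message(
        QueueUrl=queue_url_from_arn(os.environ["QUEUE_ARN"]),
        MessageBody=str(event),
    )
    return {"statusCode": 200, "body": resp["MessageId"]}
```

Until the Service Endpoint exists, that `send_message` call simply times out, because the private subnet has no route to the public SQS FQDN.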

&lt;pre&gt;&lt;code&gt;## data

data archive_file this {
  type        = "zip"
  source_dir  = "${path.module}/src"
  output_path = "${path.module}/file.zip"
}

## resources

resource aws_lambda_function this {
  filename         = data.archive_file.this.output_path
  function_name    = join("-", [var.stage, var.name, "private-lambda", var.random_string.id])
  handler          = "index.lambda_handler"
  role             = aws_iam_role.this.arn
  runtime          = "python3.7"
  source_code_hash = data.archive_file.this.output_base64sha256

  # NOTE Need to pass the REGION and QUEUE_ARN to enable Boto3 to find the correct queue
  environment {
    variables = {
      AWS_ACCT_ID = var.aws_acct_id
      QUEUE_ARN   = var.aws_sqs_queue.arn
      REGION      = var.region
    }
  }

  # NOTE This places the Lambda inside a VPC into the subnet of choice
  vpc_config {
    security_group_ids = var.security_group_ids
    subnet_ids         = var.subnet_ids
  }

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "private-lambda", var.random_string.id])
      Tech = "Python_3_7"
      Srv  = "Lambda"
    },
    var.tags
  )
}

## IAM role, policies, and attachments

resource aws_iam_policy this {
  name   = join(var.delimiter, [var.name, var.stage, "private-lambda-policy", var.random_string.id])
  path   = "/"
  policy = file("${path.module}/iam/policy.json")
}

resource aws_iam_role this {
  assume_role_policy = file("${path.module}/iam/role.json")
  name               = join(var.delimiter, [var.name, var.stage, "private-lambda-role", var.random_string.id])
}

resource aws_iam_role_policy_attachment this {
  role       = aws_iam_role.this.name
  policy_arn = aws_iam_policy.this.arn
}&lt;/code&gt;&lt;/pre&gt;

&lt;h4&gt;Public &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;Lambda&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;The second Lambda I made will consume the SQS queue. Notice the configuration does not include a VPC or subnet assignment? This means the Lambda runs outside my VPC, so it can reach the public AWS service endpoints directly.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;## data

data archive_file this {
  type        = "zip"
  source_dir  = "${path.module}/src"
  output_path = "${path.module}/file.zip"
}

## resources

resource aws_lambda_function this {
  filename         = data.archive_file.this.output_path
  function_name    = join("-", [var.stage, var.name, "public-lambda", var.random_string.id])
  handler          = "index.lambda_handler"
  role             = aws_iam_role.this.arn
  runtime          = "python3.7"
  source_code_hash = data.archive_file.this.output_base64sha256

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "public-lambda", var.random_string.id])
      Tech = "Python_3_7"
      Srv  = "Lambda"
    },
    var.tags
  )
}

## IAM role, policies, and attachments

resource aws_iam_policy this {
  name   = join(var.delimiter, [var.name, var.stage, "public-lambda-policy", var.random_string.id])
  path   = "/"
  policy = file("${path.module}/iam/policy.json")
}

resource aws_iam_role this {
  assume_role_policy = file("${path.module}/iam/role.json")
  name               = join(var.delimiter, [var.name, var.stage, "public-lambda-role", var.random_string.id])
}

resource aws_iam_role_policy_attachment this {
  role       = aws_iam_role.this.name
  policy_arn = aws_iam_policy.this.arn
}

## Subscription to SQS queue

resource "aws_lambda_event_source_mapping" "example" {
  event_source_arn = var.aws_sqs_queue.arn
  function_name    = aws_lambda_function.this.arn
}&lt;/code&gt;&lt;/pre&gt;

&lt;h4&gt;&lt;a href="https://docs.aws.amazon.com/general/latest/gr/rande.html" rel="noopener noreferrer"&gt;Service Endpoint&lt;/a&gt;&lt;/h4&gt;

&lt;p&gt;Here's the magic sauce! This Terraform resource connects the SQS queue via an ENI into my VPC's private subnet. Now the VPC will be able to route the private Lambda's outbound HTTPS request to the SQS service, even though the private Lambda has no apparent route to public services.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resource aws_vpc_endpoint sqs {
  private_dns_enabled = true
  service_name        = join(".", ["com.amazonaws", var.region, "sqs"])
  vpc_endpoint_type   = "Interface"
  vpc_id              = aws_vpc.this.id

  security_group_ids = [
    aws_security_group.private_lambda_0.id
  ]

  # Interface types get this. It connects the Endpoint to a subnet
  subnet_ids = [
    aws_subnet.private_0.id
  ] 

  tags = merge(
    {
      Name = join(var.delimiter, [var.name, var.stage, "service-endpoint-for-sqs", random_string.this.id])
      Tech = "Service Endpoint"
      Srv  = "VPC"
    },
    var.tags
  )
}

# NOTE Redundant with the subnet_ids argument on the endpoint above;
# use either subnet_ids or this association resource, not both
resource aws_vpc_endpoint_subnet_association sqs_assoc {
  subnet_id       = aws_subnet.private_0.id
  vpc_endpoint_id = aws_vpc_endpoint.sqs.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;&lt;a href="https://github.com/davidjeddy/aws_terraform_lambda_vpc_endpoint" rel="noopener noreferrer"&gt;Demo / Proof&lt;/a&gt;&lt;/h2&gt;

&lt;p&gt;Executing the private Lambda with a test payload and watching the logs, I can see the private Lambda executes successfully. Checking the public Lambda, I also see the payload from the private Lambda. It works!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2020%2F03%2Fimage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2020%2F03%2Fimage.png" alt=""&gt;&lt;/a&gt;CloudWatch logs of the &lt;strong&gt;private&lt;/strong&gt; Lambda invocation output. Notice the sqs.us-west-2.amazonaws.com:443 FQDN.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2020%2F03%2Fimage-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2020%2F03%2Fimage-1.png" alt=""&gt;&lt;/a&gt;CloudWatch logs output after the &lt;strong&gt;public&lt;/strong&gt; Lambda process the SQS queue message.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;While it may seem a little weird at first, Service Endpoints are a great way to attach supported AWS services to a VPC's private subnet(s). It's secure, fast, cheap, and best of all easy to manage.&lt;/p&gt;

&lt;p&gt;Have you used Service Endpoints before? Do you have questions? Let's talk in the comments below.&lt;/p&gt;

&lt;h2&gt;Resources&lt;/h2&gt;

&lt;ul&gt;&lt;li&gt;Here is the &lt;a href="https://github.com/davidjeddy/aws_terraform_lambda_vpc_endpoint" rel="noopener noreferrer"&gt;example Terraform project&lt;/a&gt; on GitHub.&lt;/li&gt;&lt;/ul&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How To: Database clustering with MariaDB and Galera.</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Wed, 27 Nov 2019 17:24:06 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/how-to-database-clustering-with-mariadb-and-galera-323g</link>
      <guid>https://dev.to/david_j_eddy/how-to-database-clustering-with-mariadb-and-galera-323g</guid>
      <description>&lt;h2&gt;The Situation&lt;/h2&gt;

&lt;p&gt;MariaDB, a fork of MySQL, has had multi-master clustering support since the initial version 10 release. However, the more recent releases have made it increasingly easy to set up a multi-master database cluster. By &lt;code&gt;easy&lt;/code&gt; I mean genuinely easy. But first, what is a &lt;code&gt;multi-master&lt;/code&gt; cluster?&lt;/p&gt;

&lt;p&gt;A multi-master cluster is one where each database instance is a &lt;code&gt;master&lt;/code&gt;, of course. The cluster contains no read replicas, slave nodes, or second-class instances. Every instance is a master. The upside is no replication lag; the downside is that every instance has to confirm writes. So, the big caveat here is that the network latency and throughput between all the instances need to be as good as possible. The cluster performance is limited by the slowest machine.&lt;/p&gt;

&lt;h2&gt;Preflight Requirements&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linux.org/" rel="noopener noreferrer"&gt;Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Terminal#Software" rel="noopener noreferrer"&gt;CLI Terminal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;(optional) &lt;a href="https://www.terraform.io/downloads.html" rel="noopener noreferrer"&gt;Terraform 0.12.x&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AWS account:&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html" rel="noopener noreferrer"&gt;API key and secret&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.htm" rel="noopener noreferrer"&gt;PEM key provisioned for EC2 access&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;(optional) I put together a small project that starts three EC2 instances. Feel free to use this to start up the example environment resources.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;git clone https://github.com/davidjeddy/database_clustering_with_mariadb_and_galera.git
cd ./database_clustering_with_mariadb_and_galera

export AWS_ACCESS_KEY_ID=YOUR_API_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_API_SECRET_KEY
export AWS_PEM_KEY_NAME=NAME_OF_YOUR_PEM_KEY

terraform init
terraform plan -out plan.out -var 'key_name='${AWS_PEM_KEY_NAME}
terraform apply plan.out&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once completed, the output should look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

db-a-key = maria_with_galera
db-a-ssh = ec2-3-84-95-153.compute-1.amazonaws.com
db-b-key = maria_with_galera
db-b-ssh = ec2-3-95-187-84.compute-1.amazonaws.com
db-c-key = maria_with_galera
db-c-ssh = ec2-54-89-180-243.compute-1.amazonaws.com&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If that is what you get, we are ready to move on to the next part.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F11%2Ftaylor-vick-M5tzZtFCOfs-unsplash-2560x1437.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F11%2Ftaylor-vick-M5tzZtFCOfs-unsplash-2560x1437.jpg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@tvick?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Taylor Vick&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/data-center?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Setup&lt;/h2&gt;

&lt;p&gt;Now that we have three EC2 instances up and running, we can dig into the configuration for each database service. Open three new terminals, so in total we will have four: localhost, DB-A, DB-B, and DB-C. Using SSH, log into the three database EC2 instances. After each login we should see something similar to the below.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ssh -i ~/.ssh/maria_with_galera.pem ubuntu@ec2-3-84-95-153.compute-1.amazonaws.com
The authenticity of host 'ec2-3-84-95-153.compute-1.amazonaws.com (3.84.95.153)' can't be established.
ECDSA key fingerprint is SHA256:rxmG0jtvI47tH3Yf3fAls9IsMPkho4DaRcSfA+NWNNs.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-3-84-95-153.compute-1.amazonaws.com,3.84.95.153' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1054-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Nov 27 16:07:48 UTC 2019

  System load:  0.0               Processes:           85
  Usage of /:   13.6% of 7.69GB   Users logged in:     0
  Memory usage: 30%               IP address for eth0: 172.31.40.213
  Swap usage:   0%

0 packages can be updated.
0 updates are security updates.



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo &amp;lt;command&amp;gt;".
See "man sudo_root" for details.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Take note of the &lt;code&gt;IP address for eth0&lt;/code&gt; on each instance. This is the private IP address that will be needed later. On each of the DB instances run the following commands to update the machine and install the MariaDB service and dependencies.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo apt-get update -y
sudo apt-get install -y mariadb-server rsync&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The output this time is very long, but the ending should look like this.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;...
Created symlink /etc/systemd/system/mysql.service → /lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/mysqld.service → /lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/multi-user.target.wants/mariadb.service → /lib/systemd/system/mariadb.service.
Setting up mariadb-server (1:10.1.43-0ubuntu0.18.04.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.31) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To be extra sure we have everything installed, let's check the version of both MariaDB and rsync.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ubuntu@ip-172-31-40-213:~$ mysql --version &amp;amp;&amp;amp; rsync --version
mysql  Ver 15.1 Distrib 10.1.43-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
rsync  version 3.1.2  protocol version 31

...

are welcome to redistribute it under certain conditions.  See the GNU
General Public Licence for details.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Since we need to configure clustering, go ahead and stop the MariaDB service on each instance using the standard stop command.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo systemctl stop mysql
sudo systemctl status mysql # always double check&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You may have noticed that the command is &lt;code&gt;mysql&lt;/code&gt; and not &lt;code&gt;mariadb&lt;/code&gt;. This is because MariaDB is a fork of MySQL and the MariaDB team wants to keep binary compatibility with MySQL. This helps projects migrate with the least amount of headache.&lt;/p&gt;

&lt;p&gt;Now do this same process on the DB-B and DB-C instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fcharles-pjAH2Ax4uWk-unsplash-2160x1440.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fcharles-pjAH2Ax4uWk-unsplash-2160x1440.jpg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@charlesdeluvio?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Charles 🇵🇭&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/data-center?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Configurations&lt;/h2&gt;

&lt;p&gt;Here is where the magic happens! We are going to create a new configuration file on each node at the location &lt;code&gt;/etc/mysql/conf.d/galera.cnf&lt;/code&gt;. Open the file and add the following content. Where the configuration says [DB-A IP], replace it with the PRIVATE IP address of that instance, which we saw when we logged into each instance in the previous section. Also replace [DB-A NAME] with the name of the cluster node: &lt;code&gt;DB-A&lt;/code&gt;, &lt;code&gt;DB-B&lt;/code&gt;, or &lt;code&gt;DB-C&lt;/code&gt;, depending on which EC2 instance the file is located on.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://[DB-A IP],[DB-B IP],[DB-C IP]"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="[DB-A IP]"
wsrep_node_name="[DB-A NAME]"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So, the DB-A configuration should look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://172.31.40.213,172.31.39.251,172.31.38.71"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="172.31.40.213"
wsrep_node_name="DB-A"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;All three configurations should be basically the same, with only the &lt;code&gt;node_address&lt;/code&gt; and &lt;code&gt;node_name&lt;/code&gt; adjusted for each node.&lt;/p&gt;
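&lt;p&gt;Since only two lines differ per node, the three files can be stamped out from one template. Below is a minimal shell sketch; the output file name is illustrative, and on a real instance you would write the result to &lt;code&gt;/etc/mysql/conf.d/galera.cnf&lt;/code&gt; with sudo.&lt;/p&gt;

```shell
#!/bin/bash
# Node-specific values; change these per instance (DB-A, DB-B, DB-C).
NODE_NAME="DB-B"
NODE_IP="172.31.39.251"
CLUSTER_IPS="172.31.40.213,172.31.39.251,172.31.38.71"

# Write the Galera configuration for this node. The file name here is
# illustrative; the real target is /etc/mysql/conf.d/galera.cnf.
cat > "galera-${NODE_NAME}.cnf" <<EOF
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://${CLUSTER_IPS}"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="${NODE_IP}"
wsrep_node_name="${NODE_NAME}"
EOF

echo "Wrote galera-${NODE_NAME}.cnf"
```

&lt;p&gt;Run it once per instance with the matching name and PRIVATE IP, and the only thing left is to copy the result into place.&lt;/p&gt;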

&lt;h2&gt;Bringing It All Together&lt;/h2&gt;

&lt;p&gt;This next step is very important: when starting the database on the first instance, aka &lt;code&gt;DB-A&lt;/code&gt;, we have to bootstrap the cluster. Since no other instances are running, the bootstrap process tells the database &lt;code&gt;hey, you're the first one, chill out&lt;/code&gt; when it does not detect any other cluster members. After that, though, &lt;code&gt;DB-B&lt;/code&gt; and &lt;code&gt;DB-C&lt;/code&gt; should join the cluster without an issue. To start this first node, use the following command on the DB-A instance.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ubuntu@ip-172-31-40-213:~$ sudo galera_new_cluster
ubuntu@ip-172-31-40-213:~$ sudo systemctl status mysql
● mariadb.service - MariaDB 10.1.43 database server
   Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-11-27 16:32:03 UTC; 5s ago
     Docs: man:mysqld(8)
           https://mariadb.com/kb/en/library/systemd/
...
Nov 27 16:32:03 ip-172-31-40-213 /etc/mysql/debian-start[5129]: Checking for insecure root accounts.
Nov 27 16:32:03 ip-172-31-40-213 /etc/mysql/debian-start[5133]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The important part here is the &lt;code&gt;Active: active (running)&lt;/code&gt;. Now that we have the first cluster node running, let's check the cluster status.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ubuntu@ip-172-31-40-213:~$ sudo mysql -u root -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster%';"
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 1                                    |
| wsrep_cluster_size       | 1                                    |
| wsrep_cluster_state_uuid | 71780aba-1133-11ea-a814-beaa932daf25 |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Hey, check that out! We have a single-instance cluster running. Awesome. Now we need to start DB-B and DB-C. Switch to each of those terminals and run not the bootstrap command, but the normal service start command.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ubuntu@ip-172-31-39-251:~$ sudo systemctl start mysql
ubuntu@ip-172-31-39-251:~$ sudo systemctl status mysql
● mariadb.service - MariaDB 10.1.43 database server
   Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-11-27 16:36:45 UTC; 11s ago
...
Nov 27 16:36:45 ip-172-31-39-251 /etc/mysql/debian-start[15042]: Checking for insecure root accounts.
Nov 27 16:36:45 ip-172-31-39-251 /etc/mysql/debian-start[15046]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Again, the &lt;code&gt;Active: active (running)&lt;/code&gt; is the important part. Switch back to DB-A and run the global status check command, just like we did after starting the DB-A service.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ubuntu@ip-172-31-40-213:~$ sudo mysql -u root -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster%';"
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 3                                    |
| wsrep_cluster_size       | 3                                    |
| wsrep_cluster_state_uuid | 71780aba-1133-11ea-a814-beaa932daf25 |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Yea buddy! A three node database cluster up and running!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstephen-dawson-qwtCeJ5cLYs-unsplash-2001x1440.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstephen-dawson-qwtCeJ5cLYs-unsplash-2001x1440.jpg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@srd844?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Stephen Dawson&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/data-center?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Confirmation&lt;/h2&gt;

&lt;p&gt;To be super sure everything is running and replicating as expected, let's execute a few SQL commands to change the state of the database and then check the new state. On &lt;code&gt;DB-A&lt;/code&gt;, let's add a new schema and table with a data point.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo mysql -u root -e "CREATE DATABASE testing; CREATE TABLE testing.table1 (id int null);INSERT INTO testing.table1 SET id = 1;"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now let's do a select statement on &lt;code&gt;DB-C&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ubuntu@ip-172-31-38-71:~$ sudo mysql -u root -e "SELECT * FROM testing.table1;"
+------+
| id   |
+------+
|    1 |
+------+&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;YES! The new schema, table, and data replicated from &lt;code&gt;DB-A&lt;/code&gt; to &lt;code&gt;DB-C&lt;/code&gt;. We can run the select command on &lt;code&gt;DB-B&lt;/code&gt; and see the same result! We can write to &lt;code&gt;DB-C&lt;/code&gt; and see it replicated on &lt;code&gt;DB-A&lt;/code&gt; and &lt;code&gt;DB-B&lt;/code&gt;. Each node takes reads and writes then replicates the changes to all the other nodes!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Boom&lt;/strong&gt;! A three node multi-master database cluster up and running! Log into any one of the instances (it does not matter which, since this is multi-master) and create a new schema. Then exit and check the status of the cluster again. See the state value change? Yea, replication at work!&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This is just the tip of the functionality iceberg that is database clustering. I have had to skip over a large number of topics like replication lag, placement geography, read-only replicas, binlog format, and much more. But this gives you a solid introduction to the concept of database clustering. Have fun!&lt;/p&gt;

&lt;h2&gt;Additional Resources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mariadb.org/" rel="noopener noreferrer"&gt;MariaDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ndimensionz.com/kb/what-is-database-clustering-introduction-and-brief-explanation/" rel="noopener noreferrer"&gt;Database Clustering Concepts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>howto</category>
      <category>database</category>
      <category>mariadb</category>
      <category>clustering</category>
    </item>
    <item>
      <title>How to: Delete a stubborn Kubernetes namespace.</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Wed, 25 Sep 2019 22:41:10 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/how-to-delete-a-stubborn-kubernetes-namespaces-2841</link>
      <guid>https://dev.to/david_j_eddy/how-to-delete-a-stubborn-kubernetes-namespaces-2841</guid>
      <description>&lt;p&gt;&lt;a href="https://blog.davidjeddy.com/" rel="noopener noreferrer"&gt;First posted on my blog.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The Situation&lt;/h2&gt;

&lt;p&gt;Recently I was introduced to a new project. The getting started documentation was decent and I was able to get the project started up. When it came time to clear the project out of my local machine's Kubernetes cluster, however, the namespace moved to the &lt;code&gt;terminating&lt;/code&gt; life cycle phase and stayed there for days. Fast forward a couple of weeks and I noticed it there, still terminating. It turns out the project had a &lt;code&gt;finalizer&lt;/code&gt; that was not responding back. Think of it like a remote service call; in this case, the remote service never responded. So below is a little script I put together to &lt;code&gt;force&lt;/code&gt; a namespace deletion.&lt;/p&gt;

&lt;h2&gt;The Code&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;#!/bin/bash
k8s_delete_ns=$1
echo "Provided namespace: ${k8s_delete_ns}..."
echo "Exporting namespace configuration..."
kubectl get namespaces -o json | grep "${k8s_delete_ns}"
kubectl get namespace "${k8s_delete_ns}" -o json &amp;gt; temp.json
echo "Opening editor..."
sleep 3
vi temp.json
echo "Sending configuration to k8s master for processing..."
curl -H "Content-Type: application/json" -X PUT --data-binary @temp.json http://127.0.0.1:8080/api/v1/namespaces/${k8s_delete_ns}/finalize
echo "Waiting for namespace deletion to process..."
sleep 12
kubectl get namespaces
echo "...done."&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So what does this do? Let's remove the shebang, echo, and pause statements, since we know the machine does not really do anything with those.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;k8s_delete_ns=$1
kubectl get namespaces -o json | grep "${k8s_delete_ns}"
kubectl get namespace ${k8s_delete_ns} -o json &amp;gt; temp.json
vi temp.json
curl -H "Content-Type: application/json" -X PUT --data-binary @temp.json http://127.0.0.1:8080/api/v1/namespaces/${k8s_delete_ns}/finalize
kubectl get namespaces&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;So what does this do exactly?&lt;/h2&gt;

&lt;p&gt;Ok, so what do we have here? Line one takes the first argument from the command invocation and assigns it to a local variable: &lt;code&gt;./script.sh stuck_namespace&lt;/code&gt;. So &lt;code&gt;stuck_namespace&lt;/code&gt; is the value that is assigned to &lt;code&gt;k8s_delete_ns&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;The next two commands get the current namespace configuration and write it to &lt;code&gt;temp.json&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;The fourth command opens the &lt;code&gt;temp.json&lt;/code&gt; configuration file using &lt;code&gt;vi&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With the configuration open, we want to remove any items listed in the &lt;code&gt;finalizers&lt;/code&gt; array. This is the real magic moment: removing the finalizers will allow Kubernetes to remove the namespace.&lt;/p&gt;

&lt;p&gt;Speaking of which, the next command uses &lt;code&gt;curl&lt;/code&gt; to PUT the newly edited configuration to the Kubernetes API. This updates the namespace with a configuration that no longer contains finalizers.&lt;/p&gt;

&lt;p&gt;And finally, &lt;code&gt;get namespaces&lt;/code&gt; will output the existing namespaces after the update. If all went as expected, the list will not contain the stuck namespace.&lt;/p&gt;
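&lt;p&gt;If hand-editing the JSON in &lt;code&gt;vi&lt;/code&gt; feels error-prone, the finalizer-stripping step can be scripted. The sketch below is hypothetical: it assumes &lt;code&gt;python3&lt;/code&gt; is available and uses a stand-in sample file in place of the real &lt;code&gt;kubectl get namespace -o json&lt;/code&gt; output; only the &lt;code&gt;spec.finalizers&lt;/code&gt; edit is the point.&lt;/p&gt;

```shell
#!/bin/bash
# Stand-in for the dump produced by `kubectl get namespace <ns> -o json`.
cat > temp.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": { "name": "stuck_namespace" },
  "spec": { "finalizers": ["kubernetes"] }
}
EOF

# Empty the spec.finalizers array in place, no editor required.
python3 - <<'EOF'
import json

with open("temp.json") as f:
    ns = json.load(f)

ns["spec"]["finalizers"] = []

with open("temp.json", "w") as f:
    json.dump(ns, f, indent=2)
EOF

cat temp.json
```

&lt;p&gt;The resulting &lt;code&gt;temp.json&lt;/code&gt; can then be PUT to the &lt;code&gt;/finalize&lt;/code&gt; endpoint exactly as the script above does.&lt;/p&gt;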

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;So there it is. There are a number of solutions floating around the internet, but the release cadence of Kubernetes renders some of them outdated in a matter of months. So let me state this: this solution works for Kubernetes 1.13. Other than that, I hope it helps you out as it helped me.&lt;/p&gt;

&lt;p&gt;P.S.: If you would like to keep up with Kubernetes helpers I make, checkout the repo on &lt;a href="https://github.com/davidjeddy/k8s_helper_scripts" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>operations</category>
      <category>howto</category>
    </item>
    <item>
      <title>Trial: 30 days with VueJs</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Wed, 14 Aug 2019 15:30:06 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/trial-30-days-with-vuejs-59g4</link>
      <guid>https://dev.to/david_j_eddy/trial-30-days-with-vuejs-59g4</guid>
      <description>&lt;p&gt;Given my roles over the last year plus writing code has be relegated to IaC, bash, or pipeline automation.s While nice that I still get to write logic the hunger to create something usable by people still nags in the back of my head. Given that the majority of the past decade or so has been server side; technologies like React, VueJS, Angular passed me by. Not that this is a problem, &lt;code&gt;frontend&lt;/code&gt; never really interested me personally. Due mainly to the early 2000s when a dev had to write for IE AND Firefox, desperately. I hate repeating code just for one vendor.&lt;/p&gt;

&lt;p&gt;As such, I have been listening to / watching VueJS courses during my free time. As many of you know, I am also studying for certifications, so the VueJS time has been limited. In the last 30 days I would ballpark that 40 hours of effort has gone into studying VueJS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2018%2F10%2Fce20a629653699.55fd4eb575bfe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2018%2F10%2Fce20a629653699.55fd4eb575bfe.jpg" alt="Everyone likes diagrams..."&gt;&lt;/a&gt;If you're interested, this is key usage of a keyboard.&lt;br&gt;Source: &lt;a href="http://chickart.de/portfolio-item/bachelorarbeit-visualisierung-der-tastaturnutzung-im-verlauf-der-bachelorarbeit/" rel="noopener noreferrer"&gt;http://chickart.de/portfolio-item/bachelorarbeit-visualisierung-der-tastaturnutzung-im-verlauf-der-bachelorarbeit/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Why VueJS? (the good)&lt;/h2&gt;

&lt;p&gt;VueJS has a number of points going for itself.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No JSX&lt;/li&gt;
&lt;li&gt;Not owned by FAANG (Facebook, Amazon, Apple, Netflix, Google)&lt;/li&gt;
&lt;li&gt;Active and engaged community&lt;/li&gt;
&lt;li&gt;Performant, small size; includes only what is required in the build&lt;/li&gt;
&lt;li&gt;Minimal setup for dev / prod environments&lt;/li&gt;
&lt;li&gt;IE 11 not supported; no added baggage for a dated and dead browser&lt;/li&gt;
&lt;li&gt;Large UI library options&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What, really!? (the bad)&lt;/h2&gt;

&lt;p&gt;While VueJS has its good parts, nothing is perfect. One of the biggest pains out of the box is data accessibility between components: functions calling functions calling properties calling functions, just to pass an atomic piece of data to a sibling component. (To be fair, many front end frameworks suffer this same access problem.)&lt;/p&gt;

&lt;h2&gt;Resources&lt;/h2&gt;

&lt;p&gt;Given the limited amount of time and attention I have been able to direct toward learning VueJS, it was important for me to get the most bang for the attention-minute buck. Here are some resources that really hit the spot.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.udemy.com/vuejs-2-the-complete-guide" rel="noopener noreferrer"&gt;https://www.udemy.com/vuejs-2-the-complete-guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vuejs.org/v2/guide/" rel="noopener noreferrer"&gt;VueJS Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vuejs.org/v2/examples/" rel="noopener noreferrer"&gt;VueJS Examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/"&gt;Dev.to community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/quick-code/top-tutorials-to-learn-vue-js-for-beginners-6c693e41091d" rel="noopener noreferrer"&gt;Top Tutorials To Learn Vue Js For Beginners&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://laracasts.com/series/learn-vue-2-step-by-step" rel="noopener noreferrer"&gt;Laracasts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Javascript, maybe not so bad...&lt;/p&gt;

&lt;h2&gt;Results&lt;/h2&gt;

&lt;p&gt;After maybe 40 hours of attention and effort, some hands-on practice, and exposure to the community, I think VueJS is worth looking at. It is flexible but not disorderly, powerful but not overwhelmingly complex, popular but not smothering. It can be included in nearly any standard web or native application, as a part or the whole; mobile native VueJS apps, anyone? To round it out, VueJS is performant and is on the adoption upswing.&lt;/p&gt;

&lt;p&gt;Would I trade it for another option if the other option is in place and working? No, of course not. Would I pick VueJS for a new feature or project if given the chance? Yes, yes I would.&lt;/p&gt;

</description>
      <category>vue</category>
      <category>discuss</category>
      <category>opinion</category>
      <category>trial</category>
    </item>
    <item>
      <title>Intro: k3s, a less needy Kubernetes</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Thu, 01 Aug 2019 13:08:39 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/intro-k3s-a-less-needy-kubernetes-3e53</link>
      <guid>https://dev.to/david_j_eddy/intro-k3s-a-less-needy-kubernetes-3e53</guid>
      <description>&lt;h2&gt;What is all this?&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; states "...&lt;code&gt;k3s&lt;/code&gt; is a intended to be a fully compliant production-grade Kubernetes distribution with [the following] changes...". But what does that &lt;code&gt;mean&lt;/code&gt; in layman's terms? It means Kubernetes has a lot of functionality that is not 100% required  in the use case of IoT, Edge computing, or lower powered hardware. For example the &lt;code&gt;kubeadm&lt;/code&gt; has some high hardware requirements. and runs very slowly on lower power / high latency hardware. Taking this further running &lt;code&gt;k8s&lt;/code&gt; worker nodes on an ARM chip is a practice in frustration. Enter &lt;code&gt;k3s&lt;/code&gt;, lower hardware is no longer a hard barrier for entry.&lt;/p&gt;

&lt;h2&gt;Up and Running&lt;/h2&gt;

&lt;p&gt;Installation is very straightforward; I would say &lt;em&gt;almost&lt;/em&gt; as easy as &lt;code&gt;apt-get install&lt;/code&gt;. But since &lt;code&gt;k3s&lt;/code&gt; is aimed at Edge and IoT, where Debian is rarely the chosen OS, the installer uses &lt;code&gt;curl&lt;/code&gt; instead.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;curl -sfL https://get.k3s.io | sh -&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Execute the above in a terminal, the output should look similar to the following.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[INFO]  Finding latest release
[INFO]  Using v0.7.0 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.7.0/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.7.0/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Running &lt;code&gt;ps aux | grep k3s&lt;/code&gt; confirms that k3s is indeed running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F08%2Fjoey-kyber-45FJgZMXCK8-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F08%2Fjoey-kyber-45FJgZMXCK8-unsplash.jpg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@jtkyber1?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Joey Kyber&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/speed-light?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Subsequent start-ups of the master node are accomplished via an even shorter command.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;echo 'Run the k3s server...'
sudo k3s server &amp;amp;
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get node&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If the Rancher team was going for &lt;code&gt;shortest commands possible&lt;/code&gt;, they win.&lt;/p&gt;

&lt;p&gt;Now on to worker nodes. Joining a worker node to the cluster is nearly as easy.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;echo 'Run a k3s worker node...'
# On a worker node, run the following.
# NODE_TOKEN is on the server @ /var/lib/rancher/k3s/server/node-token
sudo k3s agent --server https://master_node:6443 --token ${NODE_TOKEN}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That is it. Three different commands to get master and worker nodes installed, running, and joined together. Who said Kubernetes was confusing? (I am joking; k8s can be very confusing.)&lt;/p&gt;

&lt;h2&gt;An Example Use Case&lt;/h2&gt;

&lt;p&gt;As I wrote this posting, a colleague of mine reached out asking how I would put together a system that could monitor the status of, and receive data from, a wide range of in-situ IoT devices for agriculture. The system needs to be able to tell which devices are online and the health of the system, as well as receive data streams from the devices. Once the data is ingested and analyzed, alerts and reports would be generated. The in-situ devices would be low-powered and very isolated, likely run off solar with a local battery pack. During the &lt;code&gt;proof of concept&lt;/code&gt; phase, Raspberry Pi 3s with custom enclosures would be the deployed IoT devices. Bam! A perfect case for &lt;code&gt;k3s&lt;/code&gt;. When a device starts, its startup script would instruct it to join the cluster via a master node running on an ARM server (probably an AWS EKS master, really). Then machine and sensor data could be fed into Kinesis Firehose for analysis. Bam, done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F08%2Fzan-ilic-wGqz5YSqsfk-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2F2019%2F08%2Fzan-ilic-wGqz5YSqsfk-unsplash.jpg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@zanilic?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Zan Ilic&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/iot?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Wrap Up&lt;/h2&gt;

&lt;p&gt;I can hear you saying 'so soon?'. Well...yes. k3s really is that easy: one command to install, one command to start, one command to join. I would even dare to say it is easier to operate than Docker.&lt;/p&gt;

&lt;p&gt;What do you think? Does Kubernetes have a place with Edge and IoT devices, or is it the kind of overkill technology engineers so often tend toward? Let me know in the comments below.&lt;/p&gt;

&lt;h2&gt;Additional Reading&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;https://k3s.io/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rancher/k3s" rel="noopener noreferrer"&gt;https://github.com/rancher/k3s&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://rancher.com/blog/2019/2019-02-26-introducing-k3s-the-lightweight-kubernetes-distribution-built-for-the-edge/" rel="noopener noreferrer"&gt;https://rancher.com/blog/2019/2019-02-26-introducing-k3s-the-lightweight-kubernetes-distribution-built-for-the-edge/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://rancher.com/tags/k3s/" rel="noopener noreferrer"&gt;https://rancher.com/tags/k3s/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/" rel="noopener noreferrer"&gt;https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>iot</category>
      <category>edge</category>
    </item>
    <item>
      <title>Intro: Hashicorp `Packer`</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Tue, 30 Jul 2019 23:28:34 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/intro-hashicorp-packer-la9</link>
      <guid>https://dev.to/david_j_eddy/intro-hashicorp-packer-la9</guid>
      <description>&lt;p&gt;&lt;strong&gt;The problem space...&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In today's modern application development process, an application has many homes: local development, (Docker) containers, on-premise Linux servers, maybe even Unix mainframes for production. Often an application is developed using a container, then deployed to a cloud compute instance that is 'built like' the container. Then someone somewhere has to move the application to production and ensure it still works correctly. Boiling this problem down to the core, we see it is a problem of creating machine images that match a desired state. Docker container? Machine image. On-premise Linux integration server? Machine image. Production cloud host? Again, machine image. So exactly how are we supposed to create matching machine images over such a wide range of underlying systems?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hashi-who pack-what-?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.hashicorp.com/" rel="noopener noreferrer"&gt;Hashicorp&lt;/a&gt; is a development and management tool publisher. Most famous there Infrastructure as Code (IaC) tool &lt;a href="https://www.hashicorp.com/products/terraform/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;. &lt;a href="https://www.packer.io/docs/index.html" rel="noopener noreferrer"&gt;Packer&lt;/a&gt; is one of the tools available from them. Created specifically to ease the creation of machine images, Packer is super easy to learn and has a very low bearer of entry. Container, Linux Machines and even Virtual Box and VMWare images can all be created with Packer. In one solid day and you can publishing machine images like a pro!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5UiQiZn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.davidjeddy.com/wp-content/uploads/2019/07/brett-jordan-MFLNpz5FZRk-unsplash-1920x1440.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5UiQiZn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.davidjeddy.com/wp-content/uploads/2019/07/brett-jordan-MFLNpz5FZRk-unsplash-1920x1440.jpg" alt="" width="800" height="600"&gt;&lt;/a&gt;Image from Brett Jordan @ Unsplash: &lt;a href="https://unsplash.com/@brett_jordan" rel="noopener noreferrer"&gt;https://unsplash.com/@brett_jordan&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning Curve&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As stated above, the learning curve for &lt;a href="https://www.packer.io/" rel="noopener noreferrer"&gt;Packer&lt;/a&gt; takes one solid day. The &lt;code&gt;packer help&lt;/code&gt; CLI command returns 6 options. 6! &lt;code&gt;build&lt;/code&gt;, &lt;code&gt;console&lt;/code&gt;, &lt;code&gt;fix&lt;/code&gt;, &lt;code&gt;inspect&lt;/code&gt;, &lt;code&gt;validate&lt;/code&gt;, and &lt;code&gt;version&lt;/code&gt;. Pretty self-explanatory, right?! Try that with any other CLI application and scrolling up and down becomes a way of life. Moving on to the configuration files: they are plain-text JSON. Not even HCL, just plain JSON. Inside the configuration files are what I like to call three 'top level concepts':&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.packer.io/docs/builders/index.html" rel="noopener noreferrer"&gt;Builders&lt;/a&gt;: Who is building the machine image? Think of this similar to the &lt;code&gt;build&lt;/code&gt; stage in a CI/CD pipeline; but much more versatile. AWS, Docker, GCP, 1&amp;amp;1, OpenStack, Oracle, VMWare, and even custom builds are available. Checkout the &lt;a href="https://www.packer.io/docs/builders/index.html" rel="noopener noreferrer"&gt;complete list to see over on the docs&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.packer.io/docs/provisioners/index.html" rel="noopener noreferrer"&gt;Provisioners&lt;/a&gt;: This part of the configuration determines what is used to install dependencies, update the core OS, create users, set file permissions, and other configuration processes INSIDE the image.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.packer.io/docs/post-processors/index.html" rel="noopener noreferrer"&gt;Post-Processors&lt;/a&gt; (optional): When these are run after the machine image is created additional commands can be executed. Upload the image for storage to an artifact repository, re-package a a VM from VMWare to VirtualBox, run build time reports, and other post event actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is it; three main concepts, and one of them is optional!&lt;/p&gt;
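
&lt;p&gt;Day to day, the CLI workflow is as terse as the command list suggests. A minimal session looks something like the following (a sketch; the template file name &lt;code&gt;template.json&lt;/code&gt; is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Check the template for syntax and configuration errors first
packer validate template.json

# Run the builders, then provisioners, then post-processors
packer build template.json
&lt;/code&gt;&lt;/pre&gt;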

&lt;p&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Build a local Docker image and push it to an image repository&lt;/em&gt;.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"builders"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"commit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ubuntu:16.04"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"provisioners"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"shell"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"inline"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"apt-get update -y &amp;amp;amp;&amp;amp;amp; apt-get install -y python python-dev"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"post-processors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"repository"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"example-ubuntu-16.04-updated"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"tag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker-tag"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above Packer configuration, Packer pulls the Ubuntu 16.04 Docker image from Docker Hub via the &lt;code&gt;builders&lt;/code&gt; section. Next the &lt;code&gt;shell&lt;/code&gt; provisioner updates the system and installs Python; now it is ready for your Flask or Django app! Finally, the &lt;code&gt;post-processors&lt;/code&gt; section adds a repository name and tag to the image, readying it for a push to your image repository.&lt;/p&gt;
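
&lt;p&gt;One caveat: as written, the &lt;code&gt;docker-tag&lt;/code&gt; post-processor only tags the image locally. To actually push it, Packer lets you chain a &lt;code&gt;docker-push&lt;/code&gt; post-processor after the tag by nesting the two into a sequence (a sketch; the repository name is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;"post-processors": [
    [
        {
            "type": "docker-tag",
            "repository": "example-ubuntu-16.04-updated",
            "tag": "latest"
        },
        {
            "type": "docker-push"
        }
    ]
]
&lt;/code&gt;&lt;/pre&gt;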

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IG_h6REL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.davidjeddy.com/wp-content/uploads/2019/07/chuttersnap-xewrfLD8emE-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IG_h6REL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.davidjeddy.com/wp-content/uploads/2019/07/chuttersnap-xewrfLD8emE-unsplash.jpg" alt="" width="800" height="534"&gt;&lt;/a&gt;Photo by &lt;a href="https://unsplash.com/@chuttersnap?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;chuttersnap&lt;/a&gt; on &lt;a href="https://unsplash.com/search/photos/containers?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Building an AWS instance, updating the OS, and saving the image as an AMI.&lt;/em&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"variables"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"aws_access_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"aws_secret_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"builders"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"amazon-ebs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"access_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{{user `aws_access_key`}}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"secret_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{{user `aws_secret_key`}}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"source_ami_filter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"filters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"virtualization-type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hvm"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"root-device-type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ebs"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"owners"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"most_recent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"instance_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ssh_username"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ami_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"packer-aws-ami-{{timestamp}}"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"provisioners"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"shell"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"inline"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"sleep 30"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"sudo apt-get update"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"apt-get install mysql-server libmysqlclient-dev"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the configuration above we use local AWS credentials to build an image based on Ubuntu 16.04. In the &lt;code&gt;provisioners&lt;/code&gt; section the OS is updated and MySQL server is installed using the &lt;code&gt;shell&lt;/code&gt; provisioner. That is all. You now have an EC2-based MySQL server image.&lt;/p&gt;
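
&lt;p&gt;The empty &lt;code&gt;variables&lt;/code&gt; block above means the credentials are expected at build time; they can be supplied on the command line with &lt;code&gt;-var&lt;/code&gt; flags (a sketch; the file name and placeholder values are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;packer build \
    -var 'aws_access_key=YOUR_ACCESS_KEY' \
    -var 'aws_secret_key=YOUR_SECRET_KEY' \
    aws-ami.json
&lt;/code&gt;&lt;/pre&gt;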

&lt;p&gt;These are just two simple examples of what Packer can do. Imagine it as part of your CI/CD pipeline! It is even possible to build different images for different targets, with the same provisioner execution on each, at the same time! &lt;a href="https://www.packer.io/intro/getting-started/parallel-builds.html" rel="noopener noreferrer"&gt;Parallel builds&lt;/a&gt; are an amazing advanced feature to look into as you dig into the full feature set.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All told, Packer fits a nice niche in the build process: creating the underlying machine image. From there a provisioner such as shell scripts, Ansible, or PowerShell picks up and executes custom, application-specific commands. A fast, easy to understand, amazingly simple (and REPEATABLE) way to configure those sweet, sweet &lt;code&gt;golden images&lt;/code&gt;. Now there is no reason for base images to be out of date and unpatched.&lt;/p&gt;

&lt;p&gt;So what do you think? Can you see where Packer fits into your daily build cycle, or how it could simplify an effort-intensive image creation process? Let me know your thoughts in the comments below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Further Reading&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.packer.io/" rel="noopener noreferrer"&gt;https://www.packer.io/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.packer.io/intro/index.html" rel="noopener noreferrer"&gt;https://www.packer.io/intro/index.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devopscube.com/packer-tutorial-for-beginners/" rel="noopener noreferrer"&gt;https://devopscube.com/packer-tutorial-for-beginners/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.codeship.com/packer-vagrant-tutorial/" rel="noopener noreferrer"&gt;https://blog.codeship.com/packer-vagrant-tutorial/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://semaphoreci.com/community/tutorials/continuous-deployment-of-golden-images-with-packer-and-semaphore" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://semaphoreci.com/community/tutorials/continuous-deployment-of-golden-images-with-packer-and-semaphore" rel="noopener noreferrer"&gt;https://semaphoreci.com/community/tutorials/continuous-deployment-of-golden-images-with-packer-and-semaphore&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>automation</category>
      <category>hashicorp</category>
      <category>packer</category>
      <category>devops</category>
    </item>
    <item>
      <title>What do you prefer: general all-in-one tools or focused single domain tools?</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Sun, 28 Jul 2019 15:54:54 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/what-do-you-prefer-general-all-in-one-tools-or-focused-single-domain-tools-4m71</link>
      <guid>https://dev.to/david_j_eddy/what-do-you-prefer-general-all-in-one-tools-or-focused-single-domain-tools-4m71</guid>
      <description>&lt;p&gt;While refactoring an application recently I was thinking about its build pipelines and the tools available to modern app dev teams. Docker was heralded as the world savor (like every new tech is, but that is a different conversation); however the reality is multi GB images, version sprawl, and a lack luster meta-data system, and a security model that has been the bane of security teams for the past 5 years. It leaves a person longer for the &lt;code&gt;simpler times&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It got me thinking: is containerizing the application's run time worth it? If we apply the software concept of &lt;em&gt;single responsibility&lt;/em&gt; to the build process, each step would have a tool best suited for one domain of responsibility. The counter argument being "yet another tool to learn". But, honestly, everything is YAML configuration these days, so how hard would it really be?&lt;/p&gt;

&lt;p&gt;So the question: Do you prefer small, separate, specific tooling or one tool that &lt;em&gt;does it all&lt;/em&gt; to an acceptable level? What is the justification for your choice?&lt;/p&gt;




&lt;p&gt;Cover image by &lt;a href="https://unsplash.com/photos/C7B-ExXpOIE" rel="noopener noreferrer"&gt;https://unsplash.com/@soymeraki&lt;/a&gt;&lt;/p&gt;

</description>
      <category>bestpractices</category>
      <category>discuss</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Whats your daily tool chain these days?</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Mon, 15 Jul 2019 13:24:11 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/whats-your-daily-tool-chain-these-days-2boj</link>
      <guid>https://dev.to/david_j_eddy/whats-your-daily-tool-chain-these-days-2boj</guid>
      <description>&lt;p&gt;What are the tools you use daily today? Be as detailed or specific as you want.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ansible&lt;/li&gt;
&lt;li&gt;AWS&lt;/li&gt;
&lt;li&gt;Containerization (Docker mainly)&lt;/li&gt;
&lt;li&gt;GCP&lt;/li&gt;
&lt;li&gt;Kubernetes&lt;/li&gt;
&lt;li&gt;Linux (Debian / Ubuntu) &amp;amp; related CLI tools&lt;/li&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;VS Code&lt;/li&gt;
&lt;li&gt;Web browser (Chrome, Firefox)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Tagging with Terraform</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Tue, 25 Jun 2019 12:08:04 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/tagging-with-terraform-5hn3</link>
      <guid>https://dev.to/david_j_eddy/tagging-with-terraform-5hn3</guid>
      <description>&lt;p&gt;&lt;a href="https://blog.davidjeddy.com/2019/06/25/tagging-with-terraform/" rel="noopener noreferrer"&gt;Sourced from my blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Using cloud resources can be accelerating for the business, liberating for the engineering teams, and expensive for the bank account. If you have ever left a large compute or database instance running over a weekend (or accidentally committed API keys to GitHub), you know how easy it is to experience a large increase in operating costs for the month. Using the resource metadata &lt;strong&gt;tagging&lt;/strong&gt; functionality, responsible parties(1) can audit, track, and manage resources. It is even possible to enact automated actions based on tag values (or lack thereof).&lt;/p&gt;

&lt;h2&gt;Basic Usage&lt;/h2&gt;

&lt;p&gt;The basic tagging structure in Terraform is straightforward enough. If you are familiar with the JSON data interchange format you should recognize this immediately: a JavaScript inspired object declaration with quoted key/value pairs using a colon as separator and a comma as delimiter. Yay, another DSL to learn. #welcometowebdev&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;...
"aws_resource_type" "aws_resource" {
    ...
    tags {
        "key": "value",
        "Name": "Value",
        "department": "engineering",
        "team": "core_api",
        "app": "name",
        "env": "dev"
        ...
    }
    ...
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Nothing too difficult about that. Most (not all) resources in AWS support tagging in this manner: straightforward JSON-styled key/value pairs. Terraform even allows us to use variables in place of values, but not keys(2).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2FChad_Hagan_Art_and_Design%2FThe-New-Yorker%2FGonnorea_final_New_Yorker_f-1440x1440.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2FChad_Hagan_Art_and_Design%2FThe-New-Yorker%2FGonnorea_final_New_Yorker_f-1440x1440.jpg" alt=""&gt;&lt;/a&gt;Weird..&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;...
"aws_resource_type" "aws_resource" {
    ...
    tags {
        "key": "value",
        "Name": "Value",
        "department": "${var.dept_name}",
        "team": "${var.team_name}",
        "app": "${var.app_name}",
        "env": "${var.app_env}"
        ...
    }
    ...
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;"But David" you will say "application infrastructure can be very complicated. Will I have to copy / paste the 'tag' attribute all over the place?" Short answer; No. Long Answer: ...&lt;/p&gt;

&lt;h2&gt;Advanced Usage&lt;/h2&gt;

&lt;p&gt;Using some Terraform-fu we can assign the default set of key/value pairs to a map type variable (local or imported) and use that variable as the default tags data set. The format changes slightly from a JSON object to a Terraform (HCL) map data type; but not by much.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;variable "default_tags" { 
    type = "map" 
    default = { 
        key: "value",
        Name: "Value",
        department: "${var.dept_name}",
        team: "${var.team_name}",
        app: "${var.app_name}",
        env: "${var.app_env}"
  } 
}
...&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With the tags abstracted as a map, the tag attribute for the resource is minimized to a one-liner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2FChad_Hagan_Art_and_Design%2FIEEE%2FIEEE__Complicated_Fin.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.davidjeddy.com%2Fwp-content%2Fuploads%2FChad_Hagan_Art_and_Design%2FIEEE%2FIEEE__Complicated_Fin.jpg" alt=""&gt;&lt;/a&gt;Electrical circuits or IaC diagram? You decide!&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;...
"aws_resource_type" "aws_resource" {
    ...
    tags = var.default_tags
    ...
}
...&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The power level up is merging the default tags map variable with custom inline tags. For this we get fancy and start using the TF merge() function. Providing the default tag map variable as one function argument and the custom tags as a second map type argument, we get to use both the default provided tags AND custom inline tags! Boom, magic!&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;...
"aws_resource_type" "aws_resource" {
    ...
    tags = "${merge(map( 
            "Special_Key", "some special value", 
            "Special_Key_2", "some other special value",
            ...
        ), var.default_tags)}"
    ...
}
...&lt;/code&gt;&lt;/pre&gt;
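
&lt;p&gt;For reference, on Terraform 0.12+ the same merge can be written without the interpolation quoting or the map() helper; a sketch, assuming the same &lt;code&gt;var.default_tags&lt;/code&gt; map and a hypothetical resource:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;...
    tags = merge(
        {
            Special_Key   = "some special value"
            Special_Key_2 = "some other special value"
        },
        var.default_tags
    )
...&lt;/code&gt;&lt;/pre&gt;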

&lt;h2&gt;Wrap Up&lt;/h2&gt;

&lt;p&gt;Hashicorp continues to improve Terraform with each release. Say what you will about HCL being nearly JSON but with additional functionality; it is a powerful and feature-full tool set to manage a project's infrastructure. With an appropriate tagging strategy it also becomes a powerful way to track that infrastructure as well. &lt;/p&gt;

&lt;h2&gt;Notes&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I did &lt;strong&gt;not&lt;/strong&gt; say "managers" on purpose. In a true DevOps environment every developer, operator, and SysAdmin, on every team within every technology value stream, should be aware of and responsible for the system as a whole. Share the burden. No silos.&lt;/li&gt;
&lt;li&gt;Using map() it is possible to use a variable as a key. This is because map() evaluates the variable "key" before returning.&lt;/li&gt;
&lt;li&gt;This concept weirded me out at first. But after thinking about the mechanics it makes sense. Why would the last operation in a function be on a variable NOT being returned?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Additional Reading&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://weidongzhou.wordpress.com/2018/12/13/how-to-do-tagging-efficiently-in-terraform/" rel="noopener noreferrer"&gt;https://weidongzhou.wordpress.com/2018/12/13/how-to-do-tagging-efficiently-in-terraform/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://groups.google.com/forum/#!topic/terraform-tool/1yjotodsBog" rel="noopener noreferrer"&gt;https://groups.google.com/forum/#!topic/terraform-tool/1yjotodsBog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html" rel="noopener noreferrer"&gt;https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.scottlowe.org/2018/06/11/using-variables-in-aws-tags-with-terraform/" rel="noopener noreferrer"&gt;https://blog.scottlowe.org/2018/06/11/using-variables-in-aws-tags-with-terraform/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.scottlowe.org/2018/06/11/using-variables-in-aws-tags-with-terraform/" rel="noopener noreferrer"&gt;https://blog.scottlowe.org/2018/06/11/using-variables-in-aws-tags-with-terraform/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.terraform.io/docs/configuration/locals.html" rel="noopener noreferrer"&gt;https://www.terraform.io/docs/configuration/locals.html&lt;/a&gt; (0.12+)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>howto</category>
    </item>
    <item>
      <title>10 years of web development; 10 life lessons.</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Fri, 14 Jun 2019 14:37:52 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/10-years-of-web-development-10-life-lessons-11i6</link>
      <guid>https://dev.to/david_j_eddy/10-years-of-web-development-10-life-lessons-11i6</guid>
      <description>&lt;p&gt;Soured from &lt;a href="https://blog.davidjeddy.com" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ten years ago (2009) the economic recession was in full swing. Every week another bank would collapse, millions of homes would go into foreclosure proceedings, and I started my career as a professional web developer. With a degree in hand and a little hobby experience, I set out into the cruel, cruel job market.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HwzS3ulD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.davidjeddy.com/wp-content/uploads/Automotive_Engines/transmission-new.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HwzS3ulD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.davidjeddy.com/wp-content/uploads/Automotive_Engines/transmission-new.jpg" alt="" width="800" height="449"&gt;&lt;/a&gt;Anyone know what this is?&lt;/p&gt;

&lt;p&gt;It took nearly 6 months to find my first role as a 'jr web developer'. I later found out I was the only candidate who could create a working submit form. Yea, a 4 year degree to show off something I knew _before_ university. Soon I'll be an Infra-Engineer for a payment processor, after spending a couple years in the SRE/DevOps consulting realm. Here are the 10 lessons I have learned over the last 10 years in information technology.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge is a resource best shared.&lt;/li&gt;
&lt;li&gt;The end user is always correct; conversely, the end user is never correct. &lt;/li&gt;
&lt;li&gt;The end user will always find a way to use your program differently than intended. &lt;/li&gt;
&lt;li&gt;If you know more about a subject than everyone else in the room, you are the expert.&lt;/li&gt;
&lt;li&gt;You are your own best champion. No one knows your strengths and weaknesses better than you. Put yourself where you want to be.&lt;/li&gt;
&lt;li&gt;Learn something new every day; the results are compounding.&lt;/li&gt;
&lt;li&gt;Break complex systems into small single focus pieces. Systems are easier to understand, manipulate, and iterate as small pieces.&lt;/li&gt;
&lt;li&gt;Ignore the imposter syndrome feelings, no one knows everything about everything.&lt;/li&gt;
&lt;li&gt;Everything is an abstraction. Understanding the low level means you automatically understand half of the higher level abstractions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--flpbAPFW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.davidjeddy.com/wp-content/uploads/Chad_Hagan_Art_and_Design/Broad-Institute/Assemblage_crop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--flpbAPFW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.davidjeddy.com/wp-content/uploads/Chad_Hagan_Art_and_Design/Broad-Institute/Assemblage_crop.png" alt="" width="730" height="945"&gt;&lt;/a&gt;So many things, so complicated.&lt;/p&gt;

&lt;p&gt;So there you go. Successes, failures, hard won battles, and easily lost fights. Distilled down to 10 learned lessons. So how about you, any lessons you have learned in the past decade you can share?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://unsplash.com/photos/5NM32v14n6M" rel="noopener noreferrer"&gt;Photo by Matthew Fournier on Unsplash.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>career</category>
      <category>learning</category>
      <category>reflection</category>
    </item>
    <item>
      <title>FIX: Terraform + AWS: InvalidVPCNetworkStateFault</title>
      <dc:creator>David J Eddy</dc:creator>
      <pubDate>Sun, 02 Jun 2019 15:30:00 +0000</pubDate>
      <link>https://dev.to/david_j_eddy/fix-terraform-aws-invalidvpcnetworkstatefault-4c4f</link>
      <guid>https://dev.to/david_j_eddy/fix-terraform-aws-invalidvpcnetworkstatefault-4c4f</guid>
      <description>&lt;p&gt;While working with Terraform and AWS recently I ran into an error that did not seem to have much information about it. After about a day of research and troubleshooting I was able to solve it. &lt;/p&gt;

&lt;h2&gt;The Error&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;Error: Error applying plan:

1 error(s) occurred:

* module.web_app.aws_db_instance.rds: 1 error(s) occurred:

* aws_db_instance.rds: Error creating DB Instance: InvalidVPCNetworkStateFault: Cannot create a db.t2.micro database instance because no subnets exist in availability zones with sufficient capacity for VPC and storage type : gp2 for db.t2.micro. Please first create at least one new subnet; choose from these availability zones: us-west-1c, us-west-1b.

    status code: 400, request id: ea5f04be-8510-4cfc-9bb2-606c0e00d007
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The key takeaways here are RDS, subnets, and availability zones. So I checked the VPC AZs, the subnets assigned to them, CIDR ranges, etc. At one point I even compared the VPC configuration to a working zone. From what I could tell no differences existed.&lt;/p&gt;

&lt;h2&gt;The Cause&lt;/h2&gt;

&lt;p&gt;After some digging around I noticed the default VPC's subnets had been deleted. This caused the VPC and associated AZ subnets to be invalid in the default DB security group. The only way to recreate default subnets in a region is via the CLI; there is no web console ability for this action.&lt;/p&gt;
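
&lt;p&gt;Recreating the default subnets from the CLI looks something like the following (a sketch; it assumes the AWS CLI is configured for the affected account, and the availability zones are whichever ones the error message listed):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Recreate a default subnet in each AZ named by the error message
aws ec2 create-default-subnet --availability-zone us-west-1b
aws ec2 create-default-subnet --availability-zone us-west-1c
&lt;/code&gt;&lt;/pre&gt;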

&lt;h2&gt;The Fix&lt;/h2&gt;

&lt;p&gt;The fix was to go into the RDS subnet group configuration (&lt;a href="https://us-west-1.console.aws.amazon.com/rds/home?region=us-west-1#db-subnet-groups:" rel="noopener noreferrer"&gt;https://us-west-1.console.aws.amazon.com/rds/home?region=us-west-1#db-subnet-groups&lt;/a&gt;) and re-assign the two new default subnets to the RDS group. After that, Terraform 'plan' and 'apply' returned to working as expected.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>iac</category>
    </item>
  </channel>
</rss>
