<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ben Vilnis</title>
    <description>The latest articles on DEV Community by Ben Vilnis (@bennysbanter).</description>
    <link>https://dev.to/bennysbanter</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F138377%2F014648d9-be03-4164-8633-5fe143ac1c23.png</url>
      <title>DEV Community: Ben Vilnis</title>
      <link>https://dev.to/bennysbanter</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bennysbanter"/>
    <language>en</language>
    <item>
      <title>Composable Infrastructure</title>
      <dc:creator>Ben Vilnis</dc:creator>
      <pubDate>Mon, 15 Jul 2019 07:00:00 +0000</pubDate>
      <link>https://dev.to/bennysbanter/composable-infrastructure-49lb</link>
      <guid>https://dev.to/bennysbanter/composable-infrastructure-49lb</guid>
      <description>&lt;h2&gt;
  
  
  PART I: Introduction
&lt;/h2&gt;

&lt;p&gt;So you want to launch an app? You log onto this thing called &lt;a href="https://aws.amazon.com"&gt;AWS&lt;/a&gt; and are greeted with an endless wall of features, resources, services and documentation.&lt;/p&gt;

&lt;p&gt;After putting your eyes back in your head, you start reading and work out how to create an EC2 instance. You then work out that you'll probably want to create an autoscaling group, then spread that across multiple availability zones, put a load balancer in front of it, provision a database or two, create some S3 buckets and define a bunch of IAM policies to facilitate all these transactions safely.&lt;/p&gt;

&lt;p&gt;You are looking good! Oh, wait, everything is accessible on the internet because you've done this in the default VPC!&lt;/p&gt;

&lt;p&gt;Cool, so now you need to move it all to private networks. That'll include more documentation and reading on network routing, CIDR blocks, DNS, TLS, NAT gateways, the list goes on. After all that is set up, you'll probably want to replicate everything into multiple regions and finally get some monitoring and observability hooked up.&lt;/p&gt;

&lt;p&gt;Wow, all you wanted to do was launch an app!&lt;/p&gt;

&lt;p&gt;Doing all of the work above in the AWS console is normal when learning; I did it, we all did it. Pointing and clicking is excellent for learning, but not so great once you start provisioning production infrastructure that changes frequently, has multiple people working on it, and needs to be reproducible. This is where infrastructure-as-code made its name.&lt;/p&gt;

&lt;p&gt;While provisioning your infrastructure as code is a significant improvement, it still requires a considerable amount of manual work and maintenance for all that code. Then there's the reproducibility of the code; what happens when we want to deploy multiple environments or resources? I'm sure I don't need to explain to you how much of a pain in the arse the logistics of code repetition can be.&lt;/p&gt;

&lt;p&gt;Hold on, let's go back to the start: my business goal was to launch an app, not to wrangle with an endless pit of infrastructure.&lt;/p&gt;

&lt;p&gt;Surely there must be a better way?&lt;/p&gt;

&lt;p&gt;Yes, there is, and it is called "composable infrastructure".&lt;/p&gt;



&lt;h2&gt;
  
  
  PART II: Composable Infrastructure
&lt;/h2&gt;

&lt;p&gt;So what is composable infrastructure? Well, first, let's take a quick trip into the two main types of infrastructure most widely used today.&lt;/p&gt;

&lt;p&gt;We have Infrastructure-as-a-Service (IaaS). These are things like AWS and GCP. They provide you with all the knobs and switches for CPU, memory, disk, network, server, databases, and so on, but it is up to you to put it all together.&lt;/p&gt;

&lt;p&gt;Then there is Platform-as-a-Service (PaaS). These are things like Heroku and Docker Cloud. PaaS abstracts and hides the complexities of raw infrastructure and generally sits above IaaS. Now you have a simple API; here's how you deploy your app or database without needing the know-how to wire it all together under the hood.&lt;/p&gt;

&lt;p&gt;PaaS sounds excellent, right? Well, yes and no. By design, PaaS is hiding those wires under the hood and therefore has limitations. You have to work within the boundaries of the PaaS and thus are limited to the supported languages and protocols; you can't directly access the underlying resources nor monitor them, it becomes harder to debug problems, and harder to customise your architecture as your app scales.&lt;/p&gt;

&lt;p&gt;You'll see a trend with startups and new company products. They start with PaaS and take advantage of the simplicity it provides to get a product out the door, but as they grow, they start to hit the limitations mentioned above and tend to fall back to IaaS.&lt;/p&gt;

&lt;p&gt;What we need is the deployment simplicity that PaaS provides with the ability to get under the hood that IaaS provides.&lt;/p&gt;

&lt;p&gt;This is where composable infrastructure enters the picture. To explain this, let's use the analogy of a car manufacturing factory.&lt;/p&gt;

&lt;p&gt;When creating a new car model, I write a list of parts I need, go to the warehouse and retrieve those parts off the shelf, then feed them into my production line to be assembled.&lt;/p&gt;

&lt;p&gt;Many of these parts are already built and ready to be assembled. For instance, I don't want to have to build a new engine, transmission and radiator for each car. I want these parts ready-to-go. Doing so simplifies production, saves time, ensures consistency, and delivers the same quality across every car produced.&lt;/p&gt;

&lt;p&gt;This is the mindset behind composable infrastructure. Rather than having to manually code or provision resources every time we want to build a platform, we can have these underlying resources pre-built and ready to go. Now we merely make a list of what we need (an EC2 instance, S3 bucket and RDS database, for example), fetch them from our composable infrastructure library, and feed them into CI/CD to be built.&lt;/p&gt;

&lt;p&gt;Now we can create simple, consistent and reproducible platforms — the simplicity of PaaS, with the tweakability of IaaS.&lt;/p&gt;



&lt;h2&gt;
  
  
  PART III: What it looks like
&lt;/h2&gt;

&lt;p&gt;To demonstrate composable infrastructure, I've chosen to use &lt;a href="https://www.terraform.io"&gt;Terraform&lt;/a&gt; as it is an excellent infracoding tool and lends itself to this style of workflow.&lt;/p&gt;

&lt;p&gt;In Terraform, we can create what we call &lt;a href="https://www.terraform.io/docs/modules/index.html"&gt;modules&lt;/a&gt;. In its simplest form, a module is any piece of Terraform code we can call and execute from another location. Think of it as a blueprint. Let me show you what I mean.&lt;/p&gt;

&lt;p&gt;Let's say we have a directory with two sub-directories and a few blank Terraform files that look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/infracode
  /infracode/my_module/ec2.tf
  /infracode/my_deployments/my_instances.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;ec2.tf&lt;/code&gt; we write some Terraform to provision a simple EC2 instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "web" {
  ami = "ami-abcd1234"
  instance_type = "t2.micro"

  tags = {
    Name = "my-instance"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we have defined our Terraform provider and a resource (in this case, an EC2 instance) to create via the AWS provider. Cool, so if we were to run &lt;code&gt;terraform apply&lt;/code&gt; from within &lt;code&gt;/my_module&lt;/code&gt;, it would go off and build an instance with the name tag &lt;code&gt;"my-instance"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;However, what we actually want to do is reuse this code any time we need to create an instance. So let's use this existing code as a module.&lt;/p&gt;

&lt;p&gt;In our other directory &lt;code&gt;/my_deployments&lt;/code&gt; in the &lt;code&gt;my_instances.tf&lt;/code&gt; file, instead of rewriting the code, we do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "foo" {
  source = "../my_module"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we've told Terraform that we want to use a module, we give it a name and tell Terraform where it can find the source code we want to execute. Now if we run &lt;code&gt;terraform apply&lt;/code&gt; from within &lt;code&gt;/my_deployments&lt;/code&gt;, it reads the code from our source &lt;code&gt;../my_module&lt;/code&gt; and executes it. Pretty cool, right?&lt;/p&gt;
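&lt;p&gt;One practical note: before &lt;code&gt;terraform apply&lt;/code&gt; will do anything, Terraform needs to initialise the working directory so it can install the AWS provider and resolve the module source. Assuming Terraform is installed and your AWS credentials are configured, a typical run from &lt;code&gt;/my_deployments&lt;/code&gt; looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd infracode/my_deployments
terraform init    # installs the AWS provider and links ../my_module
terraform plan    # preview the changes before applying
terraform apply   # build the resources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;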

&lt;p&gt;So what if I want to create two instances? Easy; create a second module block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "foo" {
  source = "../my_module"
}

module "bar" {
  source = "../my_module"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if we run a &lt;code&gt;terraform apply&lt;/code&gt;, it identifies that there are two modules, &lt;code&gt;"foo"&lt;/code&gt; and &lt;code&gt;"bar"&lt;/code&gt;, and creates them.&lt;/p&gt;

&lt;p&gt;You may have noticed, however, that at this point we're creating two EC2 instances, both with the name tag &lt;code&gt;"my-instance"&lt;/code&gt;. We need to start customising the parameters of each instance, and to do so, we use variables. Let's go back to our base code &lt;code&gt;ec2.tf&lt;/code&gt; in the &lt;code&gt;/my_module&lt;/code&gt; directory and make some changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "web" {
  ami = "ami-abcd1234"
  instance_type = "t2.micro"

  tags = {
    Name = "${var.instance_name}"
  }

variable = instance_name {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we've declared an empty input variable called &lt;code&gt;instance_name&lt;/code&gt; and used it for the EC2 name tag. So if we move back to &lt;code&gt;my_instances.tf&lt;/code&gt;, in our module blocks we can now do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "foo" {
  source = "../my_module"

  instance_name = "my-instance-1"
}

module "bar" {
  source = "../my_module"

  instance_name = "my-instance-2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we have set &lt;code&gt;instance_name&lt;/code&gt; in our two module blocks, and their values are passed into the &lt;code&gt;instance_name&lt;/code&gt; variable in our module's &lt;code&gt;ec2.tf&lt;/code&gt;. Terraform will now create two instances, one with the name tag &lt;code&gt;"my-instance-1"&lt;/code&gt; and the other with &lt;code&gt;"my-instance-2"&lt;/code&gt;. We now have a reusable module to create EC2 instances any time we need one.&lt;/p&gt;
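&lt;p&gt;As a side note (this goes a step beyond the code above), modules can also expose values with &lt;code&gt;output&lt;/code&gt; blocks, which is how modules get wired together. As a minimal sketch, adding something like this to &lt;code&gt;ec2.tf&lt;/code&gt; in &lt;code&gt;/my_module&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Expose the created instance's public IP to whoever calls this module
output "public_ip" {
  value = "${aws_instance.web.public_ip}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;...would let us reference &lt;code&gt;module.foo.public_ip&lt;/code&gt; from &lt;code&gt;my_instances.tf&lt;/code&gt;, say, to feed one module's result into another module's variables.&lt;/p&gt;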

&lt;p&gt;While this is a simple example of some fundamental infrastructure, it demonstrates the foundations and concepts of using Terraform modules to create composable infrastructure!&lt;/p&gt;

&lt;p&gt;The best part is that as a community, we can collaborate and collectively benefit. With this outlook, we have fantastic resources like &lt;a href="https://registry.terraform.io"&gt;The Terraform Module Registry&lt;/a&gt;, where we can write and share modules for each other to use in our platform creations. There's no point in all of us writing the same thing when we can benefit from each other's work!&lt;/p&gt;

&lt;p&gt;Where the real power of this process comes into play is when we create many modules and link them together to create a full deployment. For instance, if we want to deploy HashiCorp's &lt;a href="https://www.vaultproject.io/"&gt;Vault&lt;/a&gt;, rather than build the relatively complex infrastructure to run it on, we can simply use existing composable infrastructure, like &lt;a href="https://registry.terraform.io/modules/hashicorp/vault/aws/0.13.2"&gt;this Vault module&lt;/a&gt;, to build it for us. Now we truly have the deployment simplicity of PaaS, with the ability to get under the hood into the source code on the IaaS layer.&lt;/p&gt;



&lt;h2&gt;
  
  
  PART IV: Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this short journey into composable infrastructure has shown you the benefits of treating infrastructure like a lean manufacturing production line, and that you can get the best of both worlds of IaaS and PaaS. The key takeaway here is to work smart, not hard. The mindset and process of getting work done are more valuable than the tools you choose to use. So be smart!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>The Phoenix Project Pt1</title>
      <dc:creator>Ben Vilnis</dc:creator>
      <pubDate>Wed, 07 Nov 2018 21:37:44 +0000</pubDate>
      <link>https://dev.to/bennysbanter/the-phoenix-project-pt1-3n4p</link>
      <guid>https://dev.to/bennysbanter/the-phoenix-project-pt1-3n4p</guid>
      <description>&lt;p&gt;Recently I was recommended by a fellow DevOps friend (and mentor) to read The Phoenix Project by Gene Kim, Kevin Behr and George Spafford, as part of my learning curriculum on the journey to becoming an engineer in the DevOps/SRE space.&lt;/p&gt;

&lt;p&gt;Let me roll out the spoiler bandwagon right now. This book is an absolute must-read for anyone in the IT industry, regardless of your department or speciality. Now, moving on.&lt;/p&gt;

&lt;p&gt;If you haven’t read it before, The Phoenix Project is about the journey of an ops technician named Bill Palmer. Bill is thrown in the deep end and forced to take over the whole IT department of a manufacturing company that is on the verge of bankruptcy and being outsourced, the result of software that has failed to compete with its competitors' offerings. What Bill finds is an internal war between development and operations teams that leaves the company with delayed and broken releases (we’ve all been there before). This clusterfuck begins a quest to solve the infamous problem that much of the IT industry has struggled with for decades. Over the next several months, and with mentoring from some great characters, Bill learns the principles of DevOps and completely transforms the company into an automated and smart-working machine.&lt;/p&gt;

&lt;p&gt;What practical lessons did I learn from The Phoenix Project?&lt;/p&gt;

&lt;h2&gt;
  
  
  IT work is EXACTLY like a manufacturing plant floor:
&lt;/h2&gt;

&lt;p&gt;On a plant floor, materials come in on the left and leave as finished products on the right. These materials pass through a series of work centres as they are assembled. We call this work-in-progress or “WIP.”&lt;/p&gt;

&lt;p&gt;Specific work centres can sometimes become constraints; that is, the task at that work centre takes longer to complete than the flow of WIP leading into it. In the IT world, we call this technical debt: new requests, problems and bugs coming in before existing issues have been addressed. Eventually, this leads to an unfathomable amount of technical debt. To address this, one must control the flow of WIP to the constraint so as not to begin accumulating technical debt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Visualisation gives overview:
&lt;/h2&gt;

&lt;p&gt;Being able to visualise WIP is essential to maintain its flow. Kanban boards are an excellent tool for this.&lt;/p&gt;

&lt;p&gt;A basic kanban consists of three columns: “to do,” “doing,” and “done.” These columns contain task cards. All outstanding and required task cards reside in the “to do” column. These cards are then prioritised and moved into the “doing” column in small numbers so as not to create a constraint. Only once they’re completed can they be moved to the “done” column.&lt;/p&gt;

&lt;p&gt;New cards should never be introduced into the “doing” column until all of its existing cards have been completed. This process ensures a constant flow of work and makes sure only priority tasks are under the microscope at any given time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ten deploys per day:
&lt;/h2&gt;

&lt;p&gt;The key to DevOps’ success is continual improvement. While developers have become agile and quick to respond to new requests over the last 15 years, operations fell behind. Devs wanted their code released quickly and regularly, while ops didn’t want to make any changes to the servers as not to break them. Devs wanted speed, ops wanted stability. In order to counter this conundrum, ops needed to become agile. We achieve this through several tools and concepts.&lt;/p&gt;

&lt;p&gt;The primary concept is called infrastructure as code or “infracoding.” In the world of cloud computing, we can write scripts or code (like a programming language) to spin up and automate cloud infrastructure as it is needed. This eliminates the need for ops to manually provision and monitor infrastructure.&lt;/p&gt;
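&lt;p&gt;To make that concrete, here is a minimal sketch of what infracoding can look like with a tool like Terraform (the region and AMI ID below are placeholders): a few declarative lines replace clicking through a console to create a server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-west-2"
}

# Declare a server; the tooling works out how to create it
resource "aws_instance" "app" {
  ami = "ami-abcd1234"
  instance_type = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;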

&lt;p&gt;Our second go-to tool set is continuous testing and continuous integration. With the use of infracoding, we can create dev and QA environments that exactly clone our production environment. We then use tools to create tests and conditions that new code must pass in order to move to the next stage. Having this whole process automated allows an incredible amount of speed when applying new code and gives us the ability to roll back to previous versions if problems arise.&lt;/p&gt;

&lt;p&gt;This, for the most part, eliminates downtime and visible problems for customers. While there are many more parts to the DevOps workflow, these key components are the driving force that allows ops to become agile and achieve speed AND stability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing comments:
&lt;/h2&gt;

&lt;p&gt;These mentioned aspects of DevOps cover the practical day-to-day applications to achieve the DevOps workflow. In a future post, I plan to go over “the three ways” of DevOps, which delves into the philosophical side of the DevOps movement and how these practices and tools fit into the larger company picture.&lt;/p&gt;

&lt;p&gt;For now, the lessons above from The Phoenix Project are ones you can apply to your workflow today.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>motivation</category>
      <category>leadership</category>
    </item>
  </channel>
</rss>
