<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nathan Mclean</title>
    <description>The latest articles on DEV Community by Nathan Mclean (@nathmclean).</description>
    <link>https://dev.to/nathmclean</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F87881%2F6e63530e-dfab-45c4-a0d6-18c9ec66f874.png</url>
      <title>DEV Community: Nathan Mclean</title>
      <link>https://dev.to/nathmclean</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nathmclean"/>
    <language>en</language>
    <item>
      <title>What are you using Serverless for?</title>
      <dc:creator>Nathan Mclean</dc:creator>
      <pubDate>Mon, 04 Mar 2019 13:56:50 +0000</pubDate>
      <link>https://dev.to/nathmclean/what-are-you-using-serverless-for-27d2</link>
      <guid>https://dev.to/nathmclean/what-are-you-using-serverless-for-27d2</guid>
      <description>&lt;p&gt;My colleague recently wrote a &lt;a href="https://link.medium.com/Prusx1EqGU"&gt;blog post&lt;/a&gt; on how we use Serverless as a DevOps team, which made me wonder: what are other people using it for?&lt;/p&gt;

&lt;p&gt;Are you using it, like we are, to replace loads of cronjobs running on an instance and to listen for events emitted by AWS? Or are you doing something different? Maybe some or all of your application is running on Serverless. I’m interested to find out...&lt;/p&gt;

</description>
      <category>devops</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Creating A Terraform Provider - Part 1</title>
      <dc:creator>Nathan Mclean</dc:creator>
      <pubDate>Tue, 20 Nov 2018 10:45:06 +0000</pubDate>
      <link>https://dev.to/nathmclean/creating-a-terraform-provider---part-1-3i28</link>
      <guid>https://dev.to/nathmclean/creating-a-terraform-provider---part-1-3i28</guid>
      <description>&lt;p&gt;Cross posting from &lt;a href="https://medium.com/spaceapetech/creating-a-terraform-provider-part-1-ed12884e06d7"&gt;Medium&lt;/a&gt;, this is a post I wrote for my employer's blog, &lt;a href="https://medium.com/spaceapetech/"&gt;Space Ape Tech&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A fairly common task in Ops is figuring out how to manage a service programmatically. It’s often easy to get started with a service through its UI, but this won’t scale, and eventually we’ll want more control over the service’s configuration; a Terraform provider can be the tool that gives you that control.&lt;/p&gt;

&lt;p&gt;To create a Terraform provider you just need to write the logic for managing the Creation, Reading, Updating and Deletion (CRUD) of a resource, and Terraform will take care of the rest: state, locking, the templating language and managing the lifecycle of the resources.&lt;/p&gt;

&lt;p&gt;Just taking a look at the list of &lt;a href="https://www.terraform.io/docs/providers/index.html"&gt;existing providers&lt;/a&gt; shows you how versatile Terraform can be. The list includes cloud providers, Kubernetes, DNS services, GitHub, monitoring tools and TLS certificate providers among others.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://spaceapegames.com"&gt;Space Ape&lt;/a&gt; we decided that we wanted to start managing the configuration of our metrics service, &lt;a href="http://wavefront.com"&gt;Wavefront&lt;/a&gt;, programmatically. We were all set to write a new tool that would allow us to template Alerts and Dashboards in YAML and then create them in Wavefront via their API. We had already started writing a &lt;a href="https://github.com/spaceapegames/go-wavefront"&gt;Go client&lt;/a&gt; for the Wavefront API when Hashicorp announced the &lt;a href="https://github.com/hashicorp/terraform/blob/v0.10.0/CHANGELOG.md"&gt;release of Terraform v0.10.0&lt;/a&gt;, which split Providers out of Terraform core, allowing each provider to be developed and released independently of Terraform and for custom providers to be created.&lt;/p&gt;

&lt;p&gt;So we set about building the &lt;a href="https://github.com/spaceapegames/terraform-provider-wavefront"&gt;Wavefront Terraform Provider&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;How to Build Your Own Provider&lt;/h3&gt;

&lt;p&gt;To avoid having to work against a real API for this blog post, so that you can follow along if you wish, I’ve created a small API that stores (imaginatively) “items”. The API allows you to create, read, update and delete items. An item has a name, description and a list of tags. An item’s name must be unique. I won’t go into much more detail about the workings of the API; you can find a more detailed description &lt;a href="https://github.com/spaceapegames/terraform-provider-example/blob/master/Readme.md"&gt;here&lt;/a&gt; and the code in &lt;a href="https://github.com/spaceapegames/terraform-provider-example/tree/master/api"&gt;terraform-provider-items/api&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The client that the Terraform provider will use to interact with the API shouldn’t be implemented within the provider itself, as that would overcomplicate the provider and mean that the client can’t be used for other purposes. So I’ve also written a client in the &lt;a href="https://github.com/spaceapegames/terraform-provider-example/tree/master/api/client"&gt;API package&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The full source code of the example provider and API is available on &lt;a href="https://github.com/spaceapegames/terraform-provider-example"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;Getting Started&lt;/h4&gt;

&lt;p&gt;The provider we’re going to build during this blog post will allow us to create an Item on our server using the following Terraform code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OAiu-J67--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AyFSDdnXt1mBwq1qC1D5RdA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OAiu-J67--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AyFSDdnXt1mBwq1qC1D5RdA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
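
&lt;p&gt;The configuration shown in the image above looks roughly like this (a sketch; the address, port and token values are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "example" {
  address = "http://localhost"
  port    = "3001"
  token   = "superSecretToken"
}

resource "example_item" "test_item" {
  name        = "this_is_an_item"
  description = "a test item"
  tags = [
    "hello",
    "world",
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;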

&lt;p&gt;Terraform Plugins are binaries that Terraform communicates with via RPC. It’s possible to write a provider in any language, but in reality, you’ll want to write it in Go; Terraform provides helper libraries in Go to aid in writing and testing providers.&lt;/p&gt;

&lt;p&gt;The name of the repository (and therefore directory) that your provider lives in is important; all providers start with &lt;code&gt;terraform-provider-&lt;/code&gt;; anything after that is the name of the provider. In this case, I’m going for the very imaginative &lt;code&gt;terraform-provider-example&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, we’ll create &lt;code&gt;./main.go&lt;/code&gt; which will serve as the entry point to our provider. &lt;code&gt;main.go&lt;/code&gt; is just used to invoke our provider, which we will implement in a separate package, in this case, called &lt;code&gt;provider&lt;/code&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
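&lt;p&gt;A minimal sketch of &lt;code&gt;main.go&lt;/code&gt;, assuming the Terraform 0.11-era plugin SDK:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// main.go - a sketch against the 0.11-era plugin SDK
package main

import (
   "github.com/hashicorp/terraform/plugin"
   "github.com/hashicorp/terraform/terraform"

   "github.com/spaceapegames/terraform-provider-example/provider"
)

func main() {
   // Serve the provider over RPC so Terraform can talk to it.
   plugin.Serve(&amp;amp;plugin.ServeOpts{
      ProviderFunc: func() terraform.ResourceProvider {
         return provider.Provider()
      },
   })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;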


&lt;p&gt;Now, we’ll create the provider package, within which will sit the implementation of our provider. Within the &lt;code&gt;provider&lt;/code&gt; package, we’ll create &lt;code&gt;provider.go&lt;/code&gt; and define the &lt;code&gt;Provider&lt;/code&gt; function that our &lt;code&gt;main.go&lt;/code&gt; calls.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
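&lt;p&gt;A sketch of &lt;code&gt;provider.go&lt;/code&gt;, assuming a &lt;code&gt;client.NewClient(address, port, token)&lt;/code&gt; constructor; the environment variable names are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/provider.go - a sketch; env var names are illustrative
package provider

import (
   "github.com/hashicorp/terraform/helper/schema"

   "github.com/spaceapegames/terraform-provider-example/api/client"
)

func Provider() *schema.Provider {
   return &amp;amp;schema.Provider{
      Schema: map[string]*schema.Schema{
         "address": {
            Type:     schema.TypeString,
            Required: true,
            // Fall back to an environment variable when unset,
            // so secrets can stay out of Terraform files.
            DefaultFunc: schema.EnvDefaultFunc("EXAMPLE_ADDRESS", nil),
         },
         "port": {
            Type:        schema.TypeString,
            Required:    true,
            DefaultFunc: schema.EnvDefaultFunc("EXAMPLE_PORT", nil),
         },
         "token": {
            Type:        schema.TypeString,
            Required:    true,
            DefaultFunc: schema.EnvDefaultFunc("EXAMPLE_TOKEN", nil),
         },
      },
      ResourcesMap: map[string]*schema.Resource{
         "example_item": resourceItem(),
      },
      ConfigureFunc: providerConfigure,
   }
}

func providerConfigure(d *schema.ResourceData) (interface{}, error) {
   address := d.Get("address").(string)
   port := d.Get("port").(string)
   token := d.Get("token").(string)
   // Returned as interface{}; the resource functions assert it back.
   return client.NewClient(address, port, token), nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;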


&lt;p&gt;The Provider requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;Schema&lt;/code&gt;, which represents the various attributes we can provide to our provider via the provider block of a Terraform file. Note that if no value is provided we check whether environment variables are set. This is useful for making sure we don’t need to store secrets in the provider block of Terraform files.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;ResourcesMap&lt;/code&gt;, which defines the names of the resources the provider has and where to find the definitions of those resources. In this case, you can see we have an &lt;code&gt;example_item&lt;/code&gt; resource, the definition of which is a &lt;code&gt;*schema.Resource&lt;/code&gt; returned by the &lt;code&gt;resourceItem()&lt;/code&gt; function, which we’ll define later.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;ConfigureFunc&lt;/code&gt;, which can do any setup for us. In this case, we have &lt;code&gt;providerConfigure&lt;/code&gt;, which takes the &lt;code&gt;address&lt;/code&gt;, &lt;code&gt;port&lt;/code&gt; and &lt;code&gt;token&lt;/code&gt; and returns a client that we’ll use to communicate with the API. Note that &lt;code&gt;providerConfigure&lt;/code&gt; returns an &lt;code&gt;interface{}&lt;/code&gt;, so we can store anything we like here.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Defining Our Item Resource&lt;/h4&gt;

&lt;p&gt;Before we start defining our resource there are some more naming conventions to cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource files are named &lt;code&gt;resource_[resourceName].go&lt;/code&gt;, eg &lt;code&gt;resource_item.go&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test files follow the usual Go naming convention: &lt;code&gt;resource_[resourceName]_test.go&lt;/code&gt;, eg &lt;code&gt;resource_item_test.go&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;We can also write import tests which, by convention, live in files named &lt;code&gt;import_[resourceName]_test.go&lt;/code&gt;, eg &lt;code&gt;import_item_test.go&lt;/code&gt;, although these tests could also sit within the &lt;code&gt;resource_[resourceName]_test.go&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Lastly, there are also data source definitions, which we won’t be implementing here, but you may see in other providers. These are named &lt;code&gt;data_source_[resourceName].go&lt;/code&gt;. Data Sources allow you to pull in information from resources that already exist, but that you don’t want to manage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We start by creating &lt;code&gt;resource_item.go&lt;/code&gt; and defining the &lt;code&gt;resourceItem()&lt;/code&gt; function that we call from &lt;code&gt;provider.go&lt;/code&gt;. This returns a &lt;code&gt;*schema.Resource&lt;/code&gt; — the definition of our Item resource.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
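&lt;p&gt;A sketch of &lt;code&gt;resourceItem()&lt;/code&gt;, using the function names referenced later in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/resource_item.go - a sketch of the resource definition
func resourceItem() *schema.Resource {
   return &amp;amp;schema.Resource{
      Create: resourceCreateItem,
      Read:   resourceReadItem,
      Update: resourceUpdateItem,
      Delete: resourceDeleteItem,
      Exists: resourceExistsItem,
      Importer: &amp;amp;schema.ResourceImporter{
         State: schema.ImportStatePassthrough,
      },
      Schema: map[string]*schema.Schema{
         "name": {
            Type:        schema.TypeString,
            Required:    true,
            Description: "The name of the item",
            // The API can't rename an item, so a name change means
            // Terraform must destroy and recreate the resource.
            ForceNew:     true,
            ValidateFunc: validateName,
         },
         "description": {
            Type:     schema.TypeString,
            Required: true,
         },
         "tags": {
            Type:     schema.TypeSet,
            Optional: true,
            Elem:     &amp;amp;schema.Schema{Type: schema.TypeString},
         },
      },
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;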


&lt;p&gt;A &lt;code&gt;schema.Resource&lt;/code&gt; needs us to set up a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;Schema&lt;/code&gt; — the attributes the Resource has&lt;/li&gt;
&lt;li&gt;A number of functions. Earlier I said you just need to set up the Create, Read, Update and Delete functions for the provider; there are also two more, Exists and Import, which I’ll cover later on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s look at the &lt;code&gt;Schema&lt;/code&gt; in more detail. The Schema element is a &lt;code&gt;map[string]*schema.Schema&lt;/code&gt;, where the string is the attribute name and the &lt;code&gt;*schema.Schema&lt;/code&gt; defines what the attribute is. In this case, the attribute &lt;code&gt;name&lt;/code&gt; is of type &lt;code&gt;TypeString&lt;/code&gt;, it is required and it has a short description. It also has &lt;code&gt;ForceNew&lt;/code&gt; set to true because the API doesn’t allow you to change the name of an item after it is created; Terraform would have to destroy the resource and create it again for the name to change.&lt;/p&gt;

&lt;p&gt;Lastly, there is a &lt;code&gt;ValidateFunc&lt;/code&gt;, which is a function with the signature &lt;code&gt;(v interface{}, k string) (ws []string, es []error)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here &lt;code&gt;v&lt;/code&gt; is the value of the attribute, on which you need to use a type assertion to retrieve the actual type, and &lt;code&gt;k&lt;/code&gt; is the name of the attribute (“name” in this case); the function returns a slice of warnings (strings) and a slice of errors.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;ValidateFunc&lt;/code&gt; used for the &lt;code&gt;name&lt;/code&gt; attribute is the &lt;code&gt;validateName&lt;/code&gt; function:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
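&lt;p&gt;A sketch of the validation function (uses &lt;code&gt;fmt&lt;/code&gt; and &lt;code&gt;strings&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/resource_item.go - a sketch of validateName
func validateName(v interface{}, k string) (ws []string, es []error) {
   value, ok := v.(string)
   if !ok {
      es = append(es, fmt.Errorf("expected %s to be a string", k))
      return
   }
   // The API doesn't allow whitespace in names, so fail early.
   if strings.ContainsAny(value, " \t\n") {
      es = append(es, fmt.Errorf("%s cannot contain whitespace: %q", k, value))
   }
   return
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;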


&lt;p&gt;It returns an error if the name contains any whitespace (the API doesn’t allow whitespace in names). This prevents us from making API calls we know will fail.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;description&lt;/code&gt; attribute is similar to name but doesn’t force a new resource and doesn’t have a validation func.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;tags&lt;/code&gt; attribute has a few differences. It is of type &lt;code&gt;TypeSet&lt;/code&gt;, which is similar to a list but where order doesn’t matter. You can use &lt;code&gt;TypeList&lt;/code&gt; when you can be sure that the API will always return the list in the same order; otherwise you’ll always have changes to apply. You’ll also notice that tags has an &lt;code&gt;Elem&lt;/code&gt; field, which lets us define the type stored in the Set (or List), in this case a string.&lt;/p&gt;

&lt;p&gt;In our case, the API purposely returns tags in a random order. You can try changing the TypeSet to TypeList to see that there will (nearly) always be changes to apply due to the reordering of the tags.&lt;/p&gt;

&lt;p&gt;It’s also worth noting that when you have a &lt;code&gt;TypeSet&lt;/code&gt; or &lt;code&gt;TypeList&lt;/code&gt; the &lt;code&gt;Elem&lt;/code&gt; field can be of type &lt;code&gt;&amp;amp;schema.Resource&lt;/code&gt;, which allows for a deeper resource structure. For instance, this is how the listener blocks of an &lt;a href="https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_elb.go#L159"&gt;aws_elb resource are defined&lt;/a&gt;.&lt;/p&gt;
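
&lt;p&gt;For illustration, a nested block declared via &lt;code&gt;Elem&lt;/code&gt; might look like this (the &lt;code&gt;rule&lt;/code&gt; block and its fields are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A hypothetical nested block: Elem is a *schema.Resource rather
// than a *schema.Schema, so the block gets its own sub-schema.
"rule": {
   Type:     schema.TypeSet,
   Optional: true,
   Elem: &amp;amp;schema.Resource{
      Schema: map[string]*schema.Schema{
         "port":     {Type: schema.TypeInt, Required: true},
         "protocol": {Type: schema.TypeString, Required: true},
      },
   },
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;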

&lt;p&gt;Also notice that tags are not a required attribute, but rather than just setting &lt;code&gt;Required&lt;/code&gt; to false (its default value), we set &lt;code&gt;Optional&lt;/code&gt; to true. Either &lt;code&gt;Required&lt;/code&gt; or &lt;code&gt;Optional&lt;/code&gt; must be true.&lt;/p&gt;

&lt;h4&gt;Functions&lt;/h4&gt;

&lt;p&gt;The Create, Read, Update and Delete functions have the signature &lt;code&gt;func(d *schema.ResourceData, m interface{}) error&lt;/code&gt;, where &lt;code&gt;d&lt;/code&gt; is essentially the Schema we defined above, but with values added; e.g. the “name” attribute has a value we can send to our API.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;m&lt;/code&gt; is the &lt;code&gt;interface{}&lt;/code&gt; returned from the &lt;code&gt;ConfigureFunc&lt;/code&gt; in &lt;code&gt;provider.go&lt;/code&gt;, which in our case is our Client for talking to the server.&lt;/p&gt;

&lt;p&gt;Each of the CRUD functions essentially just calls its corresponding method on the client, but there are a few things to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you create a resource you need to set the ID of the Terraform resource, which is done using the &lt;code&gt;d.SetId&lt;/code&gt; method. The ID is generally what the API uses to uniquely identify an item; in our case the name is the ID, so we use that — &lt;code&gt;d.SetId(item.Name)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Setting the ID to an empty string indicates to Terraform that the item no longer exists. So in the &lt;code&gt;resourceDeleteItem()&lt;/code&gt; function we call &lt;code&gt;d.SetId("")&lt;/code&gt; after deleting the Item. We also do this in the &lt;code&gt;resourceReadItem()&lt;/code&gt; function if we get an error and that error contains “not found”&lt;/li&gt;
&lt;li&gt;For each of these methods we obtain the Client from &lt;code&gt;m&lt;/code&gt;; as it is an interface we have to assert it back to the client type — &lt;code&gt;apiClient := m.(*client.Client)&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As the underlying data structure for &lt;code&gt;d *schema.ResourceData&lt;/code&gt; ends up being an interface rather than a concrete type, we end up doing a lot of type assertion. On a small resource like our Item it’s easy enough to do within each function, but for larger resources it may be worth splitting these out into separate functions.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;resourceCreateItem&lt;/code&gt; function, we can see that we use &lt;code&gt;d.Get&lt;/code&gt; to retrieve the values from the resource that we wish to pass to the API via the client and that we must perform type assertion on the result &lt;code&gt;d.Get("name").(string)&lt;/code&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
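&lt;p&gt;A sketch of the create and read functions, assuming the client exposes &lt;code&gt;NewItem&lt;/code&gt; and &lt;code&gt;GetItem&lt;/code&gt; methods and a &lt;code&gt;server.Item&lt;/code&gt; type (names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/resource_item.go - a sketch of the create and read functions
func resourceCreateItem(d *schema.ResourceData, m interface{}) error {
   apiClient := m.(*client.Client)

   tfTags := d.Get("tags").(*schema.Set)
   tags := make([]string, 0, tfTags.Len())
   for _, tag := range tfTags.List() {
      tags = append(tags, tag.(string))
   }

   item := server.Item{
      Name:        d.Get("name").(string),
      Description: d.Get("description").(string),
      Tags:        tags,
   }
   if err := apiClient.NewItem(&amp;amp;item); err != nil {
      return err
   }
   // The name uniquely identifies an item, so it becomes the ID.
   d.SetId(item.Name)
   return nil
}

func resourceReadItem(d *schema.ResourceData, m interface{}) error {
   apiClient := m.(*client.Client)

   item, err := apiClient.GetItem(d.Id())
   if err != nil {
      if strings.Contains(err.Error(), "not found") {
         // Tell Terraform the item no longer exists.
         d.SetId("")
         return nil
      }
      return err
   }
   d.Set("name", item.Name)
   d.Set("description", item.Description)
   d.Set("tags", item.Tags)
   return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;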


&lt;p&gt;The Exists function (&lt;code&gt;resourceExistsItem()&lt;/code&gt;) is slightly different in that it doesn't modify Terraform state, nor does it update any resource on the server; it’s just used to check if a resource exists. It has a slightly different function signature than the CRUD functions in that it also returns a bool to indicate if the resource exists — &lt;code&gt;resourceExistsItem(d *schema.ResourceData, m interface{}) (bool, error)&lt;/code&gt;&lt;/p&gt;
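
&lt;p&gt;A sketch, following the same “not found” error-handling assumption as the read function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/resource_item.go - a sketch of the exists function
func resourceExistsItem(d *schema.ResourceData, m interface{}) (bool, error) {
   apiClient := m.(*client.Client)

   if _, err := apiClient.GetItem(d.Id()); err != nil {
      // Assumes the client surfaces a "not found" error for
      // missing items, as in the read function above.
      if strings.Contains(err.Error(), "not found") {
         return false, nil
      }
      return false, err
   }
   return true, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;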

&lt;p&gt;Finally, there is an import function, used for importing a resource into Terraform; an import function is an implementation of &lt;code&gt;&amp;amp;schema.ResourceImporter&lt;/code&gt;. Terraform provides an implementation of this called &lt;code&gt;schema.ImportStatePassthrough&lt;/code&gt;, which seems to work for the majority of use cases, and you can always write your own implementation if you need to.&lt;/p&gt;

&lt;h4&gt;End of Part 1&lt;/h4&gt;

&lt;p&gt;At this point we have a functional Terraform provider; you could compile it and start creating &lt;em&gt;Items&lt;/em&gt; with Terraform. However, there is an important piece missing — &lt;strong&gt;tests!&lt;/strong&gt; In part 2 we’ll add tests to our Provider and run through how we get Terraform to use the provider.&lt;/p&gt;




</description>
      <category>terraform</category>
      <category>hashicorp</category>
      <category>devops</category>
      <category>go</category>
    </item>
    <item>
      <title>A Journey to Better Deployments</title>
      <dc:creator>Nathan Mclean</dc:creator>
      <pubDate>Mon, 08 Oct 2018 09:58:52 +0000</pubDate>
      <link>https://dev.to/nathmclean/a-journey-to-better-deployments-31e3</link>
      <guid>https://dev.to/nathmclean/a-journey-to-better-deployments-31e3</guid>
      <description>&lt;p&gt;Cross posting from &lt;a href="https://medium.com/spaceapetech/a-journey-to-better-deployments-a255eb69bbf2" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, this is a post I wrote for my employer's blog, &lt;a href="https://medium.com/spaceapetech/" rel="noopener noreferrer"&gt;Space Ape Tech&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AOuFo5e3J3oy_PATxAnWTHw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AOuFo5e3J3oy_PATxAnWTHw.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the past year, we’ve been working to improve our deployment process. Over that time we’ve built custom tooling and started using new (and said goodbye to old) tools and practices. This post will try to give an overview of why we wanted a new deployment process, how we went about implementing it and what we have now.&lt;/p&gt;

&lt;h3&gt;The Old System&lt;/h3&gt;

&lt;h4&gt;Infrastructure&lt;/h4&gt;

&lt;p&gt;Our infrastructure was built using CloudFormation. CloudFormation templates were generated with &lt;a href="https://github.com/cfndsl/cfndsl" rel="noopener noreferrer"&gt;CFNDSL&lt;/a&gt;, which in turn was wrapped in an in-house Ruby Gem that made it a little easier to work with.&lt;/p&gt;

&lt;p&gt;A CloudFormation stack generally produced an Auto Scaling Group, fronted by a Classic Load Balancer, with associated Security Groups and IAM roles and profiles.&lt;/p&gt;

&lt;p&gt;CloudFormation was also used to set up other parts of our AWS infrastructure, such as VPC networking, CloudFront distributions and S3 Buckets.&lt;/p&gt;

&lt;p&gt;All of our instances were managed by Chef. Chef ran every 15 minutes to configure our systems as we specified; it controlled every aspect of the machine. That meant managing the configuration of the OS, installing system packages and ensuring our instances were kept up to date and patched. It was also responsible for installing and configuring our applications.&lt;/p&gt;

&lt;h4&gt;Deployments&lt;/h4&gt;

&lt;p&gt;All deployments were orchestrated by Chef. When a deployment happened it was controlled by us and happened in one of three ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Environments could be set up to take the latest available package on each Chef run. We used this to deploy the latest versions of our code when a Jenkins build completed.&lt;/li&gt;
&lt;li&gt;Deployments to other development environments were made using either a Web UI or a CLI (a Knife plugin), both effectively using the same Ruby code underneath, which told Chef to deploy a specific version of code and configuration.&lt;/li&gt;
&lt;li&gt;Production deploys were driven from the command line on developers’ machines using a Knife plugin. This allowed a higher degree of control over the deployment. For instance, a deploy typically consisted of a canary node being deployed, with a developer checking the logs and metrics before deciding whether or not to continue the deploy. If the deployment was continued then a rolling deploy to the remaining instances was started; otherwise, the canary was rolled back.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Why Develop a New Deployment Process?&lt;/h3&gt;

&lt;h4&gt;Faster Deployments&lt;/h4&gt;

&lt;p&gt;Each deploy required a full Chef run, meaning we were checking all other parts of the system, just to install a Jar and a YAML configuration file and restart a service.&lt;/p&gt;

&lt;p&gt;Deploys from CI took too long: Chef ran every 15 minutes, which meant we waited, on average, 7.5 minutes for a deployment to start after a new version had been published. Developers want to see if their code works ASAP.&lt;/p&gt;

&lt;p&gt;Production deployments took a long time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We took an instance out of the load balancer&lt;/li&gt;
&lt;li&gt;Ran Chef&lt;/li&gt;
&lt;li&gt;Added the instance back in the load balancer&lt;/li&gt;
&lt;li&gt;Waited for a developer to check the logs&lt;/li&gt;
&lt;li&gt;Repeated steps 1–3 for the remaining instances (or rollback)&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;Safer Deployments&lt;/h4&gt;

&lt;p&gt;Chef-managed servers are mutable — they change over time. We were never 100% certain that we could build a new instance from scratch to the current state, which can lead to problems when machines (inevitably) die or when we need to scale up new instances.&lt;/p&gt;

&lt;p&gt;Production deployments were run from developers’ machines, which meant there were any number of factors that could cause problems for deployments, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Did the developer have all the pre-requisites installed?&lt;/li&gt;
&lt;li&gt;Did they have the correct AWS credentials?&lt;/li&gt;
&lt;li&gt;Was their network connection stable? What happened if it dropped out part way through a deployment?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We relied heavily on developers checking logs and metrics to confirm that the new version we were deploying was working as expected. Much of this could be automated.&lt;/p&gt;

&lt;h3&gt;The New System&lt;/h3&gt;

&lt;h4&gt;Targets&lt;/h4&gt;

&lt;p&gt;We set our requirements for our deployment system as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deployments should happen in seconds or minutes (not tens of minutes or longer)&lt;/li&gt;
&lt;li&gt;Deployments should not be run from a developer’s machine&lt;/li&gt;
&lt;li&gt;We should deploy immutable instances&lt;/li&gt;
&lt;li&gt;Automate as many post-deployment checks as possible&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;Phase 1&lt;/h4&gt;

&lt;p&gt;As we wanted immutable instances it was clear that we needed to bake application code into AMIs (Amazon Machine Images). We’d then need to orchestrate the deployment of these AMIs. We decided to use &lt;a href="https://www.spinnaker.io/" rel="noopener noreferrer"&gt;Spinnaker&lt;/a&gt; to do this. Spinnaker is “an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence”.&lt;/p&gt;

&lt;p&gt;Essentially it allows you to define a pipeline defining how you want a deployment to happen.&lt;/p&gt;

&lt;p&gt;A minimal pipeline consists of a bake stage and a deployment stage. A deployment stage can use one of a number of built-in &lt;a href="https://www.spinnaker.io/concepts/#deployment-strategies" rel="noopener noreferrer"&gt;deployment strategies&lt;/a&gt; (for example Red/Black, Canary or Highlander), or you can define your own. You’re also free to add other steps, such as optional rollback steps or steps that perform checks.&lt;/p&gt;

&lt;p&gt;By using Spinnaker it is clear we are going to be able to meet our goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployments of Baked AMIs should be fast using Spinnaker&lt;/li&gt;
&lt;li&gt;Safe as we can promote an AMI through environments&lt;/li&gt;
&lt;li&gt;Our instances are immutable&lt;/li&gt;
&lt;li&gt;We’re also not running these deployments from our own machine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But we haven’t done much in terms of automating deployment checks.&lt;/p&gt;

&lt;p&gt;We quickly found that whilst the Spinnaker UI was perfect for our DevOps team, it showed too much unnecessary information for many users (members of development teams who are just interested in deploying code, not the details of how the deployments work).&lt;/p&gt;

&lt;p&gt;We decided to build a system to wrap Spinnaker (and any future deployment types) in order to abstract away some of the complexities of tools like Spinnaker and also to overlay some of the terminology used by development teams.&lt;/p&gt;

&lt;p&gt;We chose to build an API, based on Ruby on Rails, which stores information about our environments and services and lets us use background workers to track the state of deployments.&lt;/p&gt;

&lt;p&gt;In front of the API we’ve built a React application, which acts as the primary user interface to the API.&lt;/p&gt;

&lt;h4&gt;Pause and Reflect&lt;/h4&gt;

&lt;p&gt;We now had a working deployment system, but before we continued we took time to pause and review our progress with our users. People had been used to our old deployment system and, as with any change, it can be difficult to adjust. Many things that seem intuitive when developing a system turn out not to be for users.&lt;/p&gt;

&lt;p&gt;From this process, we came up with a few things we wanted to change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The time to the first deployment (after a CI build of our application) was still too high. Baking an AMI can take several minutes, meaning we were only slightly faster than a Chef-deployed instance.&lt;/li&gt;
&lt;li&gt;Our Web interface required some work to become more intuitive and better display information to users.&lt;/li&gt;
&lt;li&gt;Integration between tools could be better. For instance, Jenkins jobs were starting Spinnaker deployments directly, skipping our own tooling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This feedback has led to a number of improvements.&lt;/p&gt;

&lt;h3&gt;Phase 2&lt;/h3&gt;

&lt;h4&gt;‘Trigger’ Deploy&lt;/h4&gt;

&lt;p&gt;To speed up the first deployment times we’ve had to make some compromises. We realised that we were never going to make AMI baking fast enough, so instead we’ve broken one of our initial requirements — immutable instances — in the interest of speed.&lt;/p&gt;

&lt;p&gt;We’ve developed a deployment strategy which we have dubbed Trigger Deploy. The full workings of Trigger Deploy could make an entire blog post on their own, so I won’t go into too much detail here.&lt;/p&gt;

&lt;p&gt;Essentially we pass a message to long-lived instances via SNS and SQS to deploy a new version of a package. Each instance listens to the queue for a message that it’s interested in and then acts upon it.&lt;/p&gt;
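
&lt;p&gt;As a rough sketch (the queue URL, message format and &lt;code&gt;deploy&lt;/code&gt; helper are all illustrative), the listener loop on each instance might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A sketch of the per-instance listener; names are illustrative.
package main

import (
   "encoding/json"
   "log"

   "github.com/aws/aws-sdk-go/aws"
   "github.com/aws/aws-sdk-go/aws/session"
   "github.com/aws/aws-sdk-go/service/sqs"
)

type deployMessage struct {
   Service string `json:"service"`
   Version string `json:"version"`
}

func main() {
   svc := sqs.New(session.Must(session.NewSession()))
   queueURL := "https://sqs.eu-west-1.amazonaws.com/123456789012/deploys" // illustrative

   for {
      out, err := svc.ReceiveMessage(&amp;amp;sqs.ReceiveMessageInput{
         QueueUrl:            aws.String(queueURL),
         WaitTimeSeconds:     aws.Int64(20), // long polling
         MaxNumberOfMessages: aws.Int64(1),
      })
      if err != nil {
         log.Println(err)
         continue
      }
      for _, m := range out.Messages {
         var msg deployMessage
         if err := json.Unmarshal([]byte(*m.Body), &amp;amp;msg); err != nil {
            continue
         }
         // Only act on messages meant for the service this instance runs.
         if msg.Service == "my-service" {
            deploy(msg.Version)
         }
         svc.DeleteMessage(&amp;amp;sqs.DeleteMessageInput{
            QueueUrl:      aws.String(queueURL),
            ReceiptHandle: m.ReceiptHandle,
         })
      }
   }
}

// deploy fetches and installs the given package version (illustrative).
func deploy(version string) { log.Println("deploying", version) }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;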

&lt;p&gt;This means that we now deploy in seconds following a successful CI build of code. But we’re a little more prone to errors; in this case, we feel it’s an acceptable compromise.&lt;/p&gt;

&lt;h4&gt;Web Interface&lt;/h4&gt;

&lt;p&gt;We actively solicited feedback, both in face-to-face sessions and via a feedback button embedded in the tool, about how we could improve the UI: for example, where we navigate the user after certain actions, and how we lay out information so that it’s easy to read, balancing providing all the information a user needs against not overloading them.&lt;/p&gt;

&lt;p&gt;We also pulled in additional information, such as the current state of an environment.&lt;/p&gt;

&lt;p&gt;This is an ongoing exercise. Gathering and acting on user feedback through meetings, a feedback sheet and ad-hoc conversations means that we’re always improving the system.&lt;/p&gt;

&lt;h4&gt;Tooling Integration&lt;/h4&gt;

&lt;p&gt;We wanted our deployment API to be the central source of information for all deployments — deploys should be started, controlled by and monitored via the API. One way of doing this was through the React UI. But sometimes this doesn’t cut it — we need to programmatically interact with the API, for instance, if we want Jenkins to start a deployment after a CI build.&lt;/p&gt;

&lt;p&gt;Of course, this could be achieved via the API, but dealing with authentication and setting up each Jenkins job was more difficult than just calling Spinnaker directly (Spinnaker jobs can be set up to trigger from Jenkins builds).&lt;/p&gt;

&lt;p&gt;So we built a CLI. This lets us abstract most of the pain away from Jenkins calling the API directly, and also gives users the option of using the CLI rather than having to use the UI.&lt;/p&gt;

&lt;h4&gt;Terraform&lt;/h4&gt;

&lt;p&gt;We’ve also started using Terraform, rather than CloudFormation, for building out infrastructure. One of the primary benefits is being able to share information between stacks using remote state, rather than having a global configuration file. Other benefits include the use of modules, which give us a better interface than our old CloudFormation templates.&lt;/p&gt;
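
&lt;p&gt;As a sketch (Terraform 0.11-era syntax; the bucket, key and output names are illustrative), consuming another stack’s output via remote state looks something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read the network stack's state from S3 (names are illustrative).
data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "our-terraform-state"
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}

# Use an output exported by the network stack.
resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  subnet_id     = "${data.terraform_remote_state.network.private_subnet_id}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;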

&lt;p&gt;A side effect of using Spinnaker is that our stacks are a little simpler, in that we don’t have to create the Auto Scaling Group, as Spinnaker manages that for us.&lt;/p&gt;

&lt;h3&gt;API&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F797%2F1%2AsGXFPkLnJnQ7eWCO4zRDAA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F797%2F1%2AsGXFPkLnJnQ7eWCO4zRDAA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The API has now become the glue that holds our deployments together. It is the source of truth for all deployment related activities.&lt;/p&gt;

&lt;p&gt;Not only does it orchestrate Spinnaker and ’Trigger’ deployments, it has been built in such a way that we can add new deployment types as we go. We currently have prototypes for deploying AWS Lambda applications using &lt;a href="https://github.com/awslabs/serverless-application-model" rel="noopener noreferrer"&gt;SAM&lt;/a&gt; and for running &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’re also investigating how we can manage the configuration of our services via the API.&lt;/p&gt;

&lt;p&gt;With the API at the centre of our deployment workflow we have been able to leverage it to create a number of tools, such as the React frontend, a Golang CLI and a Lambda based Slack bot.&lt;/p&gt;

&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;The result of our work so far is that deployments are faster: it takes just a few seconds to deploy once a new version of the code is available.&lt;/p&gt;

&lt;p&gt;Deploying to the next environment takes a few minutes, but we have a lot more confidence that the deployment will be successful.&lt;/p&gt;

&lt;p&gt;Prod deployments are no longer run from a user’s machine and are faster than they used to be. We haven’t made as much progress as we’d have liked on automated checking of a deployment’s health; this is something we’ll continue to work on going forward.&lt;/p&gt;

&lt;p&gt;Going through this process has highlighted how important it is to check in with your users at regular intervals to see if you are meeting their requirements.&lt;/p&gt;

&lt;p&gt;This is just the beginning of the journey to better deployments. We’ve laid a solid foundation, which we can build on to manage different deployment types and to continue making deployments faster and safer.&lt;/p&gt;




</description>
      <category>devops</category>
      <category>rails</category>
      <category>infrastructure</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Which tools are you using for CI and why?</title>
      <dc:creator>Nathan Mclean</dc:creator>
      <pubDate>Mon, 24 Sep 2018 21:59:34 +0000</pubDate>
      <link>https://dev.to/nathmclean/which-tools-are-you-using-for-ci-and-why-350b</link>
      <guid>https://dev.to/nathmclean/which-tools-are-you-using-for-ci-and-why-350b</guid>
      <description>&lt;p&gt;At work we use Jenkins to manage our builds; this includes builds, tests and packaging for Java, Go, Ruby, iOS and Android, as well as various ad-hoc jobs.&lt;/p&gt;

&lt;p&gt;My colleague pointed out that Jenkins is probably the only tool we try to use as one-size-fits-all; everywhere else we have greater freedom to choose the right tool for the job.&lt;/p&gt;

&lt;p&gt;I'm interested in which CI tools people are using and why you're using them. Are you using specific build tools for certain technologies or one for all?&lt;/p&gt;

&lt;p&gt;Are you using pipelines as code? How do you control which versions of plugins you use (i.e. how can you give users the freedom to complete their tasks without having to raise a ticket for another team to add/upgrade a plugin)?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>devops</category>
      <category>ci</category>
    </item>
    <item>
      <title>Testing with Dynamo Local and Go</title>
      <dc:creator>Nathan Mclean</dc:creator>
      <pubDate>Fri, 10 Aug 2018 19:49:02 +0000</pubDate>
      <link>https://dev.to/nathmclean/testing-with-dynamo-local-and-go-4d1l</link>
      <guid>https://dev.to/nathmclean/testing-with-dynamo-local-and-go-4d1l</guid>
      <description>&lt;p&gt;I’ve recently done some work with Go and DynamoDB and needed to test my work. Luckily Amazon provides &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html"&gt;DynamoDB local&lt;/a&gt;, which means that I don’t need to provision any real infrastructure in AWS and can run my tests offline. This post will walk through a simple example of interacting with DynamoDB with Go and how to test this code with Dynamo Local. All of the code for this example can be found in &lt;a href="https://github.com/nathmclean/dynamodb-local-testing"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I use the &lt;a href="http://github.com/guregu/dynamo"&gt;guregu/dynamo&lt;/a&gt; library to interact with Dynamo as I find it provides a nice abstraction to the DynamoDB API.&lt;/p&gt;

&lt;p&gt;To start with I create a struct representing the Items I wish to store in Dynamo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// item/item.go  
type Item struct {
   Id          string    `dynamo:"item_id,hash"`
   Name        string    `dynamo:"name"`
   Description string    `dynamo:"description"`
   CreatedAt   time.Time `dynamo:"created_at"`
   UpdatedAt   time.Time `dynamo:"updated_at"`
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the tags on the struct fields; these provide some additional information to the &lt;a href="http://github.com/guregu/dynamo"&gt;guregu/dynamo&lt;/a&gt; library on how to marshal and unmarshal data to and from Dynamo. For instance, the ‘Id’ field will be shown in DynamoDB as ‘item_id’ and it will be a hash (primary key). The hash is only required here because we want to create Dynamo tables from this type, as we will later.&lt;/p&gt;

&lt;p&gt;Next, I’ll create an ItemService which will hold the DynamoDB client and for which we can write methods to interact with Dynamo, such as adding items to the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// item/item.go  
type ItemService struct {
   itemTable dynamo.Table
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next come a couple of functions: one that creates a new dynamo.Table, which is a client for communicating with Dynamo, and one that creates a new ItemService.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// item/item.go  
func newDynamoTable(tableName, endpoint string) (dynamo.Table, error) {
   if tableName == "" {
      return dynamo.Table{}, fmt.Errorf("you must supply a table name")
   }
   cfg := aws.Config{}
   cfg.Region = aws.String("eu-west-2")
   if endpoint != "" {
      cfg.Endpoint = aws.String(endpoint)
   }
   sess := session.Must(session.NewSession())
   db := dynamo.New(sess, &amp;amp;cfg)
   table := db.Table(tableName)
   return table, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a few things going on here. First we’re checking that we have been provided with a tableName (the name of the table we’re connecting to). We make this function private, so that we’re in control of when it’s used — we only need to create a dynamo client when we’re setting up our ItemService, or testing.&lt;/p&gt;

&lt;p&gt;Next we’re setting up an AWS session with some configuration. In a real-world use case we’d also pass in the AWS region, rather than hard-coding it.&lt;/p&gt;

&lt;p&gt;If we’ve been provided with an endpoint we’ll point the client at it, rather than letting the client use its default endpoints. In normal usage we won’t supply an endpoint, but for testing it allows us to point at Dynamo Local.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// item/item.go  
func NewItemService(itemTableName string) (*ItemService, error) {
   dynamoTable, err := newDynamoTable(itemTableName, "")
   if err != nil {
       return nil, err
   }
   return &amp;amp;ItemService{
      itemTable: dynamoTable,
   }, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function takes the name of our item table, sets up a client using the ‘newDynamoTable’ function we discussed above and finally returns a new ItemService holding the dynamo client. Notice that we always send an empty string as the endpoint; this means we can’t accidentally send an invalid endpoint in production, but it does mean we can’t use this function in our tests.&lt;/p&gt;

&lt;p&gt;Now we need some methods to interact with DynamoDB to read and write Items. These methods will be associated with the ItemService so that they have access to the Dynamo client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// item/item.go  
func (i *ItemService) CreateItem(item *Item) error {
   now := time.Now()
   item.CreatedAt = now
   item.UpdatedAt = now
   item.Id = xid.New().String()
   return i.itemTable.Put(item).Run()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we add a new Item to the database. We set the created and updated times to the current time, generate an Id for the item and then use the Put method to write to the Dynamo table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// item/item.go  
func (i *ItemService) GetItem(item *Item) error {
    return i.itemTable.Get("item_id", item.Id).One(item)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an example of reading from DynamoDB.&lt;/p&gt;

&lt;p&gt;Note that these are basic examples and don’t have any validation of the Items we’re operating on or any detailed error handling. For instance, we would probably want to check that the Item we are creating has a Name and Description before we actually try to create it. There may also be errors we can handle. For instance, if we were throttled by Dynamo, we could retry with an incremental backoff (waiting for a longer period of time between each retry).&lt;/p&gt;

&lt;p&gt;Next, it’s time to test the code. In each test I create a new, randomly named, Dynamo table that each test can run against, meaning that two tests won’t clash if they are run in parallel.&lt;/p&gt;

&lt;p&gt;I created a &lt;a href="https://github.com/nathmclean/dynamodb-local-testing/blob/master/test_utils/dynamo-local.go"&gt;test_utils package&lt;/a&gt; that creates a new table, using an interface as a schema.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// test_utils/dynamo-local.go
func CreateTable(table interface{}) (string, error) {
   cfg := aws.Config{
      Endpoint: aws.String("http://localhost:9000"),
      Region: aws.String("eu-west-2"),
   }
   sess := session.Must(session.NewSession())
   db := dynamo.New(sess, &amp;amp;cfg)
   tableName := xid.New().String()
   err := db.CreateTable(tableName, table).Run()
   if err != nil {
      return"", err
   }
   return tableName, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ‘table interface{}’ that the function accepts will, in our case, be the ‘Item’ type we created at the start of this post. By taking an interface we can reuse this function to create any table. Of course, the function will fail if you pass an interface that cannot be made to represent a Dynamo table, for instance a struct that does not have a tag defining the hash key (e.g. &lt;code&gt;dynamo:"item_id,hash"&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type OtherItem struct {  
   Id string `dynamo:"item_id,hash"` 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The above would be sufficient to create a table that we can use to test the Item type. The other fields will be added to the table when we create Items in the database. As long as the keys match there isn’t a problem.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now in each test of ItemService methods, we can use this function to set up a table for us. To save some repetition I’ve created a ‘newItemService’ function in item_test.go that will call ‘CreateTable’ and set up an ItemService configured to use that table and to use the Dynamo Local endpoint (we’ll set up Dynamo Local later).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// item/item_test.go  
func newItemService() (*ItemService, error) {
   tableName, err := test_utils.CreateTable(Item{})
   if err != nil {
      return nil, fmt.Errorf("failed to set up table. %s", err)
   }

   db, err := newDynamoTable(tableName, "http://localhost:9000")
   if err != nil {
      return nil, err
   }
   service := ItemService{
      itemTable: db,
   }
   return &amp;amp;service, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tests for CreateItem and GetItem follow the same pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Setup a slice of test conditions (table tests)&lt;/li&gt;
&lt;li&gt;Setup the ItemService using ‘newItemService’&lt;/li&gt;
&lt;li&gt;Run through each test condition and evaluate if it passes or fails.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// item/item\_test.go  
func TestItemService\_CreateItem(t *testing.T) {
   cases := [] struct {
      name string
      item Item
      err bool // Whether we expect an error back or not
   }{
      {
         name: "created successfully",
         item: &amp;amp;Item{
            Name: "spoon",
            Description: "shiny",
         },
      },
   }

   service, err := newItemService()
   if err != nil {
      t.Fatal(err)
   }

   for _, c := range cases {
      t.Run(c.name, func(t *testing.T) {
         err := service.CreateItem(c.item)
         if c.err {
            assert.Error(t, err)
         } else {
            assert.NoError(t, err)
            assert.NotEqual(t, time.Time{}, c.item.CreatedAt)
            assert.NotEqual(t, time.Time{}, c.item.UpdatedAt)
         }
      })
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We create a slice of an anonymous struct, to which we add a name (or short description of the test), the item we want to test and whether we expect it to succeed. You could also add the error you expect to receive, if you want to test specific error conditions.&lt;/p&gt;

&lt;p&gt;We then create our ItemService and fail the test (t.Fatal) if any part of this process fails.&lt;/p&gt;

&lt;p&gt;Next, we iterate through each test case, with each case being a subtest (t.Run). We use the name we defined for each test case as the name parameter to t.Run, which helps us identify which test failed and what we were trying to test with that test case.&lt;/p&gt;

&lt;p&gt;For each case, we use &lt;a href="https://github.com/stretchr/testify"&gt;assert&lt;/a&gt; to check for errors. If we expected an error we assert that we received one, and vice versa if we don’t expect an error. If we didn’t expect an error we also check that the item has its times set correctly (remember we set these in the CreateItem method).&lt;/p&gt;

&lt;p&gt;Now we have tests that will set up tables in Dynamo Local and test our code using those tables, but we don’t have an instance of Dynamo Local to run against… We’ll use Docker to set this up.&lt;/p&gt;

&lt;p&gt;First we’ll set up a ‘Dockerfile’&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM openjdk:7
RUN mkdir -p /opt/dynamodb
WORKDIR /opt/dynamodb
RUN wget https://s3.eu-central-1.amazonaws.com/dynamodb-local-frankfurt/dynamodb_local_latest.tar.gz -q -O - | tar -xz
EXPOSE 8000
ENTRYPOINT ["java", "-jar", "DynamoDBLocal.jar"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This builds on top of the openjdk image, sets up a directory for Dynamo Local, downloads and extracts the jar, opens a port for it to listen on and then sets the entrypoint to ensure that Dynamo Local starts when the container does.&lt;/p&gt;

&lt;p&gt;Next, we’ll set up a docker-compose.yml which will allow us to use docker-compose to build, start and stop our container for us.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dynamo:
  build: .
  ports:
    - 9000:8000
  command:
    -sharedDb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This builds from the Dockerfile we created above in the same directory and maps our local port 9000 to the container’s port 8000 (you may have noticed that we used localhost:9000 as the endpoint for DynamoDB in our code).&lt;/p&gt;

&lt;p&gt;Finally, it sends the -sharedDb flag to Dynamo Local when we start the container. If we don’t use this flag then each request will use a different table, which means we can’t do things like reading the Item we just created to check if the create worked.&lt;/p&gt;

&lt;p&gt;Next up is to create a Makefile, which will use docker-compose to set up Dynamo Local, run the tests and then use docker-compose to stop Dynamo Local.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TEST?=$$(go list ./... |grep -v 'vendor')
GOFMT_FILES?=$$(find . -name '*.go' |grep -v vendor)

default: test

fmt:
   gofmt -w $(GOFMT_FILES)

test: fmt
   docker-compose down
   docker-compose up -d --build --force-recreate
   go test -i $(TEST) || exit 1
   echo $(TEST) | \
      xargs -t -n4 go test -v
   docker-compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I can just run &lt;code&gt;make&lt;/code&gt; and my tests will run, hopefully successfully.&lt;/p&gt;

</description>
      <category>dynamo</category>
      <category>go</category>
      <category>dynamodb</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
