<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rene Hernandez</title>
    <description>The latest articles on DEV Community by Rene Hernandez (@renehernandez).</description>
    <link>https://dev.to/renehernandez</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F294551%2F79c34dda-142a-42f1-8985-6c1669bc24b3.jpeg</url>
      <title>DEV Community: Rene Hernandez</title>
      <link>https://dev.to/renehernandez</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/renehernandez"/>
    <language>en</language>
    <item>
      <title>Introducing appfile: a declarative way of managing apps in DigitalOcean App Platform</title>
      <dc:creator>Rene Hernandez</dc:creator>
      <pubDate>Wed, 25 Nov 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/renehernandez/introducing-appfile-a-declarative-way-of-managing-apps-in-digitalocean-app-platform-2ndn</link>
      <guid>https://dev.to/renehernandez/introducing-appfile-a-declarative-way-of-managing-apps-in-digitalocean-app-platform-2ndn</guid>
      <description>&lt;p&gt;I have been experimenting with DigitalOcean App Platform for a while and I like how it helps me focus on defining only what I need to run my apps. Using the &lt;code&gt;app.yaml&lt;/code&gt; spec, I can declare the app components and store it within the project codebase. Soon though, I started to run into the problem of how to manage different environments for the same application (e.g. &lt;em&gt;review&lt;/em&gt;, &lt;em&gt;staging&lt;/em&gt; and &lt;em&gt;production&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;After unsuccessfully searching online for anything that would fit my use case, I figured I would solve the problem myself. I wanted a tool that would allow me to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declare the different environments for a given App specification&lt;/li&gt;
&lt;li&gt;Have diff capabilities&lt;/li&gt;
&lt;li&gt;Deploy multiple apps at once&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After a couple of days of tinkering, I had the first version of &lt;code&gt;appfile&lt;/code&gt; up and running. If you want to go straight to the code, check the repo at &lt;a href="https://github.com/renehernandez/appfile"&gt;renehernandez/appfile&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ready? Ok, let's discuss what &lt;code&gt;appfile&lt;/code&gt; is all about.&lt;/p&gt;

&lt;h2&gt;Features&lt;/h2&gt;

&lt;p&gt;The main capabilities I set out to build, all implemented as of the current version (&lt;code&gt;v0.0.2&lt;/code&gt;), are outlined below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declare the different environments for a given App specification&lt;/li&gt;
&lt;li&gt;Support templates to customize the final app specification based on the selected environment&lt;/li&gt;
&lt;li&gt;Have diff capabilities&lt;/li&gt;
&lt;li&gt;Deploy multiple apps at once&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;CLI&lt;/h3&gt;

&lt;p&gt;The full CLI help can be displayed by typing either of the following in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;appfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;appfile &lt;span class="nt"&gt;--help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Either command outputs help information like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;appfile

Deploy app platform specifications to DigitalOcean

Usage: 
  appfile &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;

Available Commands: 
  destroy Destroy apps running &lt;span class="k"&gt;in &lt;/span&gt;DigitalOcean 
  diff Diff &lt;span class="nb"&gt;local &lt;/span&gt;app spec against app spec running &lt;span class="k"&gt;in &lt;/span&gt;DigitalOcean 
  &lt;span class="nb"&gt;help &lt;/span&gt;Help about any &lt;span class="nb"&gt;command 
  sync &lt;/span&gt;Sync all resources from app platform specs to DigitalOcean

Flags: 
  &lt;span class="nt"&gt;-t&lt;/span&gt;, &lt;span class="nt"&gt;--access-token&lt;/span&gt; string API V2 access token
  &lt;span class="nt"&gt;-e&lt;/span&gt;, &lt;span class="nt"&gt;--environment&lt;/span&gt; string root all resources from spec file &lt;span class="o"&gt;(&lt;/span&gt;default &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nt"&gt;-f&lt;/span&gt;, &lt;span class="nt"&gt;--file&lt;/span&gt; string load appfile spec from file &lt;span class="o"&gt;(&lt;/span&gt;default &lt;span class="s2"&gt;"appfile.yaml"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt; &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;appfile
  &lt;span class="nt"&gt;--log-level&lt;/span&gt; string Set log level &lt;span class="o"&gt;(&lt;/span&gt;default &lt;span class="s2"&gt;"info"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt;, &lt;span class="nt"&gt;--version&lt;/span&gt; version &lt;span class="k"&gt;for &lt;/span&gt;appfile

Use &lt;span class="s2"&gt;"appfile [command] --help"&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;more information about a command.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The available sub-commands are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;appfile sync&lt;/code&gt;: Sync all resources from app platform specs to DigitalOcean&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;appfile diff&lt;/code&gt;: Diff local app spec against app spec running in DigitalOcean&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;appfile destroy&lt;/code&gt;: Destroy apps running in DigitalOcean&lt;/li&gt;
&lt;/ul&gt;
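&lt;p&gt;A typical flow, using only the flags documented in the help output above (the file path and token variable are placeholders), is to diff against what is running before syncing:&lt;/p&gt;

```shell
# preview the changes for the review environment (token/path are placeholders)
$ appfile diff --file appfile.yaml --environment review -t $DO_ACCESS_TOKEN

# apply them once the diff looks right
$ appfile sync --file appfile.yaml --environment review -t $DO_ACCESS_TOKEN
```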

&lt;h3&gt;GitHub Action&lt;/h3&gt;

&lt;p&gt;There is also a GitHub Action that you can use to automate the deployment of Apps to DigitalOcean with &lt;code&gt;appfile&lt;/code&gt;. Check the &lt;code&gt;action-appfile&lt;/code&gt; Action at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Marketplace: &lt;a href="https://github.com/marketplace/actions/github-action-for-appfile-cli"&gt;https://github.com/marketplace/actions/github-action-for-appfile-cli&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Repository URL: &lt;a href="https://github.com/renehernandez/action-appfile"&gt;https://github.com/renehernandez/action-appfile&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
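&lt;p&gt;As a rough sketch, wiring the Action into a workflow could look like the following. The input names here are assumptions for illustration, so check the &lt;code&gt;action-appfile&lt;/code&gt; README for the actual interface:&lt;/p&gt;

```yaml
# .github/workflows/deploy.yml (sketch; input names are illustrative)
name: Deploy to DigitalOcean
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Sync appfile spec
        uses: renehernandez/action-appfile@v0.0.2  # pin to a released version
        with:
          # hypothetical inputs; see the action repository for the real ones
          environment: production
        env:
          DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
          IMAGE_TAG: ${{ github.sha }}
```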

&lt;h2&gt;Installation&lt;/h2&gt;

&lt;p&gt;Currently, you need to install &lt;code&gt;appfile&lt;/code&gt; by downloading the binary for your platform of choice from the &lt;a href="https://github.com/renehernandez/appfile/releases/latest"&gt;latest GitHub release&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For Mac:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://github.com/renehernandez/appfile/releases/latest/download/appfile_darwin_amd64
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x appfile_darwin_amd64
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mv&lt;/span&gt; ./appfile_darwin_amd64 /usr/local/bin/appfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://github.com/renehernandez/appfile/releases/latest/download/appfile_linux_amd64
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x appfile_darwin_amd64
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mv&lt;/span&gt; ./appfile_darwin_amd64 /usr/local/bin/appfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Windows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Invoke-WebRequest &lt;span class="nt"&gt;-Uri&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/renehernandez/appfile/releases/latest/download/appfile_windows_amd64.exe"&lt;/span&gt; &lt;span class="nt"&gt;-OutFile&lt;/span&gt; appfile.exe
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$env&lt;/span&gt;:Path +&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./appfile.exe"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Usage&lt;/h2&gt;

&lt;p&gt;Let's look at the following example to see the power of &lt;code&gt;appfile&lt;/code&gt;. We want to deploy a Rails application to the DigitalOcean App Platform. This Rails app has different components depending on whether we are deploying to production or to a review environment.&lt;/p&gt;

&lt;p&gt;To start, we need to define our &lt;code&gt;appfile.yaml&lt;/code&gt; spec:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# appfile.yaml&lt;/span&gt;
&lt;span class="na"&gt;environments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;review&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./envs/review.yaml&lt;/span&gt;
  &lt;span class="na"&gt;production&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./envs/production.yaml&lt;/span&gt;

&lt;span class="na"&gt;specs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./app.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above spec declares that our App has 2 environments to pull values from: &lt;strong&gt;review&lt;/strong&gt; and &lt;strong&gt;production&lt;/strong&gt;, defined in the &lt;code&gt;./envs/review.yaml&lt;/code&gt; and &lt;code&gt;./envs/production.yaml&lt;/code&gt; files respectively. It also defines that the App spec is located at &lt;code&gt;./app.yaml&lt;/code&gt;.&lt;/p&gt;
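&lt;p&gt;Since &lt;code&gt;specs&lt;/code&gt; is a list, a single appfile can also manage several apps at once, which is how the "deploy multiple apps" feature works. A hypothetical sketch (the second spec path is made up for illustration):&lt;/p&gt;

```yaml
# appfile.yaml managing two apps at once (paths are illustrative)
environments:
  review:
  - ./envs/review.yaml
  production:
  - ./envs/production.yaml

specs:
- ./app.yaml        # the Rails app from this example
- ./docs-site.yaml  # a second, hypothetical app spec
```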

&lt;p&gt;Let's take a look at the &lt;code&gt;app.yaml&lt;/code&gt; definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app.yaml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rails-app&lt;/span&gt; 
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;registry_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOCR&lt;/span&gt; 
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;repo_name&amp;gt;&lt;/span&gt; 
    &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;requiredEnv "IMAGE_TAG"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;instance_size_slug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.rails.instance_slug&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;instance_count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.rails.instance_count&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- range $key&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$value&lt;/span&gt; &lt;span class="pi"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;= .Values.rails.envs&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;$key&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;$value&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt; 

&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if eq .Environment.Name "review"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;registry_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOCR&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;12.4'&lt;/span&gt;
  &lt;span class="na"&gt;internal_ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- range $key&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$value&lt;/span&gt; &lt;span class="pi"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;= .Values.postgres.envs&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;$key&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;$value&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;migrations&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;registry_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOCR&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;repo_name&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;requiredEnv "IMAGE_TAG"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- range $key&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$value&lt;/span&gt; &lt;span class="pi"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;= .Values.migrations.envs&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;$key&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;$value&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;

&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if eq .Environment.Name "production"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;databases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
  &lt;span class="na"&gt;production&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;cluster_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mydatabase&lt;/span&gt;
  &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PG&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;12"&lt;/span&gt;
&lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the &lt;code&gt;app.yaml&lt;/code&gt; leverages templates to abstract the values that can change (e.g. &lt;code&gt;tag: {{ requiredEnv "IMAGE_TAG" }}&lt;/code&gt;), as well as to determine which components need to be deployed based on the environment (e.g. the use of a &lt;code&gt;postgres&lt;/code&gt; container in review environments vs. a managed database in production).&lt;/p&gt;

&lt;p&gt;Next, we define the values for each of the environments that are going to be merged with the &lt;code&gt;app.yaml&lt;/code&gt; to produce the final app specification. First, the values definition for the &lt;strong&gt;review&lt;/strong&gt; environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# review.yaml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-{{ requiredEnv "REVIEW_HOSTNAME" }}&lt;/span&gt;

&lt;span class="s"&gt;.common_envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;common_envs&lt;/span&gt;
  &lt;span class="na"&gt;DB_USERNAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;DB_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
  &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;

&lt;span class="na"&gt;rails&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;instance_slug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;basic-xxs&lt;/span&gt;
  &lt;span class="na"&gt;instance_count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*common_envs&lt;/span&gt;

&lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mydatabase&lt;/span&gt;
    &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;

&lt;span class="na"&gt;migrations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*common_envs&lt;/span&gt;&lt;span class="err"&gt;`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And second, the values definition for the &lt;strong&gt;production&lt;/strong&gt; environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# production.yaml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-production&lt;/span&gt;

&lt;span class="s"&gt;.common_envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;common_envs&lt;/span&gt;
  &lt;span class="na"&gt;DB_USERNAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;DB_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;strong_password&lt;/span&gt;
  &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;

&lt;span class="na"&gt;rails&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;instance_slug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;professional-xs&lt;/span&gt;
  &lt;span class="na"&gt;instance_count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*common_envs&lt;/span&gt;

&lt;span class="na"&gt;migrations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*common_envs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With all the required files in place, we can now deploy our app to DigitalOcean.&lt;/p&gt;

&lt;p&gt;As a review environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ IMAGE_TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'fad7869fdaldabh23'&lt;/span&gt; &lt;span class="nv"&gt;REVIEW_HOSTNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'fix-bug'&lt;/span&gt; appfile &lt;span class="nb"&gt;sync&lt;/span&gt; &lt;span class="nt"&gt;--file&lt;/span&gt; /path/to/appfile.yaml &lt;span class="nt"&gt;--environment&lt;/span&gt; review
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would deploy a public Rails service and an internal Postgres service (the database running in a container), and would run the migrations job. The final App spec to be synced to DigitalOcean would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# final app specification with review environment values&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-fix-bug&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rails-app&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;registry_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOCR&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;app-repo&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fad7869fdaldabh23&lt;/span&gt;
  &lt;span class="na"&gt;instance_size_slug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;basic-xxs&lt;/span&gt;
  &lt;span class="na"&gt;instance_count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_PASSWORD&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_USERNAME&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RAILS_ENV&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;registry_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOCR&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;12.4'&lt;/span&gt;
  &lt;span class="na"&gt;internal_ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_DB&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mydatabase&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_PASSWORD&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_USER&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;migrations&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;registry_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOCR&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;migration-repo&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fad7869fdaldabh23&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_PASSWORD&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_USERNAME&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RAILS_ENV&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As a production deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ IMAGE_TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'fad7869fdaldabh23'&lt;/span&gt; appfile &lt;span class="nb"&gt;sync&lt;/span&gt; &lt;span class="nt"&gt;--file&lt;/span&gt; /path/to/appfile.yaml &lt;span class="nt"&gt;--environment&lt;/span&gt; production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would deploy a public Rails service and run the migrations job; both components would connect to an existing managed database. The final App spec to be synced to DigitalOcean would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# final app specification with production environment values&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-production&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rails-app&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;registry_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOCR&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;app-repo&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fad7869fdaldabh23&lt;/span&gt;
  &lt;span class="na"&gt;instance_size_slug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;professional-xs&lt;/span&gt;
  &lt;span class="na"&gt;instance_count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_PASSWORD&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;strong_password&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_USERNAME&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RAILS_ENV&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;migrations&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;registry_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOCR&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;migration-repo&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fad7869fdaldabh23&lt;/span&gt;
  &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_PASSWORD&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;strong_password&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_USERNAME&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RAILS_ENV&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;

&lt;span class="na"&gt;databases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
  &lt;span class="na"&gt;production&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;cluster_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mydb&lt;/span&gt;
  &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PG&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;12"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
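&lt;p&gt;The environment-specific behavior shown above can be sketched as a deep merge of the base spec with the selected environment's values, plus environment-variable interpolation for placeholders like &lt;code&gt;IMAGE_TAG&lt;/code&gt;. The following Python snippet is only an illustration of the idea (the function names, placeholder syntax and spec shapes are hypothetical, not appfile's actual implementation):&lt;/p&gt;

```python
import os
import re

def deep_merge(base, override):
    """Recursively merge override values on top of a base mapping."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def interpolate(value, env=os.environ):
    """Replace {{ VAR }} placeholders with environment variable values."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: env.get(m.group(1), ""), value)

# Hypothetical base spec and production overrides
base_spec = {"name": "sample", "services": {"instance_count": 1}}
production = {"name": "sample-production", "services": {"instance_count": 3}}

final_spec = deep_merge(base_spec, production)
tag = interpolate("{{ IMAGE_TAG }}", env={"IMAGE_TAG": "fad7869fdaldabh23"})
print(final_spec["services"]["instance_count"])  # 3
print(tag)
```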



&lt;h2&gt;
  
  
  Future steps #
&lt;/h2&gt;

&lt;p&gt;There are several areas where the tool could move forward in the future:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide packages for &lt;code&gt;homebrew&lt;/code&gt; and &lt;code&gt;chocolatey&lt;/code&gt; to ease the installation process on macOS and Windows, respectively.&lt;/li&gt;
&lt;li&gt;Provide a &lt;code&gt;lint&lt;/code&gt; command that would allow validating the final spec without connecting to the DigitalOcean API. Usage would be: &lt;code&gt;appfile lint -f &amp;lt;appfile.yaml&amp;gt; -e &amp;lt;env_name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Load App specs from a remote URL. That would be a first step towards reusable Apps in DigitalOcean, with access to pre-defined, customizable Apps.&lt;/li&gt;
&lt;li&gt;Support secrets encryption through an integration with &lt;a href="https://github.com/mozilla/sops"&gt;sops&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion #
&lt;/h2&gt;

&lt;p&gt;Let's quickly recap the post. First, I talked about the DigitalOcean App Platform and the obstacle of customizing the App specification to suit different environments' requirements, which resulted in the creation of &lt;a href="https://github.com/renehernandez/appfile"&gt;appfile&lt;/a&gt;. Next, I provided an overview of the tool: how to install it, its main features, and how to use it. Finally, I mentioned some ideas for how &lt;code&gt;appfile&lt;/code&gt; could evolve in the future.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;appfile&lt;/code&gt; has been a very interesting project to work on for the past few days. I have learned a lot about the DigitalOcean API and the App Platform in particular. I see the value that it brings to developers and some of the directions where it could go in the future are pretty interesting.&lt;/p&gt;

&lt;p&gt;To conclude, thank you so much for reading this post. Hope you enjoyed reading it as much as I did writing it. See you soon and stay tuned for more!!&lt;/p&gt;

</description>
      <category>appfile</category>
      <category>digitalocean</category>
      <category>apps</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Terraforming AWS VPC</title>
      <dc:creator>Rene Hernandez</dc:creator>
      <pubDate>Mon, 19 Oct 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/renehernandez/terraforming-aws-vpc-3im1</link>
      <guid>https://dev.to/renehernandez/terraforming-aws-vpc-3im1</guid>
      <description>&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html"&gt;AWS VPC&lt;/a&gt; stands for Virtual Private Cloud and it represents the networking layer for the AWS EC2 services (computing). In this post, we are going to cover how to automate the configuration of AWS VPC using &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;. If you are not familiar with Terraform, you can check my introductory post &lt;a href="https://bitsofknowledge.net/2020/09/30/first-steps-with-terraform-in-aws/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to go straight to the code, you can check it out at &lt;a href="https://github.com/renehernandez/aws-terraform-examples/tree/master/03-terraforming-aws-vpc"&gt;aws-terraform-examples/03-terraforming-aws-vpc&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Already back? Great! Let's dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key concepts #
&lt;/h2&gt;

&lt;p&gt;Before getting into the implementation, let's briefly mention some of the essential concepts around networking and VPC.&lt;/p&gt;

&lt;h4&gt;
  
  
  Availability Zones (AZ) and Regions #
&lt;/h4&gt;

&lt;p&gt;AWS has two main concepts to refer to the physical locations of their data centers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/"&gt;Availability Zone (AZ)&lt;/a&gt;: It represents one or more discrete data centers with redundant power, networking and connectivity in a particular AWS Region. All AZs are connected with high-bandwidth, low-latency networking with fully redundant and dedicated fiber communications, and traffic is encrypted.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/"&gt;Region&lt;/a&gt;: Consists of multiple, isolated and physically separate AZ's within a geographic area.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Virtual Private Cloud (VPC) #
&lt;/h4&gt;

&lt;p&gt;There are several key components in a VPC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html"&gt;Subnet&lt;/a&gt;: A range of IP addresses. Can be either public or private.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html"&gt;Route table&lt;/a&gt;: A set of rules that are used to determine where network traffic is directed. Each subnet is associated with a route table.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html"&gt;Internet Gateway&lt;/a&gt;: Allows enabling communication between resources in your VPC and the internet and it serves two main purposes:

&lt;ul&gt;
&lt;li&gt;Provides a target in your VPC route tables for internet-routable traffic.&lt;/li&gt;
&lt;li&gt;Performs network address translation (NAT) for instances that have been assigned public IPv4 addresses.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html"&gt;DNS in VPC&lt;/a&gt;: AWS provides an AWS Route53 Resolver to act as a DNS server for a given VPC. The key considerations are:

&lt;ul&gt;
&lt;li&gt;Public and private DNS hostnames are provided for corresponding IPv4 addresses for each instance.&lt;/li&gt;
&lt;li&gt;Managing DNS in the VPC is done through the attributes &lt;code&gt;enableDNSHostnames&lt;/code&gt; (indicates if instances with public IP addresses get corresponding DNS hostnames) and &lt;code&gt;enableDNSSupport&lt;/code&gt; (indicates whether DNS resolution is supported).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html"&gt;NAT&lt;/a&gt;: A Network Translation Gateway (NAT) allows instances in a private subnet to connect to the internet while preventing external agents from initiating a request to the instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Terraform module #
&lt;/h4&gt;

&lt;p&gt;In this post, we are going to use a Terraform module to facilitate the VPC declaration. According to the &lt;a href="https://www.terraform.io/docs/modules/index.html"&gt;docs&lt;/a&gt;, a &lt;em&gt;module&lt;/em&gt; is a container for multiple resources that work together. &lt;em&gt;Modules&lt;/em&gt; can be used to create lightweight abstractions to describe infrastructure in terms of its architecture, rather than directly in terms of physical objects.&lt;/p&gt;

&lt;p&gt;Uff!! After a heavy dose of concepts, let's get right into the implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation #
&lt;/h2&gt;

&lt;p&gt;The VPC declaration in Terraform is relatively short since we are leveraging the &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-vpc"&gt;VPC Module&lt;/a&gt; maintained by the Terraform community, which comes packed with sane abstractions and useful defaults.&lt;/p&gt;

&lt;p&gt;To create the VPC, we need to obtain the existing Availability Zones in the current region. We can either manually specify the AZ names, or instead leverage the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones"&gt;aws_availability_zones data source&lt;/a&gt;. This allows Terraform to dynamically obtain the list of AZs from the region configured in the provider.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_availability_zones" "available" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, we proceed to configure our new VPC as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc" { source = "terraform-aws-modules/vpc/aws" version = "~&amp;gt; 2.0" name = "example" cidr = "10.0.0.0/16" azs = data.aws_availability_zones.available.names private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"] public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"] enable_nat_gateway = true single_nat_gateway = true enable_dns_hostnames = true tags = { Terraform = "true" Environment = "dev" }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's discuss some of the fields in the module declaration above:&lt;/p&gt;

&lt;h4&gt;
  
  
  cidr field #
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Represents the range of IPv4 addresses that will be available for this VPC. It should generally be a private range (10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, or 192.168.0.0 - 192.168.255.255).&lt;/li&gt;
&lt;li&gt;By assigning &lt;code&gt;10.0.0.0/16&lt;/code&gt; to it, the VPC will have access to 65536 (2^16) different IP addresses.&lt;/li&gt;
&lt;/ul&gt;
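&lt;p&gt;The address math above can be checked with Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module (shown here purely as an illustration):&lt;/p&gt;

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)  # 65536, i.e. 2 ** (32 - 16)
print(vpc.is_private)     # True: 10.0.0.0/8 is one of the private ranges
```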

&lt;h4&gt;
  
  
  Subnets fields #
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Both &lt;code&gt;private_subnets&lt;/code&gt; and &lt;code&gt;public_subnets&lt;/code&gt; specify 3 different subnets to be created, each with 256 (2^8) IP addresses allocated to it.&lt;/li&gt;
&lt;li&gt;For both sets of subnets, the module will make sure that the appropriate route tables and NAT gateways are created and attached.&lt;/li&gt;
&lt;li&gt;For the public subnets, the module will wire them up with the Internet Gateway associated with the VPC.&lt;/li&gt;
&lt;/ul&gt;
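&lt;p&gt;Again with the standard &lt;code&gt;ipaddress&lt;/code&gt; module, we can verify that each /24 subnet carves 256 addresses out of the VPC range (an illustration only, unrelated to how the Terraform module validates its inputs):&lt;/p&gt;

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
cidrs = ("10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24")

for cidr in cidrs:
    subnet = ipaddress.ip_network(cidr)
    assert subnet.num_addresses == 256  # 2 ** (32 - 24)
    assert subnet.subnet_of(vpc)        # each subnet fits inside the VPC CIDR
print("all subnets valid")
```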

&lt;h4&gt;
  
  
  NAT gateways fields #
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;enable_nat_gateway&lt;/code&gt; informs the module that NAT Gateways should be provisioned for the private networks&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;single_nat_gateway&lt;/code&gt; informs the module that all private subnets will share a single NAT Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  enable_dns_hostnames field #
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Combined with &lt;code&gt;enable_dns_support&lt;/code&gt; (which is enabled by default), this informs the module to configure DNS resolution for the public IP addresses associated with instances in the VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Resources #
&lt;/h2&gt;

&lt;p&gt;Below is a condensed list of all the resources mentioned throughout the post, as well as a few others that may be of interest if you want to deepen your knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS VPC:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html"&gt;AWS VPC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html"&gt;Route tables&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html"&gt;Subnets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html"&gt;DNS in VPC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html"&gt;NAT in VPC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html"&gt;NAT Gateways&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html"&gt;Internet Gateways&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS AZ:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/"&gt;AZ and region&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Terraform:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/terraform-aws-modules/terraform-aws-vpc"&gt;VPC Module&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones"&gt;AZ Data Source&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion #
&lt;/h2&gt;

&lt;p&gt;Let's sum up what we discussed in this post. First, we looked at some of the basic concepts around AWS VPC and AZ. Next, we dove into how to declare the VPC by leveraging the VPC module from Terraform and analyzed some of the fields that were specified as part of the module declaration. Finally, we listed the online resources used to create the post in case you want to go deeper into the topic.&lt;/p&gt;

&lt;p&gt;Thank you so much for reading this post. Hope you enjoyed reading it as much as I did writing it. See you soon and stay tuned for more!!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Terraforming DNS with AWS Route53</title>
      <dc:creator>Rene Hernandez</dc:creator>
      <pubDate>Fri, 09 Oct 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/renehernandez/terraforming-dns-with-aws-route53-2eea</link>
      <guid>https://dev.to/renehernandez/terraforming-dns-with-aws-route53-2eea</guid>
      <description>&lt;p&gt;This post is part of a series about Terraform and AWS. You can check the other ones at:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://bitsofknowledge.net/2020/09/30/first-steps-with-terraform-in-aws/"&gt;First steps with Terraform in AWS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Terraforming DNS with AWS Route53 (this post)&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html"&gt;AWS Route53&lt;/a&gt; is a DNS service used to perform three main functions: domain registration, DNS routing, and health checking. In this post, we are going to cover how to automate the configuration of AWS Route53 as your DNS service using &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;. If you are not familiar with Terraform, you can check my introductory post &lt;a href="https://bitsofknowledge.net/2020/09/30/first-steps-with-terraform-in-aws/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to go straight to the code, you can check it out at &lt;a href="https://github.com/renehernandez/aws-terraform-examples/tree/master/02-terraforming-dns-with-aws-route53"&gt;aws-terraform-examples/02-terraforming-dns-with-aws-route53&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Already back? Great! Let's dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic concepts #
&lt;/h2&gt;

&lt;p&gt;Before getting into the implementation, let's briefly mention some of the essential concepts around DNS and AWS Route53.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name servers: Servers in the DNS that help translate domain names (&lt;a href="http://www.example.com/"&gt;www.example.com&lt;/a&gt;) into IP addresses. They can be either &lt;a href="https://en.wikipedia.org/wiki/Domain_Name_System#DNS_resolvers"&gt;DNS resolvers&lt;/a&gt; or &lt;a href="https://en.wikipedia.org/wiki/Domain_Name_System#Authoritative_name_server"&gt;authoritative name servers&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Hosted zones: A container for DNS records, including information about how to route traffic for a domain (&lt;a href="http://example.com/"&gt;example.com&lt;/a&gt;) as well as its subdomains (&lt;a href="http://sub.example.com/"&gt;sub.example.com&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;DNS record: A particular entry in the hosted zone that specifies the traffic routing for the domain or subdomain.&lt;/li&gt;
&lt;li&gt;Time To Live (TTL): The amount of time, in seconds, that the DNS resolver should cache the values for a record before submitting another request to the authoritative name servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was just a general overview of the concepts that we are going to be leveraging for our infrastructure configuration. Let's move now to configure our DNS in AWS using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation #
&lt;/h2&gt;

&lt;p&gt;To fully work with the code examples in this section, it is recommended that you use a domain that you own since you would need to configure the AWS Route53 nameservers on your domain registrar settings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring zone and nameservers #
&lt;/h3&gt;

&lt;p&gt;The first step to configure the DNS service for your domain is to create the public hosted zone, which you can declare in Terraform as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_zone"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example.com"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As part of the creation of Route53 zones, the name server (NS) record and the start of authority (SOA) record are automatically created by AWS. By using the &lt;code&gt;allow_overwrite&lt;/code&gt; option below, Terraform can manage them in a single execution without the need for a subsequent &lt;code&gt;terraform import&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_record"&lt;/span&gt; &lt;span class="s2"&gt;"nameservers"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;allow_overwrite&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example.com"&lt;/span&gt;
  &lt;span class="nx"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"NS"&lt;/span&gt;
  &lt;span class="nx"&gt;zone_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;zone_id&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name_servers&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After this, if you are using a domain registrar other than Route53, you will need to add the name servers associated with your zone in Route53 to the configuration settings on your domain registrar website.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring email #
&lt;/h3&gt;

&lt;p&gt;Let's go through a possible email configuration as an example. We are going to be using &lt;a href="https://protonmail.com/"&gt;ProtonMail&lt;/a&gt; as our email server of choice.&lt;/p&gt;

&lt;h4&gt;
  
  
  Verification #
&lt;/h4&gt;

&lt;p&gt;First, we need to set up email verification. This is required by the email service to confirm that you own the domain. In the following example, we create a &lt;a href="https://en.wikipedia.org/wiki/TXT_record"&gt;TXT record&lt;/a&gt; to hold the verification value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_record"&lt;/span&gt; &lt;span class="s2"&gt;"protonmail_txt"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;zone_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;zone_id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TXT"&lt;/span&gt;
  &lt;span class="nx"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"protonmail-verification=&amp;lt;random_number&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;TXT&lt;/em&gt; record gets associated with the top-level domain in the zone by pointing to our previously created AWS Route53 zone using the &lt;code&gt;zone_id&lt;/code&gt; attribute and setting the &lt;code&gt;name&lt;/code&gt; attribute to empty. This effectively makes the &lt;em&gt;TXT&lt;/em&gt; record refer to &lt;code&gt;example.com&lt;/code&gt; in this scenario.&lt;/p&gt;

&lt;h4&gt;
  
  
  MX records #
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/MX_record"&gt;MX record&lt;/a&gt; specifies the mail server responsible to receive your domain's email.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_record"&lt;/span&gt; &lt;span class="s2"&gt;"protonmail_mx"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;zone_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;zone_id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MX"&lt;/span&gt;
  &lt;span class="nx"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"10 mail.protonmail.ch."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"20 mailsec.protonmail.ch."&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the &lt;em&gt;MX&lt;/em&gt; record configuration above, we are associating two ProtonMail servers for email delivery. A lower number means a higher priority, so emails will be sent first to the &lt;code&gt;mail.protonmail.ch&lt;/code&gt; server, with &lt;code&gt;mailsec.protonmail.ch&lt;/code&gt; as a fallback in case of failure. Similar to the &lt;em&gt;TXT&lt;/em&gt; record discussed before, the &lt;em&gt;MX&lt;/em&gt; record is associated with the top-level &lt;code&gt;example.com&lt;/code&gt; domain.&lt;/p&gt;
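&lt;p&gt;The priority semantics can be illustrated with a few lines of Python (a toy model of how a sending server orders MX candidates, not actual mail-server code):&lt;/p&gt;

```python
mx_records = ["10 mail.protonmail.ch.", "20 mailsec.protonmail.ch."]

def order_mx(records):
    """Sort MX values by their numeric priority; the lowest number is tried first."""
    parsed = []
    for entry in records:
        priority, host = entry.split(" ", 1)
        parsed.append((int(priority), host))
    return [host for _, host in sorted(parsed)]

print(order_mx(mx_records))  # ['mail.protonmail.ch.', 'mailsec.protonmail.ch.']
```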

&lt;h4&gt;
  
  
  Sender Policy Framework (SPF) #
&lt;/h4&gt;

&lt;p&gt;SPF is an authentication mechanism to validate that an email coming from a particular domain is being sent from an authorized IP address. For SPF, we need to use a &lt;em&gt;TXT&lt;/em&gt; record entry specifying that the ProtonMail servers are authorized to send the emails. Since we already have a record resource representing a &lt;em&gt;TXT&lt;/em&gt; record, we will reuse it and add a new entry to its &lt;code&gt;records&lt;/code&gt; attribute as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_record"&lt;/span&gt; &lt;span class="s2"&gt;"protonmail_txt"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;zone_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;zone_id&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TXT"&lt;/span&gt;
  &lt;span class="nx"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"protonmail-verification=&amp;lt;random_number&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"v=spf1 include:_spf.protonmail.ch mx ~all"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
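&lt;p&gt;For illustration, the SPF string above can be broken down into its version tag and mechanisms (a simplified tokenizer, not a full SPF evaluator):&lt;/p&gt;

```python
spf = "v=spf1 include:_spf.protonmail.ch mx ~all"

tokens = spf.split()
version, mechanisms = tokens[0], tokens[1:]

assert version == "v=spf1"                         # required SPF version tag
assert "include:_spf.protonmail.ch" in mechanisms  # authorize ProtonMail's sending servers
assert "mx" in mechanisms                          # also authorize the domain's own MX hosts
assert "~all" in mechanisms                        # softfail everything else
print(mechanisms)
```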



&lt;h4&gt;
  
  
  DomainKeys Identified Mail (DKIM) #
&lt;/h4&gt;

&lt;p&gt;DKIM is another authentication technique that leverages cryptography to verify that email is sent by trusted servers. To manage the ProtonMail keys, we need to configure &lt;a href="https://en.wikipedia.org/wiki/CNAME_record"&gt;CNAME records&lt;/a&gt; to hold the public encryption keys. These keys will be used by the receiving servers to validate the emails, making sure that they haven't been tampered with. Below, we define three new &lt;em&gt;CNAME&lt;/em&gt; records in Terraform, one for each public encryption key provided by the email service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_record"&lt;/span&gt; &lt;span class="s2"&gt;"protonmail_dkim_1"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;zone_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;zone_id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"protonmail._domainkey"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"CNAME"&lt;/span&gt;
  &lt;span class="nx"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"domain_key1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_record"&lt;/span&gt; &lt;span class="s2"&gt;"protonmail_dkim_2"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;zone_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;zone_id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"protonmail2._domainkey"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"CNAME"&lt;/span&gt;
  &lt;span class="nx"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;domain_key2&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53_record"&lt;/span&gt; &lt;span class="s2"&gt;"protonmail_dkim_3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;zone_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53_zone&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;zone_id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"protonmail3._domainkey"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"CNAME"&lt;/span&gt;
  &lt;span class="nx"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;
  &lt;span class="nx"&gt;records&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"domain_key3"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Gotcha #
&lt;/h3&gt;

&lt;p&gt;Many online guides use &lt;code&gt;@&lt;/code&gt; as a hostname to refer to the top-level domain (in this post, &lt;code&gt;example.com&lt;/code&gt;). In AWS Route53, that doesn't work; instead, it is necessary to set the &lt;code&gt;name&lt;/code&gt; attribute to an empty value to point to the root domain of the zone where the record is being added.&lt;/p&gt;
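&lt;p&gt;For example, an &lt;em&gt;A&lt;/em&gt; record pointing at the root domain would leave the &lt;code&gt;name&lt;/code&gt; attribute empty. This is only a sketch; the zone reference and the IP address are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "aws_route53_record" "root" {
  zone_id = aws_route53_zone.example.zone_id
  name    = ""
  type    = "A"
  ttl     = 1800
  records = ["&amp;lt;server_ip&amp;gt;"]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;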

&lt;h2&gt;
  
  
  Resources #
&lt;/h2&gt;

&lt;p&gt;Below is a condensed list of all the resources mentioned throughout the post, as well as a few others that may help deepen your knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DNS:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Domain_Name_System"&gt;DNS Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/MX_record"&gt;MX record&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/CNAME_record"&gt;CNAME record&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/route-53-concepts.html"&gt;Amazon Route53 DNS concepts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Route53:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html"&gt;Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/route-53-concepts.html"&gt;Concepts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Terraform:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_zone"&gt;AWS Route53 record&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record"&gt;AWS Route53 record&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Email:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Sender_Policy_Framework"&gt;Sender Policy Framework (SPF)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail"&gt;DomainKeys Identified Mail (DKIM)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://protonmail.com/support/knowledge-base/anti-spoofing/"&gt;ProtonMail Antispoofing&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusions #
&lt;/h2&gt;

&lt;p&gt;Let's briefly recap what we discussed in this post. First, we looked at some of the basic concepts around AWS Route53 and DNS. Next, we looked at how to attach a domain to AWS Route53 by using public hosted zone records and the nameservers. Afterward, we went through the steps of configuring an email service on our domain using different DNS records such as &lt;em&gt;MX&lt;/em&gt;, &lt;em&gt;TXT&lt;/em&gt; and &lt;em&gt;CNAME&lt;/em&gt;. Finally, I analyzed a common gotcha regarding the usage of &lt;code&gt;@&lt;/code&gt; as the hostname and listed common resources mentioned throughout the post.&lt;/p&gt;

&lt;p&gt;Thank you so much for reading this post. I hope you enjoyed reading it as much as I did writing it. See you soon and stay tuned for more!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>dns</category>
      <category>route53</category>
    </item>
    <item>
      <title>First steps with Terraform in AWS</title>
      <dc:creator>Rene Hernandez</dc:creator>
      <pubDate>Wed, 30 Sep 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/renehernandez/first-steps-with-terraform-in-aws-2gdg</link>
      <guid>https://dev.to/renehernandez/first-steps-with-terraform-in-aws-2gdg</guid>
      <description>&lt;p&gt;&lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; is a cloud-agnostic provisioning tool created by &lt;a href="https://www.hashicorp.com/"&gt;Hashicorp&lt;/a&gt;. It allows you manage your infrastructure in sane, safe and efficient manner by automating the proviisioning of your cloud resources (server, databases, DNS) in a declarative way, as well as leverage version control systems to keep track of the history of changes.&lt;/p&gt;

&lt;p&gt;In this post, we are going to go over how to set up Terraform to work with AWS. If you want to go straight to the code, you can check it out at &lt;a href="https://github.com/renehernandez/aws-terraform-examples/tree/master/setup"&gt;the setup example&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing terraform #
&lt;/h2&gt;

&lt;p&gt;There are different ways to install terraform depending on your operating system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chocolatey on Windows #
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;choco &lt;span class="nb"&gt;install &lt;/span&gt;terraform
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Homebrew on OS X #
&lt;/h3&gt;

&lt;p&gt;Using the new hashicorp tap (recommended)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;hashicorp/tap/terraform
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Using the community tap&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;terraform
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Linux #
&lt;/h3&gt;

&lt;p&gt;Installing on Linux depends on the distribution you are running. For Ubuntu/Debian:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add the HashiCorp gpg key
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://apt.releases.hashicorp.com/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Add the HashiCorp repository
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-add-repository &lt;span class="s2"&gt;"deb [arch=amd64] https://apt.releases.hashicorp.com &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Update and install
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;terraform
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For more information and different ways to install, check the &lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli"&gt;installation pages&lt;/a&gt;&lt;/p&gt;
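&lt;p&gt;Regardless of the installation method, you can verify that the CLI is working by printing its version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform version
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;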

&lt;h2&gt;
  
  
  Connecting AWS with terraform #
&lt;/h2&gt;

&lt;p&gt;Now that we have the CLI installed, let's get started connecting AWS with Terraform to manage the infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites #
&lt;/h3&gt;

&lt;p&gt;To follow along with the rest of the post, you'll need to complete the following steps or use an already configured AWS account.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;An &lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc"&gt;AWS account&lt;/a&gt; (a dedicated new one to execute terraform preferably, although it is OK to use your own AWS credentials for testing purposes)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html"&gt;AWS CLI&lt;/a&gt; installed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure your local &lt;code&gt;aws cli&lt;/code&gt; with a dedicated &lt;code&gt;terraform&lt;/code&gt; profile&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure &lt;span class="nt"&gt;--profile&lt;/span&gt; terraform
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Using the above command, enter your AWS Access Key ID and Secret Access Key at the prompts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Defining the AWS provider #
&lt;/h3&gt;

&lt;p&gt;According to the &lt;a href="https://www.terraform.io/docs/providers/index.html"&gt;Terraform documentation&lt;/a&gt;, a provider is essentially a plugin that offers a set of abstractions for certain API resources and their interactions. Usually, each provider focuses on a specific infrastructure platform.&lt;/p&gt;

&lt;p&gt;Terraform needs to know which provider to download from the &lt;a href="https://registry.terraform.io/"&gt;Terraform Registry&lt;/a&gt;. For that, we can use the &lt;code&gt;terraform&lt;/code&gt; block to list all the providers that the code will use. For our scenario, we can list the &lt;code&gt;aws&lt;/code&gt; provider as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 3.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;required_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;= 0.13"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have listed the &lt;code&gt;aws&lt;/code&gt; provider, let's add its configuration data. Below, we are using the &lt;em&gt;us-east-2&lt;/em&gt; region and loading the credentials to connect to &lt;code&gt;AWS&lt;/code&gt; from the terraform profile we created in the Prerequisites section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-2"&lt;/span&gt; 
  &lt;span class="nx"&gt;profile&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For alternative means of authentication, dive into the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs"&gt;docs&lt;/a&gt;, and always make sure you don't store your &lt;code&gt;AWS&lt;/code&gt; credentials in plaintext in your terraform files.&lt;/p&gt;

&lt;p&gt;At this point, you are ready to use Terraform to automate your infrastructure. In the next section, we'll dive into storing the backend information remotely to facilitate team workflows.&lt;/p&gt;
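&lt;p&gt;As a quick sketch, the day-to-day workflow boils down to three commands, run from the directory containing your &lt;code&gt;.tf&lt;/code&gt; files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init    &lt;span class="c"&gt;# downloads the providers listed in the terraform block&lt;/span&gt;
terraform plan    &lt;span class="c"&gt;# previews the changes without applying them&lt;/span&gt;
terraform apply   &lt;span class="c"&gt;# asks for confirmation, then creates/updates the resources&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;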

&lt;h2&gt;
  
  
  S3 for remote backend #
&lt;/h2&gt;

&lt;p&gt;First, I'll briefly mention some of the advantages of using a remote backend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It improves working in a team, since the state can be protected with locks to avoid concurrent modifications and possible corruption.&lt;/li&gt;
&lt;li&gt;It helps keep sensitive data off your disk. Terraform stores the state in plaintext, and when using a remote backend the state is only stored in the backend location (e.g. an S3 bucket).&lt;/li&gt;
&lt;li&gt;If the backend supports remote operations, &lt;code&gt;terraform apply&lt;/code&gt; can be executed on the backend instead of your local machine, for an overall improved experience.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before going through the code, let's briefly look at what &lt;a href="https://www.terraform.io/docs/state/index.html"&gt;Terraform state&lt;/a&gt; entails.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform state #
&lt;/h3&gt;

&lt;p&gt;Terraform stores the state of your infrastructure for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mapping to the Real World: Terraform maps each configuration element to a corresponding resource in the real world, and it leverages the state to verify that no two configuration elements represent the same endpoint&lt;/li&gt;
&lt;li&gt;Metadata: It includes tracking knowledge such as resource dependencies or workflows that are necessary for resources to work as expected.&lt;/li&gt;
&lt;li&gt;Performance: Optionally, the state can be treated as a source of truth, so that API requests to the providers are made only when specified. This helps improve performance for large infrastructures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This state is stored in a file, usually called &lt;code&gt;terraform.tfstate&lt;/code&gt;, using a JSON format. Let's move on to the implementation.&lt;/p&gt;
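&lt;p&gt;An abbreviated example of what such a state file looks like, with no resources created yet (the exact fields vary with the Terraform version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "version": 4,
  "terraform_version": "0.13.0",
  "serial": 1,
  "lineage": "...",
  "outputs": {},
  "resources": []
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;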

&lt;h3&gt;
  
  
  Implementation #
&lt;/h3&gt;

&lt;p&gt;To set up the remote backend, we need the following resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An S3 bucket where the Terraform state will be stored&lt;/li&gt;
&lt;li&gt;A DynamoDB table to lock the access to the state file&lt;/li&gt;
&lt;li&gt;IAM policies to grant the user access to the S3 bucket and the DynamoDB table (required if you are using an IAM user with credentials more restrictive than &lt;code&gt;AdministratorAccess&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  S3 bucket #
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_state_storage"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-remote-state-storage-s3"&lt;/span&gt;
  &lt;span class="nx"&gt;acl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"private"&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Terraform Storage"&lt;/span&gt;
    &lt;span class="nx"&gt;dedicated&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"infra"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;versioning&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;lifecycle&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;prevent_destroy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The S3 bucket, as shown above, is created with the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A private Access Control List (ACL) grant to limit access to the bucket.&lt;/li&gt;
&lt;li&gt;Versioning enabled to allow for state recovery in case of accidental deletion or file corruption&lt;/li&gt;
&lt;li&gt;Prevention of accidental destruction of the S3 bucket while running terraform operations&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  DynamoDB table #
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# create a dynamodb table for locking the state file.&lt;/span&gt;
&lt;span class="c1"&gt;# this is important when sharing the same state file across users&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_dynamodb_table"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_state_lock"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-lock"&lt;/span&gt;
  &lt;span class="nx"&gt;hash_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"LockID"&lt;/span&gt;
  &lt;span class="nx"&gt;read_capacity&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
  &lt;span class="nx"&gt;write_capacity&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
  &lt;span class="nx"&gt;attribute&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"LockID"&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"DynamoDB Terraform State Lock Table"&lt;/span&gt;
    &lt;span class="nx"&gt;dedicated&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"infra"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;lifecycle&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;prevent_destroy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The DynamoDB table gets configured with the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;LockID&lt;/strong&gt; hash key of type string, so that all lock items created by terraform operations are stored together in the same table&lt;/li&gt;
&lt;li&gt;The read and write capacity for the table. This specifies how many read/write operations per second we are allowed to execute against the table&lt;/li&gt;
&lt;li&gt;Similar to the S3 bucket above, the resource is created with prevention of accidental destruction while running terraform operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  IAM policies #
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_policy_document"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_storage_state_access"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;statement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;actions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"s3:ListBucket"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_state_storage&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; 
  &lt;span class="nx"&gt;statement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;actions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"s3:PutObject"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"${aws_s3_bucket.terraform_state_storage.arn}/terraform.tfstate"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Creates the IAM policy to allow access to the bucket&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_policy"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_storage_state_access"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform_storage_state_access"&lt;/span&gt;
  &lt;span class="nx"&gt;policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_iam_policy_document&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_storage_state_access&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Assigns the policy to the terraform user&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_user_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"terraform_storage_state_attachment"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform"&lt;/span&gt; &lt;span class="nx"&gt;policy_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_policy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_storage_state_access&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above IAM policy describes the permission for a user to access the S3 bucket:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ability to list the bucket&lt;/li&gt;
&lt;li&gt;Get objects from the bucket&lt;/li&gt;
&lt;li&gt;Add new objects to it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As part of the setup, the policy is attached to the terraform user, so it can have access to the S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_policy_document"&lt;/span&gt; &lt;span class="s2"&gt;"dynamodb_access"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;statement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;actions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dynamodb:GetItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"dynamodb:PutItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"dynamodb:DeleteItem"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; 
    &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:dynamodb:*:*:table/terraform-state-lock"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Creates the IAM policy to allow access to the dynamoDB&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_policy"&lt;/span&gt; &lt;span class="s2"&gt;"dynamodb_access"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dynamodb_access"&lt;/span&gt;
  &lt;span class="nx"&gt;policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_iam_policy_document&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dynamodb_access&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Assigns the policy to the terraform user&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_user_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"dynamodb_attachment"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform_user&lt;/span&gt; &lt;span class="nx"&gt;policy_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_policy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dynamodb_access&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Similar to the previous IAM policy, this one describes the permissions granted to the user when accessing the DynamoDB table:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get items from the table&lt;/li&gt;
&lt;li&gt;Add items to the table&lt;/li&gt;
&lt;li&gt;Remove items from the table&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The policy is then attached to the terraform user, so it can have access to the DynamoDB table.&lt;/p&gt;

&lt;h4&gt;
  
  
  Remote backend #
&lt;/h4&gt;

&lt;p&gt;Finally, to tell terraform to use the S3 bucket as the remote backend, add the following block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-storage"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform.tfstate"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-2"&lt;/span&gt;
    &lt;span class="nx"&gt;dynamodb_table&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-state-lock"&lt;/span&gt;
    &lt;span class="nx"&gt;profile&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
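&lt;p&gt;Keep in mind that the &lt;code&gt;backend&lt;/code&gt; block cannot reference variables, and after adding it you need to run &lt;code&gt;terraform init&lt;/code&gt; again so that Terraform can migrate the existing local state to the S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;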



&lt;h2&gt;
  
  
  Resources #
&lt;/h2&gt;

&lt;p&gt;Below is a condensed list of all the resources mentioned throughout the post, as well as a few others that may help deepen your knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Providers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/providers/index.html"&gt;Providers Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/configuration/provider-requirements.html"&gt;Provider Requirements&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/"&gt;Terraform Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs"&gt;AWS Provider Docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Terraform remote storage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/state/index.html"&gt;Terraform State&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/backends/types/s3.html"&gt;S3 backend&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/state/remote.html"&gt;Remote State&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@jessgreb01/how-to-terraform-locking-state-in-s3-2dc9a5665cb6"&gt;https://medium.com/@jessgreb01/how-to-terraform-locking-state-in-s3-2dc9a5665cb6&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;S3:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket"&gt;Terraform S3 resource&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/gsg/GetStartedWithS3.html"&gt;S3 AWS docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DynamoDB:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dynamodb_table"&gt;Terraform DynamoDB resource&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html"&gt;DynamoDB AWS docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IAM:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy"&gt;Terraform IAM policy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment"&gt;Terraform IAM policy attachment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document"&gt;Terraform IAM policy document&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html"&gt;IAM AWS docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusions #
&lt;/h2&gt;

&lt;p&gt;To sum up, this post provided an intro to Terraform in AWS. It covered how to install Terraform and configure it to manage AWS resources. We then went through the steps of configuring S3 and DynamoDB to manage the Terraform state remotely, with locking included, which is a desirable setup when working on a team. If you want to go straight to the code, check it out at &lt;a href="https://github.com/renehernandez/aws-terraform-examples/tree/master/setup"&gt;renehernandez/aws-terraform-examples&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you so much for reading this post. I hope you enjoyed reading it as much as I enjoyed writing it. See you soon and stay tuned for more!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>infrastructure</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Automate changelog and releases creation in GitHub</title>
      <dc:creator>Rene Hernandez</dc:creator>
      <pubDate>Wed, 23 Sep 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/renehernandez/automate-changelog-and-releases-creation-in-github-3ajm</link>
      <guid>https://dev.to/renehernandez/automate-changelog-and-releases-creation-in-github-3ajm</guid>
      <description>&lt;p&gt;Keeping up to date the Changelog and generating GitHub releases is one of those tasks I always think it is important to do, but I feel it becomes a chore the more you have to do it manually on a given project. In one of my recent OSS projects, &lt;a href="https://github.com/renehernandez/camper"&gt;camper&lt;/a&gt;, I decided from the beginning that I didn't want to be manually generating the Changelog and the GitHub releases information.&lt;/p&gt;

&lt;p&gt;After researching different options, I landed on &lt;a href="https://github.com/github-changelog-generator/github-changelog-generator"&gt;github-changelog-generator&lt;/a&gt;. It is a neat project: a Ruby gem that automatically generates a changelog based on &lt;strong&gt;tags&lt;/strong&gt;, &lt;strong&gt;issues&lt;/strong&gt; and &lt;strong&gt;merged pull requests&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Implementation&lt;/h2&gt;

&lt;p&gt;Since the project is hosted on GitHub, I went with &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; for the implementation as part of the CI process. It was an opportunity to put Actions into practice and get familiar with it. This post is not an introduction to GitHub Actions; check the &lt;a href="https://docs.github.com/en/actions"&gt;Actions Docs&lt;/a&gt; instead to get started and dive deep into the subject.&lt;/p&gt;

&lt;p&gt;Already back!! Great, let's go into the details.&lt;/p&gt;

&lt;p&gt;First let's discuss the main requirements that I had in mind:&lt;/p&gt;

&lt;p&gt;When committing to &lt;code&gt;main&lt;/code&gt; branch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A new updated Changelog should be generated and committed to &lt;code&gt;main&lt;/code&gt;. This would account for merged PRs, as well as any direct commit to &lt;code&gt;main&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;All latest changes that are not already part of a tagged release should be grouped under an &lt;strong&gt;Unreleased&lt;/strong&gt; section at the top of the Changelog.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When pushing a new tag:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the Changelog moving all the unreleased changes under the new tag.&lt;/li&gt;
&lt;li&gt;Create a new &lt;a href="https://docs.github.com/en/github/administering-a-repository/about-releases"&gt;GitHub release&lt;/a&gt; containing all the information associated with the latest Changelog tagged entry.&lt;/li&gt;
&lt;/ul&gt;
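&lt;p&gt;With these requirements in place, cutting a release boils down to pushing a tag. Here is a quick sketch in a throwaway local repo (the version number is made up; in the real project you would push the tag to GitHub):&lt;/p&gt;

```shell
# Create a tag in a scratch repository to illustrate the release flow.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"
git -C "$repo" tag v0.2.0
git -C "$repo" tag -l    # prints: v0.2.0
# In the real project: git push origin v0.2.0 (the push triggers the release pipeline)
```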

&lt;h3&gt;CI - Changelog workflow&lt;/h3&gt;

&lt;p&gt;As I explained in the previous section, the Changelog update process consists of two parts: one when merging to &lt;code&gt;main&lt;/code&gt; and the other when pushing a new tag. The &lt;em&gt;CI - Changelog&lt;/em&gt; workflow, shown below, fulfills the requirement of updating the &lt;strong&gt;Changelog&lt;/strong&gt; on every push to the &lt;code&gt;main&lt;/code&gt; branch. You can find the most up-to-date version &lt;a href="https://github.com/renehernandez/camper/blob/main/.github/workflows/ci_changelog.yml"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI - Changelog

on:
  push:
    branches: [main]

jobs:
  changelog_prerelease:
    name: Update Changelog For Prerelease
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: main
      - name: Update Changelog
        uses: heinrichreimer/github-changelog-generator-action@v2.1.1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          issues: true
          issuesWoLabels: true
          pullRequests: true
          prWoLabels: true
          unreleased: true
          addSections: '{"documentation":{"prefix":" **Documentation:**","labels":["documentation"]}}'
      - uses: stefanzweifel/git-auto-commit-action@v4
        with:
          commit_message: Update Changelog for PR
          file_pattern: CHANGELOG.md

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It works as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It checks out the code using the &lt;a href="https://github.com/actions/checkout"&gt;actions/checkout&lt;/a&gt; v2 action&lt;/li&gt;
&lt;li&gt;It proceeds to generate an update for the Changelog by using the &lt;a href="https://github.com/heinrichreimer/action-github-changelog-generator"&gt;heinrichreimer/github-changelog-generator-action&lt;/a&gt; action with the following customizations:

&lt;ul&gt;
&lt;li&gt;All closed issues should be part of the Changelog, including those without labels (&lt;code&gt;issues: true&lt;/code&gt; and &lt;code&gt;issuesWoLabels: true&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;All pull requests should be part of the Changelog, including those without labels (&lt;code&gt;pullRequests: true&lt;/code&gt; and &lt;code&gt;prWoLabels: true&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;It should group all latest changes under an &lt;strong&gt;Unreleased&lt;/strong&gt; section (&lt;code&gt;unreleased: true&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;It adds a new &lt;em&gt;Documentation&lt;/em&gt; section to group issues and pull requests with the &lt;code&gt;documentation&lt;/code&gt; label&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Then it commits the modified Changelog file back to main using the &lt;a href="https://github.com/stefanzweifel/git-auto-commit-action"&gt;stefanzweifel/git-auto-commit-action&lt;/a&gt; action&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Release workflow&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;Release&lt;/em&gt; workflow is a more complex pipeline since it not only updates the &lt;em&gt;Changelog&lt;/em&gt;, but also handles the publishing of a new gem version as well as a new GitHub release associated with the tag being pushed. For this post, we are only focusing on the Changelog and GitHub release related jobs. If you are interested, check the full workflow &lt;a href="https://github.com/renehernandez/camper/blob/main/.github/workflows/release.yml"&gt;here&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Release

on:
  push:
    tags:
      - v*

jobs:
# Other jobs
# ...
  changelog:
    name: Update Changelog
    runs-on: ubuntu-latest
    steps:
      - name: Get version from tag
        env:
          GITHUB_TAG: ${{ github.ref }}
        run: |
          export CURRENT_VERSION=${GITHUB_TAG/refs\/tags\/v/}
          echo "::set-env name=CURRENT_VERSION::$CURRENT_VERSION"
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: main
      - name: Update Changelog
        uses: heinrichreimer/github-changelog-generator-action@v2.1.1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          issues: true
          issuesWoLabels: true
          pullRequests: true
          prWoLabels: true
          addSections: '{"documentation":{"prefix":" **Documentation:**","labels":["documentation"]}}'
      - uses: stefanzweifel/git-auto-commit-action@v4
        with:
          commit_message: Update Changelog for tag ${{ env.CURRENT_VERSION }}
          file_pattern: CHANGELOG.md

  release_notes:
    name: Create Release Notes
    runs-on: ubuntu-latest
    needs: changelog
    steps:
      - name: Get version from tag
        env:
          GITHUB_TAG: ${{ github.ref }}
        run: |
          export CURRENT_VERSION=${GITHUB_TAG/refs\/tags\/v/}
          echo "::set-env name=CURRENT_VERSION::$CURRENT_VERSION"

      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: main

      - name: Get Changelog Entry
        id: changelog_reader
        uses: mindsers/changelog-reader-action@v1
        with:
          version: ${{ env.CURRENT_VERSION }}
          path: ./CHANGELOG.md

      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
        with:
          tag_name: ${{ github.ref }}
          release_name: Release ${{ github.ref }}
          body: ${{ steps.changelog_reader.outputs.log_entry }}
          draft: false
          prerelease: false

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The first job, &lt;strong&gt;Update Changelog&lt;/strong&gt;, is almost the same as the one described in the previous section. The difference is that we are generating &lt;strong&gt;released&lt;/strong&gt; versions only, and thus there is no &lt;code&gt;unreleased: true&lt;/code&gt; entry.&lt;/p&gt;

&lt;p&gt;The second job, &lt;strong&gt;Create Release Notes&lt;/strong&gt;, works as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It relies on the updated Changelog, hence the &lt;code&gt;needs: changelog&lt;/code&gt; entry, which forces it to wait for the previous &lt;code&gt;changelog&lt;/code&gt; job to complete.&lt;/li&gt;
&lt;li&gt;Using the &lt;a href="https://github.com/mindsers/changelog-reader-action"&gt;mindsers/changelog-reader-action&lt;/a&gt;, it proceeds to select the changelog entry associated with the tag being pushed.&lt;/li&gt;
&lt;li&gt;Using the &lt;a href="https://github.com/actions/create-release"&gt;actions/create-release&lt;/a&gt;, it generates the GitHub Release using the content of the changelog entry extracted on the previous step&lt;/li&gt;
&lt;/ol&gt;
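&lt;p&gt;The &lt;em&gt;Get version from tag&lt;/em&gt; step relies on bash parameter substitution to strip the &lt;code&gt;refs/tags/v&lt;/code&gt; prefix from the pushed ref. In isolation, the substitution behaves like this:&lt;/p&gt;

```shell
# Strip the "refs/tags/v" prefix to obtain the bare version number.
GITHUB_TAG="refs/tags/v1.2.3"
CURRENT_VERSION=${GITHUB_TAG/refs\/tags\/v/}
echo "$CURRENT_VERSION"    # prints: 1.2.3
```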

&lt;h3&gt;GitHub action gotchas&lt;/h3&gt;

&lt;p&gt;The changelog generator action has some gotchas that are not easy to spot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The majority of the options specified in the &lt;em&gt;Update Changelog&lt;/em&gt; step, such as &lt;code&gt;issues: true&lt;/code&gt; and &lt;code&gt;pullRequests: true&lt;/code&gt;, default to &lt;code&gt;true&lt;/code&gt; in the underlying &lt;code&gt;github-changelog-generator&lt;/code&gt; gem, but are required as part of the action; otherwise they get set to &lt;code&gt;false&lt;/code&gt;. That tripped me up for a while, until I read the action's implementation, specifically the &lt;a href="https://github.com/heinrichreimer/action-github-changelog-generator/blob/master/entrypoint.sh#L39"&gt;entrypoint.sh&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Adding a new section using the &lt;code&gt;addSections&lt;/code&gt; field fails if you specify a prefix with multiple words (e.g., &lt;em&gt;Documentation updates&lt;/em&gt;, as the &lt;a href="https://github.com/github-changelog-generator/github-changelog-generator/wiki/Advanced-change-log-generation-examples"&gt;changelog generator wiki&lt;/a&gt; suggests). The issue is word splitting in &lt;code&gt;entrypoint.sh&lt;/code&gt;, as discussed in &lt;a href="https://github.com/heinrichreimer/action-github-changelog-generator/issues/3"&gt;issue #3&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
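&lt;p&gt;The word-splitting failure is easy to reproduce in plain bash: an unquoted variable expansion is split on whitespace, so a multi-word prefix reaches the underlying command as several arguments. A minimal illustration:&lt;/p&gt;

```shell
# Count how many arguments a value arrives as, with and without quoting.
count_args() { echo $#; }
PREFIX="Documentation updates"
count_args $PREFIX      # prints: 2 (unquoted expansion splits on the space)
count_args "$PREFIX"    # prints: 1 (quoting preserves the single argument)
```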

&lt;h3&gt;Changelog generator limitation&lt;/h3&gt;

&lt;p&gt;While iterating on the output produced by the &lt;code&gt;changelog-generator&lt;/code&gt; gem, I realized that I was getting duplicate entries for PRs that are linked to issues (i.e. PRs that close issues when merged). I dug into the documentation trying to find a way to show just the issue or the PR, to no avail. Then I posted an &lt;a href="https://github.com/github-changelog-generator/github-changelog-generator/issues/890"&gt;issue&lt;/a&gt; on the GitHub repo and confirmed my suspicion that there is currently no way to do this, due to limitations of the GitHub REST API.&lt;/p&gt;

&lt;h2&gt;Conclusions&lt;/h2&gt;

&lt;p&gt;In this post, we discussed how to automate the creation of the Changelog and GitHub Releases. We went over each of the workflows and described the steps of every job involved. We also mentioned some limitations and gotchas of the GitHub Actions and the &lt;code&gt;github-changelog-generator&lt;/code&gt; gem.&lt;/p&gt;

&lt;p&gt;To conclude, thank you so much for reading this post. I hope you enjoyed reading it as much as I enjoyed writing it. See you soon and stay tuned for more!&lt;/p&gt;

</description>
      <category>changelog</category>
      <category>github</category>
      <category>releases</category>
      <category>actions</category>
    </item>
    <item>
      <title>Faster deployments of mysql databases in k8s</title>
      <dc:creator>Rene Hernandez</dc:creator>
      <pubDate>Tue, 11 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/renehernandez/faster-deployments-of-mysql-databases-in-k8s-204k</link>
      <guid>https://dev.to/renehernandez/faster-deployments-of-mysql-databases-in-k8s-204k</guid>
      <description>&lt;p&gt;Since I started at my new job, I have been inmersed in learning about a whole lot of new things, including Kubernetes and cloud. My first task, very challenging at the time, was to optimize certain part of our CI pipeline.&lt;/p&gt;

&lt;p&gt;What exactly? Read below to find out.&lt;/p&gt;

&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;As part of our CI pipeline, we provide our team with the ability to generate &lt;em&gt;review&lt;/em&gt; environments with a one-click deployment step. These review environments give developers a production-like environment to test their changes to the main application. The main difference from production is that all the dependent systems are configured as containers running in the cluster and hold comparatively smaller datasets.&lt;/p&gt;

&lt;p&gt;The problem we were facing at the time was that review environments took a long time to start: almost 10 minutes from when the developer started the deployment until the application was running and ready to use. The &lt;strong&gt;culprit&lt;/strong&gt;? The &lt;em&gt;mysql&lt;/em&gt; container, which spent the bulk of that time loading a database dump on startup, leaving the application hanging until the database was ready to use.&lt;/p&gt;

&lt;p&gt;With this problem at hand, we looked at several options to improve the performance and finally settled on trying to eliminate the dump loading step as a runtime stage during the deployment.&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;As mentioned above, we decided that the best approach would be to move the dump expansion out of the database initialization stage and instead, process the dump during the image build creation. That way, we would pay the price (in time spent expanding the dump and loading into the database container) just once.&lt;/p&gt;

&lt;p&gt;With this idea in mind, there was one requirement that we needed to comply with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updates of review environments reuse the same database pod to preserve custom data that developers may have created during their testing efforts. That meant we should only copy the expanded database on the first startup.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Custom mysql container with loaded data&lt;/h3&gt;

&lt;p&gt;Our first attempt was to produce a custom mysql image with the dump already loaded. After several iterations on how to achieve this, we finally settled on the multi-stage Dockerfile shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mysql:5.7 as builder

RUN apt-get update &amp;amp;&amp;amp; apt-get upgrade -y

# That file does the DB initialization but also runs the mysql daemon; removing the last line makes it perform only the initialization
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]

ARG dump
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD

ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD MYSQL_DATABASE=$MYSQL_DATABASE MYSQL_USER=$MYSQL_USER MYSQL_PASSWORD=$MYSQL_PASSWORD

COPY ${dump} /docker-entrypoint-initdb.d

# Need to change the datadir to something other than /var/lib/mysql, because the parent Dockerfile defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]

FROM mysql:5.7

COPY --from=builder ./initialized-db /var/lib/mysql/

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As we can see, the first stage loads the database dump into mysql. Since we modify the entrypoint script beforehand, it won't start the mysql process, just the initialization process of expanding the dump. The second stage then generates the final image by copying the generated database files from the first stage into the &lt;code&gt;/var/lib/mysql&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;For my initial testing of the resulting image, I used a &lt;em&gt;docker-compose&lt;/em&gt; deployment to simulate the kubernetes environment, and the results were promising. The startup time for the mysql container had been reduced significantly and the application was able to connect to the backend successfully. Then I tried deploying this custom mysql image with the &lt;a href="https://github.com/helm/charts/tree/master/stable/mysql"&gt;mysql chart&lt;/a&gt; in kubernetes, and it failed to start the corresponding mysql pod.&lt;/p&gt;

&lt;p&gt;The reason for this failure is the difference in how &lt;em&gt;docker&lt;/em&gt; and &lt;em&gt;kubernetes&lt;/em&gt; treat volumes, which I wasn't aware of beforehand. In Docker, if the volume is empty, the data in the container (&lt;code&gt;/var/lib/mysql&lt;/code&gt; in this scenario) is copied to the corresponding volume. Kubernetes, on the other hand, overrides whatever is in the container at the path where the &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/"&gt;Persistent Volume&lt;/a&gt; is mounted, so I couldn't get the mysql container up and running populated with the expanded data.&lt;/p&gt;

&lt;p&gt;After several iterations and analysis, I landed on a feasible solution using init containers, which I dive into below.&lt;/p&gt;

&lt;h3&gt;Init container to load data&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/"&gt;Init containers&lt;/a&gt; run before the main application containers start in a pod and they are usually used to perform one-off tasks required before the main application boots. Below, is the Dockerfile definition for the image that will run as init container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mysql:5.7 as builder

RUN apt-get update &amp;amp;&amp;amp; apt-get upgrade -y

# That file does the DB initialization but also runs the mysql daemon; removing the last line makes it perform only the initialization
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]

ARG dump
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD

ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD MYSQL_DATABASE=$MYSQL_DATABASE MYSQL_USER=$MYSQL_USER MYSQL_PASSWORD=$MYSQL_PASSWORD

COPY ${dump} /docker-entrypoint-initdb.d

# Need to change the datadir to something other than /var/lib/mysql, because the parent Dockerfile defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]

FROM alpine:3.7

RUN apk add --no-cache bash

COPY --from=builder ./initialized-db /mysql_data

WORKDIR /script

COPY scripts/copy_data_to_volume.sh ./

CMD ["bash", "copy_data_to_volume.sh"]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The main differences between this Dockerfile and the previous one are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instead of creating a mysql image, it creates an &lt;em&gt;alpine&lt;/em&gt; image running a bash script on startup&lt;/li&gt;
&lt;li&gt;It copies the mysql files generated by expanding the dump into a &lt;code&gt;/mysql_data&lt;/code&gt; folder, instead of the mysql data location used in the mysql container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;em&gt;copy_data_to_volume.sh&lt;/em&gt; script, as shown below, takes care of copying the data from the &lt;code&gt;/mysql_data&lt;/code&gt; folder to a destination folder called &lt;code&gt;/initialized-db&lt;/code&gt;, as long as the &lt;code&gt;.data-initialized&lt;/code&gt; flag file is not present. This &lt;code&gt;/initialized-db&lt;/code&gt; folder should be mounted as the &lt;code&gt;data&lt;/code&gt; volume in the pod where the init container is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [[-f "/initialized-db/.data-initialized"]]; then
  echo "DATABASE already initialized. Nothing else left to do"
  exit 0
fi

echo "Copying seeded database (one-time operation)"
cp -a /mysql_data/. /initialized-db

echo "Create file to mark first-time initialization completed"
touch /initialized-db/.data-initialized

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
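&lt;p&gt;The flag file is what makes the copy idempotent across pod restarts: any run after the first becomes a no-op. The guard can be simulated with throwaway directories (the file names here are just stand-ins):&lt;/p&gt;

```shell
# Simulate the one-time seed copy guarded by a flag file.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/ibdata1"    # stand-in for the expanded database files
seed_once() {
  if [[ -f "$dst/.data-initialized" ]]; then
    echo "already initialized"
    return 0
  fi
  cp -a "$src/." "$dst"
  touch "$dst/.data-initialized"
  echo "copied seed"
}
seed_once    # prints: copied seed
seed_once    # prints: already initialized
```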



&lt;p&gt;The mysql chart supports specifying init containers through the &lt;code&gt;extraInitContainers&lt;/code&gt; field, as shown in the following code section. As I mentioned above, we mount the &lt;code&gt;data&lt;/code&gt; volume at the &lt;code&gt;/initialized-db&lt;/code&gt; folder within the init container, and this is what makes the process work. That same &lt;code&gt;data&lt;/code&gt; volume will be mounted by the main &lt;code&gt;mysql&lt;/code&gt; container, which will then already have the data present on startup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql:
  image: mysql
  imageTag: 5.7
  extraInitContainers: &amp;gt;-
    - name: seed-database
      image: 
      volumeMounts:
        - name: data
          mountPath: "/initialized-db"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;Results&lt;/h3&gt;

&lt;p&gt;After deploying the new version of the helm chart supporting this feature, we saw the startup time drop from around 10 minutes to less than 3 minutes, which means we gave developers back 7 minutes of productivity :).&lt;/p&gt;

&lt;h2&gt;Conclusions&lt;/h2&gt;

&lt;p&gt;To sum up, we discussed how to improve the startup performance of the mysql chart deployment by off-loading the process of expanding the sql dumps into the database to a dedicated data image. We discussed two solution attempts. The first was deploying a mysql container with the dump already expanded; that solution wasn't feasible in a kubernetes environment, so we moved to the second attempt, which reused the same idea of expanding the dump at build time but deployed it as part of an init container running in the mysql helm chart.&lt;/p&gt;

&lt;p&gt;To conclude, thank you so much for reading this post. I hope you enjoyed reading it as much as I enjoyed writing it. See you soon and stay tuned for more!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Docs Experience - deploying Gitlab Wikis as mkdocs sites</title>
      <dc:creator>Rene Hernandez</dc:creator>
      <pubDate>Mon, 18 May 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/renehernandez/docs-experience-deploying-gitlab-wikis-as-mkdocs-sites-k1b</link>
      <guid>https://dev.to/renehernandez/docs-experience-deploying-gitlab-wikis-as-mkdocs-sites-k1b</guid>
      <description>&lt;p&gt;The past week, I have improved our documentation experience at work. The issues we have tackled recently ranged from implementing a search endpoint to scrape documentation from multiple different endpoints such as GitLab Wikis, mkdocs websites, among others. Being capable of processing documentation from different applications fits our goal of making easy for developers to write their docs, and we do this by meeting them where they write documention, instead of forcing them to follow specific patterns and guidelines on how to create documentation.&lt;/p&gt;

&lt;p&gt;The first documentation source that I processed was our &lt;a href="https://docs.gitlab.com/ee/user/project/wiki/"&gt;GitLab Wikis&lt;/a&gt;. When analyzing the layout of the wikis, I realized that it was going to take us more time than expected to properly limit the scraping to the Wiki content and avoid processing other types of content on the GitLab website (e.g., code, issues, MRs, etc.).&lt;/p&gt;

&lt;p&gt;After exploring several alternatives, I settled on generating &lt;a href="https://github.com/mkdocs/mkdocs"&gt;mkdocs&lt;/a&gt; websites from Wiki content, mainly for the following reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Wiki's content is written in Markdown, which makes mkdocs a very good match to generate websites with.&lt;/li&gt;
&lt;li&gt;We were already deploying documentation written in Markdown as mkdocs sites for several internal projects.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Wiki Releaser&lt;/h2&gt;

&lt;p&gt;To automate transforming a GitLab Wiki into a mkdocs website, I created a new GitLab project which hosts the logic to go from cloning the wiki repo to triggering a deployment of the generated mkdocs docker image in our kubernetes infrastructure.&lt;/p&gt;

&lt;p&gt;The repository contains only 3 files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;mkdocs.yaml&lt;/code&gt;: Configuration file used by mkdocs to produce the website.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Dockerfile&lt;/code&gt;: Specification to generate the resulting docker image with the built mkdocs website&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.gitlab-ci.yaml&lt;/code&gt;: Pipeline configuration used by GitLab to build and deploy changes to the Wikis as websites&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;mkdocs.yml config&lt;/h3&gt;

&lt;p&gt;Our mkdocs websites are built using the &lt;strong&gt;readthedocs&lt;/strong&gt; theme, with several css customizations that are included through the &lt;code&gt;override.css&lt;/code&gt; file. As part of the image build process (explained later), the &lt;code&gt;&amp;lt;SITE_NAME&amp;gt;&lt;/code&gt; placeholder is replaced by the actual name of the site to be deployed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;site_name: &amp;lt;SITE_NAME&amp;gt;
extra_css:
  - "override.css"
theme:
  name: readthedocs
  collapse_navigation: false
  hljs_languages:
    - yaml

markdown_extensions:
  - admonition
  - fenced_code
  - tables
  - toc:
      permalink: true

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;Dockerfile&lt;/h3&gt;

&lt;p&gt;The Dockerfile below is built as part of the CI process (explained in the next section), using the invocation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build --build-arg WIKI_FOLDER="$PROJECT_NAME.wiki" --build-arg SITE_NAME="$SITE_NAME" -t &amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/$PROJECT_PATH:$TAG_REF .

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;which passes down the cloned wiki folder as the &lt;code&gt;WIKI_FOLDER&lt;/code&gt; argument and the &lt;code&gt;SITE_NAME&lt;/code&gt; variable as arguments to the image build process.&lt;/p&gt;

&lt;p&gt;Important points in the Dockerfile definition:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As mentioned in the mkdocs section above, one step is to replace the &lt;code&gt;&amp;lt;SITE_NAME&amp;gt;&lt;/code&gt; placeholder with the &lt;code&gt;SITE_NAME&lt;/code&gt; argument in the &lt;code&gt;mkdocs.yml&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;mkdocs expects an &lt;code&gt;index.md&lt;/code&gt; file, which will be the root page of the website. GitLab Wikis have a &lt;code&gt;Home.md&lt;/code&gt; (or &lt;code&gt;home.md&lt;/code&gt;) page as the root instead, so the Dockerfile renames it during the build process.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.7.2 as build

RUN pip install mkdocs==1.1.1 &amp;amp;&amp;amp; mkdir /site

WORKDIR /site

ARG WIKI_FOLDER

ARG SITE_NAME

COPY $WIKI_FOLDER /site/docs

COPY override.css /site/docs

COPY mkdocs.yml mkdocs.yml

# Replace &amp;lt;SITE_NAME&amp;gt; holder with SITE_NAME argument value
RUN sed -i "s/&amp;lt;SITE_NAME&amp;gt;/${SITE_NAME}/g" mkdocs.yml

RUN if [ -f ./docs/Home.md ]; then mv ./docs/Home.md ./docs/index.md; else mv ./docs/home.md ./docs/index.md; fi

RUN mkdocs build

FROM nginx:1.17.2-alpine

COPY --from=build /site/site /usr/share/nginx/html

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
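&lt;p&gt;The placeholder substitution above is a single &lt;code&gt;sed&lt;/code&gt; call, and the same replacement can be exercised outside the Dockerfile. In this sketch, &lt;code&gt;__SITE_NAME__&lt;/code&gt; stands in for the angle-bracketed placeholder and the site name is made up:&lt;/p&gt;

```shell
# Same substitution the Dockerfile performs at build time, shown on stdin.
SITE_NAME="Team Wiki"
echo 'site_name: __SITE_NAME__' | sed "s/__SITE_NAME__/${SITE_NAME}/g"
# prints: site_name: Team Wiki
```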



&lt;h3&gt;GitLab CI&lt;/h3&gt;

&lt;p&gt;As shown in the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; pipeline configuration below, our CI/CD process for the &lt;strong&gt;Wiki Releaser&lt;/strong&gt; project has the following requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is executed only when a trigger is received&lt;/li&gt;
&lt;li&gt;The trigger needs to provide the &lt;code&gt;PROJECT_PATH&lt;/code&gt;, &lt;code&gt;PROJECT_NAME&lt;/code&gt; and &lt;code&gt;SITE_NAME&lt;/code&gt; variables, otherwise the build job fails:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PROJECT_PATH&lt;/code&gt;: Refers to the GitLab path of the project associated with the wiki in the format of &lt;code&gt;&amp;lt;group&amp;gt;/&amp;lt;project_name&amp;gt;&lt;/code&gt; (e.g &lt;code&gt;devs/hello_world&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PROJECT_NAME&lt;/code&gt;: Refers to the project name of the GitLab project.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SITE_NAME&lt;/code&gt;: Refers to the name that will be shown on the mkdocs site.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;It will perform a &lt;em&gt;shallow clone&lt;/em&gt; of the wiki repo (latest commit only and no tags) for performance reasons&lt;/li&gt;
&lt;li&gt;It will then use the &lt;code&gt;CI_JOB_TOKEN&lt;/code&gt; variable to authenticate against the wiki repo for the clone. This &lt;a href="https://docs.gitlab.com/ee/user/project/new_ci_build_permissions_model.html#job-token"&gt;token&lt;/a&gt; &lt;em&gt;provides the user read access to all projects that would be normally accessible to the user creating that job&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;It will generate docker images containing the resulting mkdocs website with the following naming pattern:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/$PROJECT_PATH:$TAG_REF&lt;/code&gt; (&lt;code&gt;TAG_REF&lt;/code&gt; holds the commit SHA value)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/$PROJECT_PATH:latest&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;We use &lt;a href="https://github.com/roboll/helmfile"&gt;helmfile&lt;/a&gt; to handle charts deployment to kubernetes
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include:
  - project: 'cicd'
    file: 'ci/helm_pipeline.yaml'

variables: &amp;amp;variables
  HELMFILE: docs/helmfile.yaml
  DISABLE_STAGING: 'true'
  HELMFILE_ENV: wiki
  K8S_ENV: &amp;lt;kubernetes_env&amp;gt;
  OTHER_K8S_ENV: -e TAG_REF=latest -e PROJECT_NAME=$PROJECT_NAME -e PROJECT_PATH=$PROJECT_PATH # Variables we pass down to our helmfile deployment logic

.trigger: &amp;amp;trigger
  only:
    refs:
     - triggers
  after_script:
    - echo "Triggered from $PROJECT_PATH wiki using $PROJECT_NAME project and $SITE_NAME for site"

build:
  extends: .build
  &amp;lt;&amp;lt;: *trigger
  image: &amp;lt;internal_docker_repo&amp;gt;/devops/ci-images/docker-with-git:latest
  script:
    - if [ "$PROJECT_PATH" == "" ]; then exit 1; fi
    - if [ "$PROJECT_NAME" == "" ]; then exit 1; fi
    - if [ "$SITE_NAME" == "" ]; then exit 1; fi
    - git clone --depth=1 --no-tags https://gitlab-ci-token:${CI_JOB_TOKEN}@&amp;lt;gitlab_url&amp;gt;/$PROJECT_PATH.wiki.git
    - TAG_REF=$(git -C ./$PROJECT_NAME.wiki rev-parse HEAD)
    - docker build --build-arg WIKI_FOLDER="$PROJECT_NAME.wiki" --build-arg SITE_NAME="$SITE_NAME" -t &amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/$PROJECT_PATH:$TAG_REF .
    - docker push &amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/$PROJECT_PATH:$TAG_REF
    - docker tag &amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/$PROJECT_PATH:$TAG_REF &amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/$PROJECT_PATH:latest
    - docker push &amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/$PROJECT_PATH:latest

# Deployment to kubernetes
deploy:
  &amp;lt;&amp;lt;: *trigger
  environment:
    name: $PROJECT_PATH
    url: https://$PROJECT_NAME.docs.domain
  variables:
    &amp;lt;&amp;lt;: *variables

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
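&lt;p&gt;Once the &lt;code&gt;latest&lt;/code&gt; image is pushed, it can be smoke-tested locally before relying on the kubernetes deployment. This is only a sketch that assumes the resulting image serves the generated mkdocs site over HTTP on port 80, and uses &lt;code&gt;devs/hello_world&lt;/code&gt; as a hypothetical project path:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pull and run the freshly built site image, mapping it to localhost:8080
docker run --rm -p 8080:80 &amp;lt;internal_docker_repo&amp;gt;/devops/wiki-releaser/devs/hello_world:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;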



&lt;h3&gt;
  Triggers
&lt;/h3&gt;

&lt;p&gt;As I mentioned in the previous section, the GitLab pipeline is only executed when it is invoked by an incoming trigger from another repo that wants to build its wiki. For that to happen, we need to set up some configuration options in each of the repos.&lt;/p&gt;

&lt;h4&gt;
  Wiki Releaser
&lt;/h4&gt;

&lt;p&gt;We configure a &lt;a href="https://docs.gitlab.com/ee/ci/triggers/#adding-a-new-trigger"&gt;Pipeline Trigger&lt;/a&gt; and use the generated token to allow the dependent projects to invoke this pipeline via the trigger.&lt;/p&gt;
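&lt;p&gt;For reference, the trigger can also be exercised manually with &lt;code&gt;curl&lt;/code&gt; against GitLab's trigger endpoint. The snippet below is a sketch: &lt;code&gt;&amp;lt;project_id&amp;gt;&lt;/code&gt; stands for the numeric ID of the &lt;strong&gt;Wiki Releaser&lt;/strong&gt; project, &lt;code&gt;TOKEN&lt;/code&gt; for the generated trigger token, and the variable values are hypothetical:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Manually fire the Wiki Releaser pipeline with the variables it expects
curl -X POST \
  -F "token=TOKEN" \
  -F "ref=master" \
  -F "variables[PROJECT_PATH]=devs/hello_world" \
  -F "variables[PROJECT_NAME]=hello_world" \
  -F "variables[SITE_NAME]=Hello World" \
  "https://&amp;lt;gitlab_url&amp;gt;/api/v4/projects/&amp;lt;project_id&amp;gt;/trigger/pipeline"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;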

&lt;h4&gt;
  Dependent projects
&lt;/h4&gt;

&lt;p&gt;On projects that want to invoke &lt;strong&gt;Wiki Releaser&lt;/strong&gt; to generate mkdocs websites for their wikis, we configure a &lt;a href="https://docs.gitlab.com/ee/user/project/integrations/webhooks.html"&gt;webhook&lt;/a&gt;. This webhook only fires for &lt;code&gt;Wiki Page&lt;/code&gt; events, and its URL invokes the &lt;strong&gt;Wiki Releaser&lt;/strong&gt; pipeline, passing along the token specified in the trigger definition of the &lt;strong&gt;Wiki Releaser&lt;/strong&gt; project, plus all the required variables (i.e. &lt;code&gt;PROJECT_NAME&lt;/code&gt;, &lt;code&gt;PROJECT_PATH&lt;/code&gt; and &lt;code&gt;SITE_NAME&lt;/code&gt;).&lt;/p&gt;
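&lt;p&gt;One possible shape of such a webhook URL, based on GitLab's trigger endpoint with the token and variables encoded in the query string (an assumption for illustration; &lt;code&gt;&amp;lt;project_id&amp;gt;&lt;/code&gt; again being the ID of the &lt;strong&gt;Wiki Releaser&lt;/strong&gt; project and the variable values hypothetical):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://&amp;lt;gitlab_url&amp;gt;/api/v4/projects/&amp;lt;project_id&amp;gt;/ref/master/trigger/pipeline?token=TOKEN&amp;amp;variables[PROJECT_PATH]=devs/hello_world&amp;amp;variables[PROJECT_NAME]=hello_world&amp;amp;variables[SITE_NAME]=Hello%20World
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;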

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Let's recap the content of the post. First, I talked a bit about the importance of documentation and how in my company, we try to follow developer practices instead of enforcing new ones. Then, I dove into the main problem, which was how to analyze and extract data from GitLab Wikis, and why I settled on using &lt;strong&gt;mkdocs&lt;/strong&gt; to generate websites as a solution. Finally, I introduced the &lt;strong&gt;Wiki Releaser&lt;/strong&gt; project, what its different components are and their purposes, and how the triggers tie everything together.&lt;/p&gt;

&lt;p&gt;To conclude, thank you so much for reading this post. I hope you enjoyed reading it as much as I did writing it. See you soon and stay tuned for more!&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>gitlab</category>
      <category>wiki</category>
      <category>mkdocs</category>
    </item>
  </channel>
</rss>
