<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: tomwerneruk</title>
    <description>The latest articles on DEV Community by tomwerneruk (@tomwerneruk).</description>
    <link>https://dev.to/tomwerneruk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F182938%2Ffee1700a-d58e-4cf6-b012-ad798896d08d.jpeg</url>
      <title>DEV Community: tomwerneruk</title>
      <link>https://dev.to/tomwerneruk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tomwerneruk"/>
    <language>en</language>
    <item>
      <title>Awesome HTTP Load Balancing on Docker with Traefik</title>
      <dc:creator>tomwerneruk</dc:creator>
      <pubDate>Thu, 20 Jun 2019 14:00:00 +0000</pubDate>
      <link>https://dev.to/tomwerneruk/awesome-http-load-balancing-on-docker-with-traefik-4694</link>
      <guid>https://dev.to/tomwerneruk/awesome-http-load-balancing-on-docker-with-traefik-4694</guid>
      <description>&lt;h3&gt;
  
  
  Why Traefik?
&lt;/h3&gt;

&lt;p&gt;Traefik is the up-and-coming 'Edge Router / Proxy' for all things cloud. Full disclosure, I like it.&lt;/p&gt;

&lt;p&gt;The pared-back feature set, compared to products like F5 that I have used throughout my career, is refreshing - those products still have their place, and they can do some &lt;strong&gt;very&lt;/strong&gt; cool stuff.&lt;/p&gt;

&lt;p&gt;I'm a strong believer in avoiding technical debt when building out my infrastructure and applications. Traefik is a production-scale tool, while still being nimble enough to run in the smallest of deployments. There are very few reasons not to consider incorporating it into your stack now - especially if you are self-hosting and don't have something like AWS ELB available to you, or if Traefik has a killer feature you need!&lt;/p&gt;

&lt;h2&gt;
  
  
  This Tutorial
&lt;/h2&gt;

&lt;p&gt;Traefik supports multiple orchestrators (Docker, Kubernetes and ECS, to name a few); however, for this tutorial I am going to cover Docker Swarm. Check out &lt;em&gt;&lt;a href="https://fluffycloudsandlines.blog/using-traefik-on-docker-swarm/why-you-need-single-node-swarm" rel="noopener noreferrer"&gt;Why Single Node Swarm&lt;/a&gt;&lt;/em&gt; to see why I think you should be using Swarm over vanilla Docker, even locally.&lt;/p&gt;

&lt;p&gt;In case you didn't get the message above, I am going to cover a scale-ready configuration for Traefik here. That means HA clustered support: as you add nodes, your Traefik service will scale with you. Technical debt to a minimum!&lt;/p&gt;

&lt;p&gt;What do I want to achieve?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I run a VPS that hosts multiple websites, built on multiple technologies. Thankfully they are all &lt;em&gt;now&lt;/em&gt; containerised &lt;em&gt;(it wasn't pretty)&lt;/em&gt;.
&lt;/li&gt;
&lt;li&gt;I want TLS certificates by default on every URL.&lt;/li&gt;
&lt;li&gt;I want to future-proof this in case I need to scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traefik hits all of these points. Let's dive in.&lt;/p&gt;

&lt;p&gt;Our HA Docker Swarm Ready solution has a few moving parts;&lt;/p&gt;

&lt;h3&gt;
  
  
  Traefik Container
&lt;/h3&gt;

&lt;p&gt;This container will be scheduled as a &lt;code&gt;global&lt;/code&gt; service in our Swarm, meaning that one instance of Traefik will be deployed on every host in our Docker Swarm cluster. This will be our traffic workhorse, and in most scenarios should provide more than sufficient throughput. If a given Traefik instance is getting saturated, you are probably at the point where you should be scaling horizontally (more hosts, not more CPUs).&lt;/p&gt;

&lt;h3&gt;
  
  
  Consul Container
&lt;/h3&gt;

&lt;p&gt;Consul is used in this scenario as a configuration store. Traefik needs a repository for config data and certificates which is accessible from all nodes in the cluster. Consul has other features - it overlaps a lot with Swarm's service mesh - but these don't need to be configured for this use case. This will be deployed in an HA manner.&lt;/p&gt;

&lt;h3&gt;
  
  
  Traefik Init Container
&lt;/h3&gt;

&lt;p&gt;This is used to 'seed' our Traefik config into the Consul cluster when first starting up. Once it has done its job, it shuts down, and our scaled Traefik containers take over.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overall Traefik Cluster Design
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffluffycloudsandlines.blog%2Fcontent%2Fimages%2F2019%2F06%2Fdocker-swarm-traefik-ha--1-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffluffycloudsandlines.blog%2Fcontent%2Fimages%2F2019%2F06%2Fdocker-swarm-traefik-ha--1-.png"&gt;&lt;/a&gt;Overall Docker Swarm Traefik HA Cluster&lt;/p&gt;

&lt;p&gt;Our cluster is composed of three Docker Swarm nodes, the recommended minimum for the Raft consensus algorithm to provide resilience to node failure. There are no prerequisites for this deployment other than cluster communications working properly between nodes (as detailed in the Swarm setup guide - &lt;a href="https://docs.docker.com/engine/swarm/swarm-tutorial/" rel="noopener noreferrer"&gt;https://docs.docker.com/engine/swarm/swarm-tutorial/&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The Terraform and Compose files for this tutorial are located on my GitLab - &lt;a href="https://gitlab.com/fluffy-clouds-and-lines/traefik-on-docker-swarm.git" rel="noopener noreferrer"&gt;https://gitlab.com/fluffy-clouds-and-lines/traefik-on-docker-swarm.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The solution shall deploy 4 services;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Traefik node on each manager host,&lt;/li&gt;
&lt;li&gt;A Consul node on each manager host (maximum 3),&lt;/li&gt;
&lt;li&gt;'whoami' to facilitate testing,&lt;/li&gt;
&lt;li&gt;A standalone Traefik config generator.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Inside Traefik
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffluffycloudsandlines.blog%2Fcontent%2Fimages%2F2019%2F06%2Finside-traefik--1-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffluffycloudsandlines.blog%2Fcontent%2Fimages%2F2019%2F06%2Finside-traefik--1-.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Traefik has two main configuration areas: frontends and backends. Frontends respond to requests from the outside world; this is where, if used, TLS is applied. Backends group one or more containers to serve a frontend.&lt;/p&gt;

&lt;p&gt;In this deployment, frontends are created dynamically as a result of the docker labels declared on the whoami service (the backend group).&lt;/p&gt;

&lt;h3&gt;
  
  
  Networking
&lt;/h3&gt;

&lt;p&gt;Traefik and Consul shall expose their management UI ports in &lt;code&gt;host&lt;/code&gt; mode. This means they will be exposed on each host they are deployed on. Only hosts that have these services deployed shall have 8080 and 8500 available.&lt;/p&gt;

&lt;p&gt;In addition, Traefik shall expose the front-end ports of 80 and 443 on the ingress service mesh. This means that any Docker node will expose ports 80 and 443, and traffic will be routed internally by Docker to &lt;em&gt;a&lt;/em&gt; Traefik instance, but not necessarily the one running on that node. This is to facilitate failover in the case of Traefik container failure.&lt;/p&gt;
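&lt;p&gt;As a minimal sketch (not part of the full compose file below), the difference between the two publishing modes looks like this;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ports:
      # ingress mode (the default): published on every Swarm node and
      # load-balanced across all instances of the service
      - target: 80
        published: 80
      # host mode: bound only on the node the container runs on
      - target: 8080
        published: 8080
        mode: host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;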

&lt;h2&gt;
  
  
  Deploying the Traefik Cluster
&lt;/h2&gt;

&lt;p&gt;To deploy our stack we are going to use a Docker Compose file to define our services. Once deployed, the Traefik instances will poll Docker (specifically the Docker socket) for changes to deployed containers. Whenever a new container with the appropriate metadata is started, Traefik will begin routing traffic to it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment Setup
&lt;/h3&gt;

&lt;p&gt;A 3-node Docker Swarm deployment is required. If you don't have one, a Terraform manifest to deploy to AWS is available in this tutorial's repo.&lt;/p&gt;

&lt;p&gt;You will need a DNS record that points to your Swarm cluster for the test service. I have chosen &lt;code&gt;whoami2.docker.lab.infra.tomwerner.me.uk&lt;/code&gt;, but this will need changing to a domain name you own. For maximum availability this should resolve to all of your Swarm nodes (though in theory only one is required, since the Traefik frontend sits on the ingress mesh). I have achieved this by creating duplicate A records, one resolving to each node's IP.&lt;/p&gt;
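&lt;p&gt;For illustration (substitute your own domain; the IPs below are placeholders from the documentation range), the duplicate A records in BIND-style zone file syntax would look something like this;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;; one A record per Swarm node, all sharing the same name
whoami2.docker.lab.infra.tomwerner.me.uk.  300  IN  A  203.0.113.10
whoami2.docker.lab.infra.tomwerner.me.uk.  300  IN  A  203.0.113.11
whoami2.docker.lab.infra.tomwerner.me.uk.  300  IN  A  203.0.113.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;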

&lt;h3&gt;
  
  
  The Compose File
&lt;/h3&gt;

&lt;p&gt;The full compose file is available in my GitLab repo, but breaking down the deployment;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  consul:
    image: consul
    command: agent -server -bootstrap-expect=3 -ui -client 0.0.0.0 -retry-join consul
    volumes:
      - consul-data:/consul/data
    environment:
      - CONSUL_LOCAL_CONFIG={"datacenter":"eu_west2","server":true}
      - CONSUL_BIND_INTERFACE=eth0
      - CONSUL_CLIENT_INTERFACE=eth0
    ports:
      - target: 8500
        published: 8500
        mode: host
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    networks:
      - traefik
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our Consul deployment is fairly out of the box. It ensures that;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consul is started in server mode,&lt;/li&gt;
&lt;li&gt;3 servers are defined as the required number of nodes before a cluster election can be triggered,&lt;/li&gt;
&lt;li&gt;UI traffic can come from all sources,&lt;/li&gt;
&lt;li&gt;the remaining nodes can be contacted via the alias &lt;code&gt;consul&lt;/code&gt;. This is a bit of a 'trick' - normally this would be a list of IP addresses; however, thanks to Docker's internal round-robin DNS resolution, repeated attempts return all 3 nodes, allowing an election to be triggered.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  traefik_init:
    image: traefik:1.7
    command:
      - "storeconfig"
      - "--loglevel=debug"
      - "--api"
      - "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
      - "--entrypoints=Name:https Address::443 TLS TLS.SniStrict:true TLS.MinVersion:VersionTLS12"
      - "--defaultentrypoints=http,https"
      - "--acme"
      - "--acme.storage=traefik/acme/account"
      - "--acme.entryPoint=https"
      - "--acme.httpChallenge.entryPoint=http"
      - "--acme.onHostRule=true"
      - "--acme.onDemand=false"
      - "--acme.email=hello@tomwerner.me.uk"
      - "--docker"
      - "--docker.swarmMode"
      - "--docker.domain=docker.lab.infra.tomwerner.me.uk"
      - "--docker.watch"
      - "--consul"
      - "--consul.endpoint=consul:8500"
      - "--consul.prefix=traefik"
      - "--rest"
    networks:
      - traefik
    deploy:
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    depends_on:
      - consul
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is our 'bootstrap' config for Traefik. This service runs only once, at deployment. It defines the configuration for the deployment: enabling Let's Encrypt certificates, enabling the Docker provider and declaring Consul as our key-value store.&lt;br&gt;
&lt;/p&gt;
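&lt;p&gt;To sanity-check that the seeding has happened, you can list the keys stored under the &lt;code&gt;traefik&lt;/code&gt; prefix using Consul's KV HTTP API (this assumes port 8500 is reachable on a manager node);&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# replace MANAGER_IP with the address of one of your manager nodes
$ curl -s "http://$MANAGER_IP:8500/v1/kv/traefik?keys"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;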

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  traefik:
    image: traefik:1.7
    depends_on:
      - traefik_init
      - consul
    command:
      - "--consul"
      - "--consul.endpoint=consul:8500"
      - "--consul.prefix=traefik"
    networks:
      - traefik
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - target: 80
        published: 80
      - target: 443
        published: 443
      - target: 8080
        published: 8080
        mode: host 
    deploy:
      placement:
        constraints:
          - node.role == manager
      mode: global
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Traefik service is again fairly out of the box. All configuration is pulled from Consul (which is seeded by the traefik_init service). The container is currently deployed on all manager nodes; this constraint is in place because Traefik requires access to data on the manager node to operate. Consider the decoupling options at the end of this post.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  whoami0:
    image: containous/whoami
    networks: 
      - traefik
    deploy:
      replicas: 6
      labels:
        traefik.enable: "true"
        traefik.frontend.rule: 'Host: whoami2.docker.lab.infra.tomwerner.me.uk'
        traefik.port: 80
        traefik.docker.network: 'traefik_traefik' 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our last service - whoami. This allows us to test the deployment. The labels defined here are used by Traefik to configure itself. &lt;code&gt;Host: whoami2.docker.lab.infra.tomwerner.me.uk&lt;/code&gt; should be changed to match a valid DNS hostname for your environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networks:
  traefik:
    driver: overlay
    attachable : true

volumes:
  consul-data:
      driver: local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, our network and storage definitions. The Traefik network is an &lt;code&gt;overlay&lt;/code&gt; network to allow it to span the Swarm cluster. It is also attachable, to allow standalone containers to be attached for debugging (the default is &lt;code&gt;false&lt;/code&gt;).&lt;/p&gt;
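&lt;p&gt;Because the network is attachable, a throwaway container can be joined to it for debugging. A minimal sketch (the network and service names below assume the stack is deployed under the name &lt;code&gt;traefik&lt;/code&gt;, as later in this post);&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# start a shell on the overlay network
$ docker run --rm -it --network traefik_traefik alpine sh
# then, from inside the container, hit the whoami service by its DNS name
/ # wget -qO- http://traefik_whoami0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;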

&lt;p&gt;Our Consul data is persisted, as not all of it is easily recreated (e.g. our issued certificates).&lt;/p&gt;

&lt;h3&gt;
  
  
  Check our deployment
&lt;/h3&gt;

&lt;p&gt;Assuming you have a working cluster already, deploy the stack after checking out the repo;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ docker stack deploy -c traefik_noprism.yml traefik&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Verify the deployment;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker service ls
NAME                   MODE         REPLICAS   IMAGE
traefik_consul         replicated   3/3        consul:latest
traefik_traefik        global       3/3        traefik:1.7
traefik_traefik_init   replicated   0/1        traefik:1.7
traefik_whoami0        replicated   6/6        containous/whoami:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Consul cluster election can take a minute or so; monitor the Traefik and Consul logs to observe its progress. Navigating to the Traefik UI and seeing the consul and docker provider tabs available is a good indicator that the deployment has worked.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test the deployment
&lt;/h3&gt;

&lt;p&gt;Browse to any of your node's publicly accessible addresses to view the Traefik UI;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://&amp;lt;public ip&amp;gt;:8080/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffluffycloudsandlines.blog%2Fcontent%2Fimages%2F2019%2F06%2Fimage-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffluffycloudsandlines.blog%2Fcontent%2Fimages%2F2019%2F06%2Fimage-3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the Docker tab, the details of our test service should show.&lt;/p&gt;

&lt;p&gt;Browse to your service hostname (in my case whoami2.docker.lab.infra.tomwerner.me.uk). This should resolve to one of your Swarm nodes and hand it off to a Traefik node. Refresh a few times to see it hit your chosen node under the Health tab (or open up all management UIs for each node).&lt;/p&gt;
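&lt;p&gt;The same load balancing can be observed from the command line; repeated requests should return the &lt;code&gt;Hostname&lt;/code&gt; of different whoami replicas (substitute your own service hostname);&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# each request may land on a different whoami replica; the Hostname
# line in the response identifies which container answered
$ for i in 1 2 3 4 5; do curl -s https://whoami2.docker.lab.infra.tomwerner.me.uk/ | grep Hostname; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;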

&lt;p&gt;That's it for the tutorial!&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Steps
&lt;/h2&gt;

&lt;p&gt;All tutorials are a basis to start from. A few things to consider taking further;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applying ACLs to restrict UI access to Consul (source IP and authentication),&lt;/li&gt;
&lt;li&gt;Restricting access to the Traefik management UI, either by disabling it or firewall (i.e AWS VPC security group access from bastion only),&lt;/li&gt;
&lt;li&gt;Docker Socket security...&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're concerned about exposing a container bound to the Docker socket to the internet (a potential security issue), you may want to consider;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploying traefik-prism as per &lt;a href="https://dev.to/tomwerneruk/hardening-traefik-when-using-the-docker-provider-55f3"&gt;my previous post,&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Using TCP (rather than the mounted socket) to communicate with the Docker daemon,&lt;/li&gt;
&lt;li&gt;Consider Traefik Enterprise Edition if you are in a commercial environment (as this supports splitting configuration and routing roles).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As ever, comments and questions are welcomed below.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>cloud</category>
      <category>hashicorp</category>
      <category>docker</category>
    </item>
    <item>
      <title>Deploying HA WordPress on AWS using Terraform and Salt</title>
      <dc:creator>tomwerneruk</dc:creator>
      <pubDate>Tue, 21 May 2019 13:01:00 +0000</pubDate>
      <link>https://dev.to/tomwerneruk/deploying-ha-wordpress-on-aws-using-terraform-and-salt-4144</link>
      <guid>https://dev.to/tomwerneruk/deploying-ha-wordpress-on-aws-using-terraform-and-salt-4144</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3c3zlyH1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1515646557491-e2615816e7f8%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3c3zlyH1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.unsplash.com/photo-1515646557491-e2615816e7f8%3Fixlib%3Drb-1.2.1%26q%3D80%26fm%3Djpg%26crop%3Dentropy%26cs%3Dtinysrgb%26w%3D1080%26fit%3Dmax%26ixid%3DeyJhcHBfaWQiOjExNzczfQ" alt="Deploying HA WordPress on AWS using Terraform and Salt"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Love it or hate it, WordPress is here to stay. It is still the go-to for creating websites, thanks to the massive number of plugins, themes and experience floating around for it.&lt;/p&gt;

&lt;p&gt;While trying to learn anything, I have found scripted deployments a great way to learn a product from a different perspective, requiring deeper thought about all the moving parts. This is the first of several posts on automating AWS deployment. I have experience with Cloudformation, but in the quest to always widen my skillset, I have started to work with Terraform instead. I will be using Salt to automate the deployment of WordPress onto the AWS infrastructure.&lt;/p&gt;

&lt;p&gt;This article will get you up and running with a High Availability (HA) deployment of WordPress, ready for the next big thing™. It assumes some familiarity with AWS and WordPress.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design is all about trade-offs
&lt;/h2&gt;

&lt;p&gt;Designing IT architecture is definitely not black and white; it is a complicated hairball of assumptions, estimates, previous experience and constraints (knowledge, financial and environmental, to name a few). Beyond the simplest of design briefs, give two architects the same brief and you will get two different designs. All design is inherently opinionated.&lt;/p&gt;

&lt;p&gt;What does this have to do with our WordPress HA solution? It sounds simple on the surface, but is it?&lt;/p&gt;

&lt;p&gt;For example, what do you mean by HA? Stays up during maintenance? Resilient to AWS failure? Resilient to WordPress issues (a bad plugin, a malformed update)?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are your constraints?&lt;/strong&gt; How much is your monthly spend budget? What expertise do you have to keep this solution fed and watered? Realistically, how much downtime can you tolerate? (it might be more than you think).&lt;/p&gt;

&lt;p&gt;Out of the above, how much is certain? How much is an estimate (informed or a guess)? How much do you &lt;em&gt;think&lt;/em&gt; you need?&lt;/p&gt;

&lt;p&gt;When approaching this project, I tried to break down each functional component and evaluate the best AWS product for the task. I am trying to evaluate each option based on the &lt;a href="https://aws.amazon.com/blogs/apn/the-5-pillars-of-the-aws-well-architected-framework/"&gt;AWS Well Architected Framework Pillars&lt;/a&gt;, not just what 'sounds right'.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;System Component&lt;/th&gt;
&lt;th&gt;Low-Fi Solution&lt;/th&gt;
&lt;th&gt;Generally used Solution&lt;/th&gt;
&lt;th&gt;'Premium' / Exceptional Requirements Solution&lt;/th&gt;
&lt;th&gt;What would I choose?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Edge&lt;/td&gt;
&lt;td&gt;DNS Round Robin (Route53) + Health Monitor&lt;/td&gt;
&lt;td&gt;AWS ELB (Application Load Balancer)&lt;/td&gt;
&lt;td&gt;AWS ELB (Classic or ALB)&lt;/td&gt;
&lt;td&gt;For most sites, AWS ELB in Application Mode. It is cost-effective, scales well, requires minimal integration effort and is conceptually well understood. Classic may be required where throughput is required at all costs. Round robin would be required when non-TCP traffic needs to be balanced.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Compute&lt;/td&gt;
&lt;td&gt;EC2 Instances&lt;/td&gt;
&lt;td&gt;EC2 Instances in Auto Scaling Group&lt;/td&gt;
&lt;td&gt;EC2 Instances in Auto Scaling Group&lt;/td&gt;
&lt;td&gt;Using an Auto Scaling group is a no-brainer here. The extra learning curve is worth the operational convenience. It won't attract additional cost unless mis-configured.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database&lt;/td&gt;
&lt;td&gt;EC2 hosted MySQL Instance&lt;/td&gt;
&lt;td&gt;RDS Multi AZ deployment&lt;/td&gt;
&lt;td&gt;RDS Multi AZ + cross region read replica&lt;/td&gt;
&lt;td&gt;I would stump for a Multi AZ Deployment here. If you need to be tolerant of AWS Region failure, then you will need to consider cross-region read replicas (with a custom failover mechanism to promote a read replica and amending the WordPress configuration).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WordPress Root Storage&lt;/td&gt;
&lt;td&gt;N/A, bake into AMI&lt;/td&gt;
&lt;td&gt;Push / Pull from S3&lt;/td&gt;
&lt;td&gt;EFS&lt;/td&gt;
&lt;td&gt;EFS all the way. Higher cost but operationally slicker and less prone to errors. Syncing to S3 using cronjob or similar could have strange concurrency issues if filesystem updates occur on multiple WordPress hosts within a short time frame. Cost-efficient when considering operational advantage, especially when using the IA storage class.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Object Storage&lt;/td&gt;
&lt;td&gt;No object storage, just serve static resource from EC2 instances&lt;/td&gt;
&lt;td&gt;S3 + Cloudfront&lt;/td&gt;
&lt;td&gt;S3 + Cloudfront&lt;/td&gt;
&lt;td&gt;S3 all the way here to allow media to then be surfaced via Cloudfront Delivery Network.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Terraform Quickstart
&lt;/h2&gt;

&lt;p&gt;Terraform is delightfully simple to get started with. It is simple to deploy and use, and the syntax is clean.&lt;/p&gt;

&lt;p&gt;I'm not going to write a step-by-step how to get Terraform installed and running here, but head over to &lt;a href="https://www.youtube.com/channel/UCkn5kcB10r7wx1MkRuSixXQ"&gt;my channel for a tutorial for Windows and Linux&lt;/a&gt;. Instead, I want to cover the principles of the Terraform workflow and how to use modules. I also highly recommend &lt;a href="https://learn.hashicorp.com/terraform/"&gt;the tutorial track&lt;/a&gt; from HashiCorp (Terraform's creator).&lt;/p&gt;

&lt;p&gt;Terraform has a simple architecture. It comprises the &lt;code&gt;terraform&lt;/code&gt; tool, which, when run within a directory, either dry-runs (&lt;code&gt;plan&lt;/code&gt;), deploys (&lt;code&gt;apply&lt;/code&gt;) or removes (&lt;code&gt;destroy&lt;/code&gt;) the infrastructure defined in one or more modules. These are the three core commands required to manage this deployment. There are no special requirements for where the tool is installed or run from, except connectivity to your provider (in this case, internet access to AWS).&lt;/p&gt;
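&lt;p&gt;In practice, the core loop looks like this;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform init      # download providers and referenced modules
$ terraform plan      # dry-run: show what would change
$ terraform apply     # create or update the infrastructure
$ terraform destroy   # tear everything down again
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;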

&lt;p&gt;Apart from the automation benefits of using an Infrastructure as Code tool like Terraform, we should be moving away from reinventing the wheel and thinking more like a developer, using libraries wherever we can. In Terraform, infrastructure definitions can be wrapped up into a module to allow reuse elsewhere, just like a programming library. Terraform has access to a large repository of ready-made, battle-tested modules in the &lt;a href="https://registry.terraform.io/"&gt;Terraform Registry&lt;/a&gt;. This tutorial will make extensive use of the AWS modules available in the Registry - I have no problem admitting that the authors of these modules likely know AWS and Terraform better than me! This tutorial will use a simplified folder structure which I would adapt to facilitate real-world usage. There are various ways to achieve a structure ready for real-world usage; check out the recommended reading at the end for links.&lt;/p&gt;
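&lt;p&gt;As a flavour of what calling a Registry module looks like (the values here are illustrative, not the ones used in this tutorial), the community VPC module can replace pages of hand-written resources;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pull the battle-tested VPC module from the Terraform Registry
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "wordpressha"
  cidr = "10.0.0.0/16"

  azs            = ["eu-west-2a", "eu-west-2b", "eu-west-2c"]
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;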

&lt;p&gt;To this end, our structure is going to look like this;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── main.tf
├── outputs.tf
├── modules
│   ├── compute
│   │   ├── main.tf
│   │   ├── userdata.tmpl
│   │   └── variables.tf
│   ├── database
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── efs
│   │   ├── main.tf
│   │   ├── output.tf
│   │   └── variables.tf
│   ├── media
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── network
│   │   ├── main.tf
│   │   └── outputs.tf
│   └── seeder
│   ├── main.tf
│   └── variables.tf
├── packer
│   ├── aws_vars.json
│   └── template.json
├── salt_tree
│   └── srv
│   ├── pillar
│   └── salt
├── terraform.tfvars
└── wordpressha.pem
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This source tree contains;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt; contains the entrypoint from which all our modules are called.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;modules&lt;/code&gt; contains our sections of functionality as per our design analysis above.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;packer&lt;/code&gt; contains the template for our AMI base image. See &lt;a href="https://dev.to/tomwerneruk/creating-a-lamp-ami-using-packer-and-salt-507j-temp-slug-6995235"&gt;Creating a LAMP AMI using Packer and Salt&lt;/a&gt; on how to use this.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;salt_tree&lt;/code&gt; is used by both Packer and Terraform to configure our WordPress installation on our deployed EC2 instances. You could easily swap this out for a different tool, e.g. Chef or Puppet, and change the provisioner in the Terraform code accordingly.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform.tfvars&lt;/code&gt; contains our configuration values to stand up our solution. Empty fields will need completing before running Terraform.&lt;/p&gt;
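&lt;p&gt;A completed &lt;code&gt;terraform.tfvars&lt;/code&gt; might look something like the following - the variable names here are purely illustrative, so use the ones actually declared in the repo;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# hypothetical example values only - complete the real fields before running
region        = "eu-west-2"
site_domain   = "nextamazing.site"
key_pair_name = "wordpressha"
db_password   = "change-me"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;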

&lt;h2&gt;
  
  
  Deploying to AWS 🎉
&lt;/h2&gt;

&lt;p&gt;That's the theory over, if you've got this far, you're on the home stretch!&lt;/p&gt;

&lt;p&gt;Assuming you have installed Terraform and Packer correctly, checkout the code from my Git repository at &lt;a href="https://gitlab.com/fluffy-clouds-and-lines/ha-wordpress-using-terraform-and-salt.git"&gt;https://gitlab.com/fluffy-clouds-and-lines/ha-wordpress-using-terraform-and-salt.git&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before proceeding with the Terraform run, we need an SSH keypair (Terraform cannot currently create them). To create your keypair;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logon to your AWS Console,&lt;/li&gt;
&lt;li&gt;Change to your target region, and open EC2,&lt;/li&gt;
&lt;li&gt;Network &amp;amp; Security &amp;gt; Keypairs &amp;gt; Create Key Pair&lt;/li&gt;
&lt;li&gt;Name the Keypair 'wordpressha' and copy the downloaded wordpressha.pem to the directory where the Terraform code has been checked out into,&lt;/li&gt;
&lt;li&gt;On Linux, change permissions to 400 (read only by user).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, ensure that &lt;code&gt;terraform.tfvars&lt;/code&gt; is completed. Once done, execute;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; packer build -var-file=./packer/aws_vars.json ./packer/template.json
&amp;gt; terraform init
# Terraform modules for RDS and VPC don't resolve dependencies correctly, so explicitly build the VPC first
&amp;gt; terraform apply -target=module.network 
# Deploy seeder dependencies
&amp;gt; terraform apply -target=module.database -target=module.efs 
# Deploy seeder
&amp;gt; terraform apply -target=module.seeder
# Deploy all to make state consistent
&amp;gt; terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This should take around 15 minutes end to end. This will;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build our custom AMI image with all our LAMP (Apache, MySQL, PHP) dependencies baked in,&lt;/li&gt;
&lt;li&gt;Download the external Terraform modules,&lt;/li&gt;
&lt;li&gt;Build the AWS VPC,&lt;/li&gt;
&lt;li&gt;Deploy the S3 bucket and CloudFront distribution,&lt;/li&gt;
&lt;li&gt;Create the application load balancer and auto-scaling group,&lt;/li&gt;
&lt;li&gt;Deploy the RDS MySQL Database instance,&lt;/li&gt;
&lt;li&gt;Create the Elastic Filesystem,&lt;/li&gt;
&lt;li&gt;Deploy the 'WordPress seeder'. This mounts the EFS and installs WordPress so that nodes that are started as part of the auto-scaling group already have the WordPress installation available to them,&lt;/li&gt;
&lt;li&gt;Publish an A record to Route53, linked to the ALB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should then be able to browse to &lt;a href="http://nextamazing.site/"&gt;http://nextamazing.site/&lt;/a&gt; and see your completed installation.&lt;/p&gt;

&lt;p&gt;Don't like the use of &lt;code&gt;-target&lt;/code&gt;? Yes, it's bad;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This targeting capability is provided for exceptional circumstances, such as recovering from mistakes or working around Terraform limitations. It is &lt;em&gt;not recommended&lt;/em&gt; to use &lt;code&gt;-target&lt;/code&gt; for routine operations, since this can lead to undetected configuration drift and confusion about how the true state of resources relates to configuration.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;See below for suggestions on how to make this work in a real-life scenario. You really shouldn't take this approach in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Design Decisions
&lt;/h2&gt;

&lt;p&gt;There are a few more design decisions that need to be made before this could be considered 'production ready';&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The site should really be running on HTTPS, whether this is done with an AWS Managed Certificate (via ACM) or an externally signed CSR made available to the load balancer via ACM or IAM,&lt;/li&gt;
&lt;li&gt;Although the infrastructure is in place for asset delivery via CloudFront, it is not set up in WordPress as part of this Terraform run. There are several options, both free and paid, that will achieve this, e.g. plugins or custom cron jobs,&lt;/li&gt;
&lt;li&gt;How will you maintain backups? At present, the RDS snapshot defaults will be used. How will you backup your WordPress installation?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrapping up...
&lt;/h2&gt;

&lt;p&gt;That's it for now. This should have given a good introduction to using Terraform to deploy a full solution on AWS. Earlier I mentioned some simplifications made for the purposes of this blog article. A couple of things to consider;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This article was started before the release of Terraform 0.12; consider whether a new project should be started on 0.12, assuming dependencies are compatible,&lt;/li&gt;
&lt;li&gt;The major creative license I have taken here is to create one large module that needs separate components to be called in a specific order to achieve a specific end result. As some of the modules are dependent on each other (though there is no reason why you couldn't run &lt;code&gt;terraform apply&lt;/code&gt; twice and have a successful deployment), I would suggest either breaking this up into distinct modules, e.g. &lt;code&gt;base&lt;/code&gt;, &lt;code&gt;seeder&lt;/code&gt; and &lt;code&gt;wordpress&lt;/code&gt;, or using a tool like Terragrunt,&lt;/li&gt;
&lt;li&gt;One of the thought leaders in the IaC space, Gruntwork, has developed Terragrunt to improve your Terraform workflow to mitigate potential issues when running in production. One of the big advantages here is being able to compartmentalise Terraform state (the record of what Terraform has deployed) into smaller chunks, to reduce impact in cases of state corruption or loss (a definite possibility). This tool is worth considering in a large, multi-module deployment like this.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Recommended Reading
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Terraform Learning Track (HashiCorp)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learn.hashicorp.com/terraform/"&gt;https://learn.hashicorp.com/terraform/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terragrunt Documentation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/gruntwork-io/terragrunt"&gt;https://github.com/gruntwork-io/terragrunt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How things can go wrong with Terraform state&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fluffycloudsandlines.blog/deploying-ha-wordpress-on-aws-using-terraform-and-salt/charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/"&gt;https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Creating a LAMP AMI using Packer and Salt</title>
      <dc:creator>tomwerneruk</dc:creator>
      <pubDate>Thu, 02 May 2019 15:24:00 +0000</pubDate>
      <link>https://dev.to/tomwerneruk/creating-a-lamp-ami-using-packer-and-salt-mdb</link>
      <guid>https://dev.to/tomwerneruk/creating-a-lamp-ami-using-packer-and-salt-mdb</guid>
      <description>&lt;p&gt;This is the first in a series of posts to create an Infrastructure as Code powered deployment of WordPress running on Amazon Web Services.&lt;/p&gt;

&lt;p&gt;This one's going to be pretty short, partly down to the great tools on offer!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why create an image? Why Packer? Why Salt?
&lt;/h2&gt;

&lt;p&gt;Creating images with as much of your installation and configuration baked in as possible is vital in a DevOps environment, where predictability and agility are key. For example, if you have an auto-scaling group creating a pool of WordPress application servers, installing Apache, PHP and the MySQL client with a deployment script would mean a node takes too long to enter service. Preparing a customised AMI means the time to enter service is restricted only by the time it takes to start the EC2 instance.&lt;/p&gt;

&lt;p&gt;Packer is part of the wider Hashicorp toolset for controlling the cloud via IaC. Packer can create images for a wide range of cloud and on-premise platforms. Being part of the same family of tools, there is a degree of similarity in how they work and how your infrastructure code is written.&lt;/p&gt;

&lt;p&gt;Salt is one of several configuration management tools on the market. Having learnt Puppet, Ansible and Salt, I have no real allegiance to any one tool. Greenfield environments are pretty rare, so you may well be restricted by your current toolset. The concepts used here are portable to other configuration tools, of which Packer supports many!&lt;/p&gt;

&lt;h2&gt;
  
  
  Packer and Salt Quickstart
&lt;/h2&gt;

&lt;p&gt;Packer is extremely easy to get started with; like the rest of the Hashicorp products, it is simply a case of download, extract and run. Head over to my YouTube Channel to watch my how-to video.&lt;/p&gt;

&lt;p&gt;Clone the tutorial repo from &lt;a href="https://gitlab.com/fluffy-clouds-and-lines/packer-and-salt-lamp-ami"&gt;https://gitlab.com/fluffy-clouds-and-lines/packer-and-salt-lamp-ami&lt;/a&gt;. You should have a structure that looks like this;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── aws_vars.json
├── README.md
├── salt_tree
│   └── srv
│   ├── pillar
│   │   ├── apache.sls
│   │   ├── mysql.sls
│   │   └── top.sls
│   └── salt
│   ├── apache
│   ├── mysql
│   ├── php
│   └── top.sls
└── template.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;aws_vars.json&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is used to provide variable data, to avoid having to specify it on each invocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;salt_tree&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Contains the Salt declarations to install and configure our LAMP stack components. This is copied by Packer to the remote host during the build, then executed by Salt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;template.json&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Packer build declaration that specifies the base AMI to use, and how Salt should be invoked during the build process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Packer Run
&lt;/h2&gt;

&lt;p&gt;Running Packer is as simple as it gets. After cloning the repo, all you need to decide is whether or not to put your credentials into a file.&lt;/p&gt;

&lt;p&gt;To prompt (or use your AWS CLI credentials if configured), from the project root, run;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packer build template.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;or, if you have created a variables file;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packer build -var-file=./aws_vars.json template.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The build should take around 5 minutes to create the base instance, apply the Salt configuration and generate the final AMI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inside template.json
&lt;/h2&gt;

&lt;p&gt;Packer templates have 3 main sections (excluding Post-Processors which are optional and are for specific use-cases);&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Used to abstract sensitive or dynamic information out of your code, so it can be provided to the template at runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Builders&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Does the heavy lifting: creating a machine from an appropriate base image to build upon and, post-provisioning, wrapping it up ready for use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provisioners&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The way to actually apply changes to your build, using scripts or configuration management tools (Chef, Puppet, Ansible, Salt etc).&lt;/p&gt;

&lt;p&gt;Check out the Packer documentation for the latest list of available Builders and Provisioners.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  }
...
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our variables are declared so they can be used in subsequent parts of the template. They can be provided at runtime, via a JSON file, environment variables, Consul or Vault (cool eh?).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
...
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "eu-west-2",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "*ubuntu-bionic-18.04*",
          "root-device-type": "ebs"
        },
        "owners": [
          "099720109477"
        ],
        "most_recent": true
      },
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "wordpress-ha-node {{timestamp}}"
    }
  ],
 ...
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our builder specifies we want an EBS backed AMI, based upon the latest Ubuntu 18.04 image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
...
  "provisioners": [
    {
      "type": "salt-masterless",
      "local_state_tree": "./salt_tree/srv/salt",
      "local_pillar_roots": "./salt_tree/srv/pillar",
      "salt_call_args": "pillar='{\"role\":\"builder\"}'"
    },
    {
      "type": "shell",
      "inline": [
        "rm -rf /srv/salt",
        "rm -rf /srv/pillar"
      ],
      "execute_command": "sudo sh -c '{{ .Vars }} {{ .Path }}'"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We use two provisioners here (you can have multiple per build) that run in sequence. The first provisioner uses Salt in masterless mode (no central server), uploads the pre-defined Salt tree to the host and applies it. The second provisioner is to get round Terraform bug &lt;a href="https://github.com/hashicorp/terraform/issues/20323"&gt;#20323&lt;/a&gt;, which stops Terraform re-running the Salt masterless provisioner against this image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inside the Salt Tree
&lt;/h2&gt;

&lt;p&gt;I have found Salt a strange beast to learn; in some cases it is very easy to understand, but some of the terminology takes time to get used to. This is not designed to be a full Salt intro, but more to explain the decisions taken with the Salt definition this project uses.&lt;/p&gt;

&lt;p&gt;Salt is a configuration management tool that takes configuration files and uses them to apply a desired state to a system. Salt at a high level uses States to define how to apply the configuration, and Pillars to provide variable data. It is similar to Packer having a template and a separate variables file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/srv/salt/top.sls&lt;/code&gt; is the leader of the show here. It defines which States apply to any given Salt machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;base:
  'role:builder': 
    - match: pillar # Match on 'role' passed in as additional Pillar data via salt_call_args
    - php
    - php.mysql
    - php.mysqlnd
    - apache
    - apache.config
    - apache.vhosts.standard
    - mysql # We don't need MySQL Server (using RDS instead), but can't be removed presently due to bug
    - mysql.config
    - mysql.client
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;top.sls&lt;/code&gt; used in our tree;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Will apply PHP, Apache and MySQL states to the host. Each corresponding folder is a prebuilt Salt state called a Salt Formula. All were sourced from &lt;a href="https://github.com/saltstack-formulas"&gt;https://github.com/saltstack-formulas&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;'role:builder'&lt;/code&gt; is a filter to decide which hosts to apply state to. In our next tutorial you will see how we use the same tree for different purposes based on role. &lt;/li&gt;
&lt;li&gt;Each list item, e.g. &lt;code&gt;php&lt;/code&gt; or &lt;code&gt;mysql.client&lt;/code&gt;, is a folder in the Salt tree. Periods mark subfolders. It is quite common for larger formulas to be split out like this.&lt;/li&gt;
&lt;/ul&gt;
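&lt;p&gt;The dot-to-path rule can be sketched as a tiny helper (hypothetical, purely illustrative - note that Salt will also accept an &lt;code&gt;init.sls&lt;/code&gt; inside a directory of the same name);&lt;/p&gt;

```python
import os

def state_path(state, root="salt_tree/srv/salt"):
    # 'mysql.client' resolves to root/mysql/client.sls;
    # Salt would equally accept root/mysql/client/init.sls.
    return os.path.join(root, *state.split(".")) + ".sls"
```

&lt;p&gt;So &lt;code&gt;state_path("mysql.client")&lt;/code&gt; resolves to &lt;code&gt;salt_tree/srv/salt/mysql/client.sls&lt;/code&gt; in the tree shown earlier.&lt;/p&gt;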

&lt;p&gt;&lt;code&gt;/srv/pillar/top.sls&lt;/code&gt; is our Pillar configuration root. This;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defines configuration for the states to be applied by &lt;code&gt;/srv/salt/top.sls&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Some Salt Formulas have defaults that are sensible and therefore will not have a corresponding Pillar entry.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will notice that in our Packer provisioner block we provide &lt;code&gt;salt_call_args&lt;/code&gt; with a value of &lt;code&gt;"pillar='{"role":"builder"}'"&lt;/code&gt;. This provides supplementary Pillar data that can then be used to decide which parts of &lt;code&gt;/srv/salt/top.sls&lt;/code&gt; are applied. There is a lot of flexibility around this, and there are other methods to filter this file.&lt;/p&gt;

&lt;p&gt;I am certainly no Salt expert; I have used pre-built Salt Formulas and wired them together to create my desired setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;If you've got this far, you should hopefully have had a very quick intro to Packer and Salt, and successfully managed to build an image, like so;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amazon-ebs: -------------
    amazon-ebs: Succeeded: 27 (changed=17)
    amazon-ebs: Failed: 0
    amazon-ebs: -------------
    amazon-ebs: Total states run: 27
    amazon-ebs: Total run time: 69.343 s
==&amp;gt; amazon-ebs: Provisioning with shell script: /tmp/packer-shell598436168
==&amp;gt; amazon-ebs: Stopping the source instance...
    amazon-ebs: Stopping instance
==&amp;gt; amazon-ebs: Waiting for the instance to stop...
==&amp;gt; amazon-ebs: Creating AMI wordpress-ha-node 1558710754 from instance i-08f3e01d313901737
    amazon-ebs: AMI: ami-0477fc2kjw982c28e81
==&amp;gt; amazon-ebs: Waiting for AMI to become ready...
==&amp;gt; amazon-ebs: Terminating the source AWS instance...
==&amp;gt; amazon-ebs: Cleaning up any extra volumes...
==&amp;gt; amazon-ebs: No volumes to clean up, skipping
==&amp;gt; amazon-ebs: Deleting temporary security group...
==&amp;gt; amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==&amp;gt; Builds finished. The artifacts of successful builds are:
--&amp;gt; amazon-ebs: AMIs were created:
eu-west-2: ami-0477fc2kjw982c28e81
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Happy infrastructure coding! Comments and questions welcome below!&lt;/p&gt;

</description>
      <category>packer</category>
      <category>iac</category>
      <category>aws</category>
    </item>
    <item>
      <title>Using django-allauth for Google Login to any Django App</title>
      <dc:creator>tomwerneruk</dc:creator>
      <pubDate>Wed, 09 Jan 2019 13:33:00 +0000</pubDate>
      <link>https://dev.to/tomwerneruk/using-django-allauth-for-google-login-to-any-django-app-2b2l</link>
      <guid>https://dev.to/tomwerneruk/using-django-allauth-for-google-login-to-any-django-app-2b2l</guid>
      <description>&lt;p&gt;If you're using &lt;a href="https://github.com/pydanny/cookiecutter-django"&gt;django-cookiecutter&lt;/a&gt; for your new projects (and if you're not, you should) you may know that it comes with out of the box user login management, backed up by django-allauth. This package provides you the framework to build your own user management experience.&lt;/p&gt;

&lt;p&gt;For a recent app I have been working on, I wanted to allow login via Google, but to ensure that only pre-approved users could log in using social login - the application is for a closed user group, not designed for the random public. It's essentially access via invite only. I tried django-invitations, but it felt more geared towards non-social login (sorry guys if I am mis-selling what looks like a great library!). Therefore, it was back to customising django-allauth.&lt;/p&gt;

&lt;p&gt;This tutorial isn't going to cover getting django-allauth working from scratch - either take a peek at the django-cookiecutter source, or follow the installation from the docs &lt;a href="https://django-allauth.readthedocs.io/en/latest/installation.html"&gt;here&lt;/a&gt;; we're looking at novel ways to change the django-allauth workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Changing Behaviour
&lt;/h2&gt;

&lt;p&gt;django-allauth allows for over-riding default behaviour to build custom functionality via the use of adapters. Using cookiecutter? A custom adapter stub has already been created for you under &lt;code&gt;&amp;lt;project name&amp;gt;/users/adapters.py&lt;/code&gt;. If not, go ahead and create an adapters.py file under one of your Django apps in your project - if you have a 'core' or 'settings' app this might be the best place for it. The location doesn't really matter, as you have to specify it in your Django settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
ACCOUNT_ADAPTER = 'myapp.users.adapters.AccountAdapter'
SOCIALACCOUNT_ADAPTER = 'myapp.users.adapters.SocialAccountAdapter'
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;settings.py&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from allauth.account.adapter import DefaultAccountAdapter
from allauth.socialaccount.adapter import DefaultSocialAccountAdapter
from django.conf import settings
from django.http import HttpRequest

class AccountAdapter(DefaultAccountAdapter):

    def is_open_for_signup(self, request: HttpRequest):
        return getattr(settings, "ACCOUNT_ALLOW_REGISTRATION", True)

class SocialAccountAdapter(DefaultSocialAccountAdapter):

    def is_open_for_signup(self, request: HttpRequest, sociallogin: Any):
        return getattr(settings, "ACCOUNT_ALLOW_REGISTRATION", True)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;adapters.py&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All credit for this code snip goes to the django-cookiecutter devs; it's simple and gets the point across. At the moment both Account and Social based logins are open for business, so this adapter has no net effect on operation.&lt;/p&gt;

&lt;p&gt;My requirement is to allow Social login and signup only, so first off, let's disable account based signup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
class AccountAdapter(DefaultAccountAdapter):

    def is_open_for_signup(self, request: HttpRequest):
        return getattr(settings, "ACCOUNT_ALLOW_REGISTRATION", False)
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;adapters.py&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you head over to &lt;code&gt;http://app/accounts/signup/&lt;/code&gt;, you should be told that signup is closed. Great. Next, to get social login working. To take the example of Google, you need to do the following;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ensure you have run &lt;code&gt;./manage.py migrate&lt;/code&gt; to ensure that the required social login tables have been created,&lt;/li&gt;
&lt;li&gt;ensure you have added &lt;code&gt;'allauth.socialaccount.providers.google',&lt;/code&gt; to your &lt;code&gt;INSTALLED_APPS&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;head over and create your &lt;a href="https://console.developers.google.com/apis/credentials"&gt;Google OAuth Credentials&lt;/a&gt;,&lt;/li&gt;
&lt;li&gt;Login to your Django Admin and create a new Social Application object of type 'Google'.&lt;/li&gt;
&lt;/ul&gt;
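&lt;p&gt;Pulling those steps together, the relevant settings fragment looks something like this (a sketch only - the rest of your &lt;code&gt;INSTALLED_APPS&lt;/code&gt; is elided);&lt;/p&gt;

```python
# settings.py fragment: the allauth apps plus the Google provider.
# django.contrib.sites is a prerequisite for allauth.
INSTALLED_APPS = [
    # ... django.contrib and your own apps ...
    "django.contrib.sites",
    "allauth",
    "allauth.account",
    "allauth.socialaccount",
    "allauth.socialaccount.providers.google",
]

SITE_ID = 1  # allauth requires the sites framework to be configured
```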

&lt;p&gt;If you head to &lt;code&gt;http://app/accounts/login/&lt;/code&gt;, Google should be listed as a login provider. Test your Google login; it should allow you to log in with your Google account without any issues. A new user will be created under Users and Social Accounts.&lt;/p&gt;

&lt;p&gt;We're nearly there; the last step is to restrict to 'pre-approved' users only. Go ahead and delete, via Django Admin, the Social Account object created just now (leave the User in place). The way we are going to restrict to pre-approved users is to change the SocialAccountAdapter behaviour. We only want social logins from an email address with a valid User object already created (either manually via Django Admin or another management screen).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class SocialAccountAdapter(DefaultSocialAccountAdapter):

    def pre_social_login(self, request, sociallogin):
        try:
            get_user_model().objects.get(email=sociallogin.user.email)
        except get_user_model().DoesNotExist:
            from django.contrib import messages
            messages.add_message(request, messages.ERROR, 'Social logon from this account not allowed.') 
            raise ImmediateHttpResponse(HttpResponse(status=500))
        else:
            user = get_user_model().objects.get(email=sociallogin.user.email)
            if not sociallogin.is_existing:
                sociallogin.connect(request, user) 

    def is_open_for_signup(self, request, sociallogin):        
        return True
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is our updated Social adapter. &lt;code&gt;is_open_for_signup&lt;/code&gt; still returns True, as even a pre-authorised User hits the signup code path on their first login. &lt;code&gt;pre_social_login&lt;/code&gt; is invoked when social login is completed, but before the conversion to a valid Django session is done. Therefore, this is the point to check for pre-authorisation.&lt;/p&gt;

&lt;p&gt;The try block looks for an existing User against the email provided by the social login request. If a user exists, link the social login and continue; if not, abort the attempt and leave an update in the 'message' queue.&lt;/p&gt;

&lt;p&gt;That's it, this should give a basis for pre-approved social login. A few improvements to make this production ready;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ensure you force email verification, as some social networks (e.g. Facebook) have been flagged as having questionable verification procedures for email addresses. Forcing email verification adds an extra layer of protection,&lt;/li&gt;
&lt;li&gt;Start to override the default templates to provide logon buttons / logos for each provider you decide to use,&lt;/li&gt;
&lt;li&gt;make the user experience slicker by suppressing the need for a username. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A suggested config to get you started;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_EMAIL_VERIFICATION = 'mandatory'
ACCOUNT_ADAPTER = 'myapp.users.adapters.AccountAdapter'
SOCIALACCOUNT_ADAPTER = 'myapp.users.adapters.SocialAccountAdapter'
SOCIALACCOUNT_QUERY_EMAIL = True
SOCIALACCOUNT_AUTO_SIGNUP = True
ACCOUNT_USERNAME_REQUIRED = False
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That's all for now. As ever comments and questions below!&lt;/p&gt;

</description>
      <category>python</category>
      <category>cookiecutter</category>
      <category>django</category>
    </item>
    <item>
      <title>Hardening Traefik when using the Docker Provider</title>
      <dc:creator>tomwerneruk</dc:creator>
      <pubDate>Thu, 06 Dec 2018 14:06:47 +0000</pubDate>
      <link>https://dev.to/tomwerneruk/hardening-traefik-when-using-the-docker-provider-55f3</link>
      <guid>https://dev.to/tomwerneruk/hardening-traefik-when-using-the-docker-provider-55f3</guid>
      <description>&lt;p&gt;This &lt;a href="https://github.com/containous/traefik/issues/4174" rel="noopener noreferrer"&gt;issue&lt;/a&gt; on the Traefik GitHub tracker piqued my interest the other day. The Docker Socket does come up as the Achilles heel at times, with different mitigations to secure it - Proxy Containers, exposing via TLS with Authentication and Authorisation. What you choose is down to your risk appetite and your broader environment.&lt;/p&gt;

&lt;p&gt;In this scenario, having the Docker Socket bound to an Internet-facing container wasn't a risk I wanted to take. A few options have been floated already (Sock Proxy, Switch to TLS etc., Switch to static file config). However, I wanted to take a stab at an alternative route.&lt;/p&gt;

&lt;p&gt;The result is &lt;code&gt;traefik-prism&lt;/code&gt;. It is a simple Python script that takes a valid Traefik Dynamic Config and publishes it to an Internet-facing container, which isn't attached to Docker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffluffycloudsandlines.blog%2Fcontent%2Fimages%2F2018%2F12%2Ftraefik-prism-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffluffycloudsandlines.blog%2Fcontent%2Fimages%2F2018%2F12%2Ftraefik-prism-architecture.png" alt="Hardening Traefik when using the Docker Provider"&gt;&lt;/a&gt;&lt;strong&gt;Example Deployment for traefik-prism&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It takes advantage of the built-in APIs in Traefik - no magic here. It is designed to handle different Traefik providers, not just Docker (hint: comments from other provider users warmly welcomed). Roughly, the script;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;retrieves the current Dynamic config (the frontends and backends) from &lt;code&gt;/api&lt;/code&gt; on the API Endpoint (normally port 8080),&lt;/li&gt;
&lt;li&gt;extracts the frontends and backends from the response, based off the &lt;code&gt;PROVIDERS&lt;/code&gt; environment variable, then merges the potentially multiple blocks (say &lt;code&gt;file&lt;/code&gt; and &lt;code&gt;docker&lt;/code&gt;) into one config,&lt;/li&gt;
&lt;li&gt;finally, pushes the new merged config to &lt;code&gt;/api/endpoints/rest&lt;/code&gt;. The new config should be active immediately.&lt;/li&gt;
&lt;/ul&gt;
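&lt;p&gt;The merge step is easy to picture in isolation. A minimal sketch (this is not the actual traefik-prism code; the dict shape simply mirrors the frontends/backends blocks described above);&lt;/p&gt;

```python
def merge_providers(api_response, providers):
    # Combine the frontends and backends of the selected providers
    # into a single dynamic config, ready to push to the rest endpoint.
    merged = {"frontends": {}, "backends": {}}
    for name in providers:
        block = api_response.get(name, {})
        merged["frontends"].update(block.get("frontends", {}))
        merged["backends"].update(block.get("backends", {}))
    return merged
```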

&lt;p&gt;Pushing from a higher security level zone (in this case a container with access to the Docker socket) to a lower one is a common pattern for securing systems. There isn't any 'sensitive' data to leak here, so I'm not too bothered about content checking as we move from high to low.&lt;/p&gt;

&lt;p&gt;Check out the code on &lt;a href="https://github.com/tomwerneruk/traefik-prism" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; or pull the image straight from &lt;a href="https://hub.docker.com/r/tomwerneruk/traefik-prism/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As ever, thoughts and comments always welcome!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>architecture</category>
      <category>python</category>
    </item>
    <item>
      <title>Why you need Single Node Swarm for Development</title>
      <dc:creator>tomwerneruk</dc:creator>
      <pubDate>Wed, 10 Oct 2018 14:54:00 +0000</pubDate>
      <link>https://dev.to/tomwerneruk/why-you-need-single-node-swarm-for-development-i7k</link>
      <guid>https://dev.to/tomwerneruk/why-you-need-single-node-swarm-for-development-i7k</guid>
      <description>

&lt;p&gt;Docker Swarm has matured enough that its adoption is starting to pick up. Docker Captains are working with real clients rolling it out every day. That's fine for clustered workloads, but what about locally?&lt;/p&gt;

&lt;p&gt;Should you be using Swarm Mode even for local development? In my opinion, TL;DR - Yes.&lt;/p&gt;

&lt;p&gt;Swarm Mode has many benefits which only make sense in production multi Docker host scenarios, but used locally it can add value and reduce differences between Development and Production (#win). What does Swarm actually do?&lt;/p&gt;

&lt;p&gt;Swarm elevates a random collection of Docker hosts (or just one) into a cluster, which orchestrates (fancy word for automatic management) starting and stopping containers, managing cluster-wide information and providing transparent connectivity between hosts.&lt;/p&gt;

&lt;p&gt;With one host, transparent connectivity isn't a win, but the same interface for cluster information management (Secrets, Configs, Labels) and the advanced service management is. In the same way that Compose is an evolution over plain Docker, Swarm is a step forward over Compose. Here are my top 3 reasons why you should be thinking about using Swarm locally.&lt;/p&gt;




&lt;h2&gt;
  
  
  Reduce Surprises - Simplification and Consistency
&lt;/h2&gt;

&lt;p&gt;Let's face &lt;strong&gt;&lt;em&gt;one of&lt;/em&gt;&lt;/strong&gt; the elephants in the room. Docker Swarm (as at 18.09) doesn't currently have feature parity with &lt;code&gt;docker run&lt;/code&gt;. Excluding concepts that just plain don't make sense at the service level of abstraction (e.g. container names, restart policy), some of the more 'fringe' options are not supported (sysctl kernel tuning, host device mapping) - although some are currently work-in-progress. There is an excellent tracker of gaps (and progress) &lt;a href="https://github.com/moby/moby/issues/25303"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This means that a compose file that works when targeting &lt;code&gt;docker-compose&lt;/code&gt; isn't guaranteed to work 100% with Docker Swarm without tweaks. Sorry, but nothing is perfect.&lt;/p&gt;

&lt;p&gt;If Swarm is still for you in production, then it makes sense to use Swarm locally to avoid having to;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;effectively learn two product 'versions' at once,&lt;/li&gt;
&lt;li&gt;maintain two or more compose files in parallel, creating potential for more mistakes.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Secrets
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/engine/swarm/secrets/"&gt;Docker Secrets&lt;/a&gt; allow the secure publishing of sensitive details for an application to consume, taking the recommendations of the &lt;a href="https://12factor.net/"&gt;12 Factor App&lt;/a&gt; to a new level by providing a much more granular approach compared to environment variables. They are &lt;strong&gt;great&lt;/strong&gt; , but they do have limitations; nothing is entirely secure (the old adage, if you have host access, nothing is secure) and it needs to be supported by the application (or an appropriate entrypoint script).&lt;/p&gt;

&lt;p&gt;Accepting these limitations, they are only available in Docker Swarm, as their implementation is tied to Swarm's internal cluster &lt;a href="https://docs.docker.com/engine/swarm/raft/"&gt;Raft&lt;/a&gt; database. That means, if you're using plain docker or Compose you can't access them (see below for an exception). But as we've just agreed above, your application will likely need changes to take advantage of Secrets, so do you really want to start adding if statements to code to handle Dev vs. Prod? (answer is no).  &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Yes, you can use Docker Secrets with Compose, but they get namespaced by Compose (&amp;lt;stack&amp;gt;_ is prepended), which means you need to maintain two different references to the same secret depending on environment (again not good, as it causes differences between Dev and Prod). It is possible, but it comes down to you, your development flow and how your code is organised.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This means, in reality, that if you want Secrets in your application, you should use Swarm locally. There are ways around it (simulating, injecting environment variables, branching in code), but they all point to the fact that you are doing extra work to reach the same point.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next hurdle: Secrets are immutable (they can't be viewed or amended once created). This is not development friendly - these values are likely to change as you iterate. Thankfully, there is an option to source secrets from a file. Keeping them in files makes it much quicker and easier to tear down a stack and its secrets, then recreate them, when you rebuild your environment or need to change a secret.&lt;/p&gt;
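&lt;p&gt;&lt;em&gt;The file-based workflow for a change then becomes a quick tear-down and redeploy - something like this (the stack name is illustrative; the secret path matches the development file below):&lt;/em&gt;&lt;/p&gt;

```shell
# Rotate a file-sourced secret in development: update the file, then
# recreate the stack. The stack name 'devstack' is illustrative.
printf 'new-dev-value' | tee ./compose/local/secrets/application_secret
docker stack rm devstack
docker stack deploy -c development.yml devstack
```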

&lt;p&gt;For example, on one of my current projects (which has separate development and production compose files - multi-stage builds planned!), I can cleanly reference the same secret in my code, but have a simple workflow to change it in Development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;development.yml&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'

services:
    appserver:    
        image: appserver:latest    
        secrets:      
            - application_secret

secrets:  
    application_secret:    
        file: ./compose/local/secrets/application_secret
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;production.yml&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'

services:
    appserver:    
        image: appserver:latest    
        secrets:      
            - application_secret

secrets:  
    application_secret:    
        external: true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The only difference here is the 'source' of the secret.&lt;/p&gt;

&lt;p&gt;Sourcing from a plain file somewhat defeats the purpose of a secret, but it preserves the interface your application sees.&lt;/p&gt;

&lt;p&gt;Marking it as external in production requires manual intervention to create the secret; however, in larger organisations the person who knows the contents of the secret may well be different from the person deploying the code. Where security matters at deployment time, separation of responsibility keeps the clipboards at bay.&lt;/p&gt;
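&lt;p&gt;&lt;em&gt;That manual step might look something like this - the value is piped in on stdin so it never appears as a command-line argument (variable and stack names are illustrative):&lt;/em&gt;&lt;/p&gt;

```shell
# The person who holds the value creates the secret once, ahead of time...
printf '%s' "$PROD_SECRET_VALUE" | docker secret create application_secret -

# ...and the person deploying the stack never needs to see it.
docker stack deploy -c production.yml myapp
```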

&lt;p&gt;Keeping as much consistency as possible between environments reduces the complexity of the code that handles our secrets and keeps things clean.&lt;/p&gt;




&lt;h3&gt;
  
  
  Test Scaling
&lt;/h3&gt;

&lt;p&gt;Horizontal scaling of an application requires thought. Just because Docker can run multiple instances of a service doesn't automatically mean your application will cope with being run in this manner.&lt;/p&gt;

&lt;p&gt;Run your code at scale locally (just because you have one host doesn't mean you can't run multiple instances of a container) and shake these issues out early. This will start to weed out problems with concurrent access to databases, message queues and files on shared storage (to name a few). Swarm will still load balance between multiple instances automatically, even on one host.&lt;/p&gt;
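&lt;p&gt;&lt;em&gt;For example, on a single-node Swarm (the service name is illustrative):&lt;/em&gt;&lt;/p&gt;

```shell
# Run three replicas of the appserver service on one host; Swarm's
# ingress mesh load balances requests across them automatically.
docker service scale mystack_appserver=3

# Confirm all three tasks are running (all on this one node).
docker service ps mystack_appserver
```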




&lt;p&gt;In the background I come from, predictability, simplicity and ease of handover from Development to Operations trump novelty. In my eyes, adopting Swarm locally helps reduce surprises and makes life easier throughout the whole Develop and Deploy lifecycle. That's all for now!&lt;/p&gt;

</description>
      <category>docker</category>
    </item>
  </channel>
</rss>
