<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Firmhouse</title>
    <description>The latest articles on DEV Community by Firmhouse (@firmhouse).</description>
    <link>https://dev.to/firmhouse</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F363%2F79583ba3-a6b7-40cc-bd77-9895083c8792.jpg</url>
      <title>DEV Community: Firmhouse</title>
      <link>https://dev.to/firmhouse</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/firmhouse"/>
    <language>en</language>
    <item>
      <title>Our deployment strategy for on-premise customers (in China)</title>
      <dc:creator>Michiel Sikkes</dc:creator>
      <pubDate>Thu, 10 Sep 2020 10:14:58 +0000</pubDate>
      <link>https://dev.to/firmhouse/our-deployment-strategy-for-on-premise-customers-in-china-4136</link>
      <guid>https://dev.to/firmhouse/our-deployment-strategy-for-on-premise-customers-in-china-4136</guid>
      <description>&lt;p&gt;Last year (2019), one of our customers asked if it was possible to run our full recurring commerce platform to sell product subscriptions in China. That sounded like a great challenge. Hosting something in China is not as straightforward as hosting something for the rest of the world.&lt;/p&gt;

&lt;p&gt;In this article, I'll show you how we've adapted our deployment pipeline to support on-premise deployments for our customers. I'm using China as an example because it comes with some unique challenges, but you can use the same approach to set up an on-premise deployment for your application on any type of infrastructure.&lt;/p&gt;

&lt;h1&gt;
  
  
  What's up with China?
&lt;/h1&gt;

&lt;p&gt;Doing business in mainland China is fully locked down for any non-Chinese company. To do business there, you need a domestic business license for a specific product category, and you need to deal with The Great Firewall, which makes your connection really slow or unavailable when you're hosting outside of mainland China. So getting local infrastructure in mainland China is critical.&lt;/p&gt;

&lt;p&gt;Unfortunately, hosting or deploying apps on mainland China servers is also fully locked down and only accessible to Chinese citizens and companies. So that's what we were facing.&lt;/p&gt;

&lt;p&gt;We had to set up an on-premise environment of our platform in our client's hosting account on Aliyun (Alibaba Cloud), and in doing so, make sure our existing deployment processes and hosting tooling would still work.&lt;/p&gt;

&lt;h1&gt;
  
  
  Our approach
&lt;/h1&gt;

&lt;p&gt;This is the deployment approach we ended up with. Below, I'll explain the configuration and setup of each important component in detail, along with some examples and snippets of specific configuration setups.&lt;/p&gt;

&lt;p&gt;On every PR merge into our main branch on &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;, the following steps are taken:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://circleci.com"&gt;CircleCI&lt;/a&gt; runs our tests, security checks, dependency scanners, etc.&lt;/li&gt;
&lt;li&gt;On a successful build, CircleCI kicks off a special production Docker image build.&lt;/li&gt;
&lt;li&gt;CircleCI uploads the new production &lt;a href="https://docker.com"&gt;Docker&lt;/a&gt; image to a repository on Docker Hub. The Docker image is tagged with the Git commit SHA.&lt;/li&gt;
&lt;li&gt;A developer logs into the on-premise environment and deploys the image via &lt;a href="http://dokku.viewdocs.io/dokku/"&gt;Dokku's&lt;/a&gt; container-based deployment. The Git commit SHA is used to identify the release to deploy.&lt;/li&gt;
&lt;li&gt;Dokku runs its regular deployment steps, application restarts, asset compilation, migrations, etc.&lt;/li&gt;
&lt;li&gt;New release is live!&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Server and database setup
&lt;/h2&gt;

&lt;p&gt;The deployment process is pretty independent of the final server or database setup. All you need is the ability to run container images and to have your various database services available.&lt;/p&gt;

&lt;p&gt;Here's what we used in China on &lt;a href="https://aliyun.com"&gt;Aliyun&lt;/a&gt; (Alibaba Cloud). Parts can easily be adapted or replaced with any other cloud infrastructure or local component via &lt;a href="http://dokku.viewdocs.io/dokku/community/plugins/"&gt;Dokku plugins&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.alibabacloud.com/product/ecs"&gt;Aliyun Elastic Compute Service&lt;/a&gt; instance with Ubuntu LTS.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.alibabacloud.com/product/apsaradb-for-rds-postgresql"&gt;Aliyun ApsaraDB RDS for PostgreSQL&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.alibabacloud.com/product/apsaradb-for-redis"&gt;Aliyun ApsaraDB for Redis&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.terraform.io"&gt;Terraform&lt;/a&gt; and &lt;a href="https://www.ansible.com"&gt;Ansible&lt;/a&gt; for creating provisioning the servers with our standard server setup.&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://dokku.viewdocs.io/dokku/"&gt;Dokku&lt;/a&gt; as on-server hosting and deployment platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, we used Aliyun Cloud's managed services for PostgreSQL and Redis. However, you can change this up if you're deploying to an on-premise environment on some kind of (virtual) server in a datacenter somewhere.&lt;/p&gt;

&lt;p&gt;You can, for example, use Dokku's &lt;a href="https://github.com/dokku/dokku-redis"&gt;Redis&lt;/a&gt; and &lt;a href="https://github.com/dokku/dokku-postgres"&gt;PostgreSQL&lt;/a&gt; plugins. These plugins let you run Redis and PostgreSQL on the same single server your application container runs on. Additionally, they make sure that only your application can access these services, and by default they are not reachable from the public internet.&lt;/p&gt;
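&lt;p&gt;For example, installing and wiring up the PostgreSQL plugin looks roughly like this (the commands follow the dokku-postgres README; the app and service names are placeholders):&lt;/p&gt;

```
$ sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
$ dokku postgres:create myapp-db
$ dokku postgres:link myapp-db myapp
```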

&lt;h2&gt;
  
  
  Dokku with container-based deploys
&lt;/h2&gt;

&lt;p&gt;We're big fans of &lt;a href="http://dokku.viewdocs.io/dokku/"&gt;Dokku&lt;/a&gt; for relatively simple on-premise deployments. Dokku is very well supported, easy to set up, and you have your apps running in no time.&lt;/p&gt;

&lt;p&gt;Dokku takes care of deployment access control, deploy and migration steps, versions, scaling, webserver hosting, etc. It also has great plugins for backups, various databases, and other components you might need. You can configure ENV vars per Dokku-managed application so that you can set configuration settings and database connections at runtime.&lt;/p&gt;

&lt;p&gt;Dokku is your own little mini app deployment Platform as a Service running on your own infrastructure. &lt;a href="http://dokku.viewdocs.io/dokku/deployment/application-deployment/"&gt;Read here on how to deploy your app via Dokku&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With Dokku, you can either deploy your apps using the &lt;strong&gt;git push&lt;/strong&gt; deployment strategy, or have it deploy your &lt;strong&gt;Docker containers&lt;/strong&gt;. At first, we used the git push deployment method. Later we switched to a Docker container-based deploy method.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why we don't use "git push" for deployment
&lt;/h3&gt;

&lt;p&gt;One problem with the &lt;strong&gt;git push&lt;/strong&gt; method is that every on-premise server ends up running a slightly different version of your app, even when it's based on the same commit. This is because git push in Dokku builds a new container image on every deploy, on every server. So you cannot be certain that your application image is exactly the same in each on-premise environment you manage.&lt;/p&gt;

&lt;p&gt;In addition, the Docker container build is triggered for every deploy on every server. And deploying an app can be quite CPU-intensive for a short period of time. You then risk pulling down your production app if you do not have access to a large enough server.&lt;/p&gt;

&lt;p&gt;For China we had an additional problem with the git push method, related to The Great Firewall. The internet connection from Europe to China is very unreliable and/or very slow. It could sometimes take hours to deploy a single commit, as our codebase would have to be pushed to the server in China. Dokku also needs to download a lot of images and dependencies during a deploy. We'd see connections stalling, pausing for hours, or simply timing out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Switching to Docker image-based deployment
&lt;/h3&gt;

&lt;p&gt;So all these problems with &lt;strong&gt;git push&lt;/strong&gt;-based deployment led us to switch to Docker image-based deployments with Dokku. It was definitely more work to set up, but it resulted in a much smoother and faster deployment process.&lt;/p&gt;

&lt;p&gt;The Dokku documentation can tell you &lt;a href="http://dokku.viewdocs.io/dokku/deployment/methods/images/"&gt;how to use Docker images for your deployments&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After switching, a deployment from our side basically looked like running the following commands on the on-premise server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker pull firmhouse/platform:&amp;lt;Commit SHA&amp;gt; 
$ docker tag firmhouse/platform:&amp;lt;Commit SHA&amp;gt; dokku/platform:&amp;lt;Commit SHA&amp;gt; 
$ dokku tags:deploy platform &amp;lt;Commit SHA&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Commands to deploy a new release in Dokku
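&lt;p&gt;To reduce typing (and typos), those three commands can be wrapped in a small helper. This is a sketch rather than our actual tooling: the app and image names come from the snippet above, and the DRY_RUN flag is a hypothetical addition that prints each command instead of executing it, so you can sanity-check a release first.&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical wrapper around the three-step Dokku deploy shown above.
# With DRY_RUN=1 each command is printed instead of executed,
# so no Docker or Dokku installation is needed to preview a release.

APP="platform"
IMAGE="firmhouse/$APP"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$*"
  else
    "$@"
  fi
}

deploy_release() {
  SHA="$1"
  run docker pull "$IMAGE:$SHA"
  run docker tag "$IMAGE:$SHA" "dokku/$APP:$SHA"
  run dokku tags:deploy "$APP" "$SHA"
}

# Dry-run a deploy of commit abc1234: prints the pull, tag and deploy commands.
DRY_RUN=1
deploy_release abc1234
```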





&lt;h2&gt;
  
  
  Building the production Docker image
&lt;/h2&gt;

&lt;p&gt;CircleCI is our CI of choice: it runs our test suites, builds our Docker containers, and sometimes even deploys our apps straight away.&lt;/p&gt;

&lt;p&gt;Here are a few configuration snippets on how we set things up on CircleCI to build a Docker image and push it to our Docker Hub account.&lt;/p&gt;

&lt;h3&gt;
  
  
  CircleCI configuration for building and pushing the image
&lt;/h3&gt;

&lt;p&gt;We have a special build step in our CircleCI workflow that builds our production image and then pushes it to Docker Hub.&lt;/p&gt;

&lt;p&gt;For extra security we have a separate Docker Hub user for every repository so that we can easily revoke access from CircleCI in the case of a breach.&lt;/p&gt;

&lt;p&gt;Here are the relevant parts from our &lt;strong&gt;circle.yml&lt;/strong&gt; configuration. This file lives in our application codebase and is automatically picked up by CircleCI on every push to the repository on GitHub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 2
jobs:
  build:
    # Regular build steps. Redacted from this snippet.
  build_and_push_production_image:
    working_directory: ~/circleci-app
    docker:
      - image: circleci/ruby:2.5
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Checkout on-premise branch
          command: git checkout master
      - run:
          name: Build Image
          command: docker build -t firmhouse/platform:$CIRCLE_SHA1 . -f Dockerfile-production
      - run:
          name: Tag latest
          command: docker tag firmhouse/platform:$CIRCLE_SHA1 firmhouse/platform:latest
      - run:
          name: Login to Docker Hub
          command: echo $DOCKER_PASSWORD | docker login -u $DOCKER_USER --password-stdin
      - run:
          name: Push commit-specific image to Hub
          command: docker push firmhouse/platform:$CIRCLE_SHA1
      - run:
          name: Push latest tag to Hub
          command: docker push firmhouse/platform:latest

workflows:
  version: 2
  main_flow:
    jobs:
      - build
      - build_and_push_production_image:
          requires:
            - build
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Snippet from our CircleCI configuration





&lt;h3&gt;
  
  
  Dockerfile for production
&lt;/h3&gt;

&lt;p&gt;We have a &lt;strong&gt;Dockerfile-production&lt;/strong&gt; in our codebase that is used specifically for building the image to be deployed to production. It uses the officially supported Ruby base images with Alpine as the base distribution. It is also set up as a multi-stage build so that we don't leave any development/build dependencies in the final image.&lt;/p&gt;

&lt;p&gt;You'll notice some Ruby on Rails-specific bits in here. Those can be taken out or replaced with what's needed for your framework.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ruby:2.5.8-alpine AS build-env

ARG RAILS_ROOT=/app

RUN apk update \
  &amp;amp;&amp;amp; apk upgrade \
  &amp;amp;&amp;amp; apk add --update --no-cache \
  build-base curl-dev git postgresql-dev \
  yaml-dev zlib-dev nodejs yarn tzdata

ENV RAILS_ENV=production
ENV NODE_ENV=production
ENV BUNDLE_PATH=vendor/bundle
ENV BUNDLE_APP_CONFIG="$RAILS_ROOT/.bundle"
ENV BUNDLE_PATH__SYSTEM=false
ENV RAILS_SERVE_STATIC_FILES=true
ENV RAILS_LOG_TO_STDOUT=true
ENV APP_HOST=dispatch
ENV SMTP_DOMAIN=localhost
ENV SMTP_USERNAME=username
ENV SMTP_PASSWORD=password
ENV SMTP_ADDRESS=xxx
ENV SECRET_KEY_BASE=123
ENV BUNDLER_VERSION 2.0.2

WORKDIR $RAILS_ROOT

COPY Gemfile Gemfile.lock package.json yarn.lock ./
RUN gem install bundler -v 2.0.2
RUN bundle config --global frozen 1 \
  &amp;amp;&amp;amp; bundle install --without test:development:assets -j4 --retry 3 --path=vendor/bundle \
  &amp;amp;&amp;amp; rm -rf vendor/bundle/ruby/2.5.0/cache/*.gem \
  &amp;amp;&amp;amp; find vendor/bundle/ruby/2.5.0/gems -name "*.c" -delete \
  &amp;amp;&amp;amp; find vendor/bundle/ruby/2.5.0/gems -name "*.o" -delete

RUN yarn install --production
COPY . .
RUN bin/rails assets:precompile

RUN rm -rf node_modules tmp/cache app/assets vendor/assets test

FROM ruby:2.5.8-alpine
ARG RAILS_ROOT=/app
ARG RUNTIME_PACKAGES="tzdata postgresql-client nodejs bash file imagemagick"

ENV RAILS_ENV=production
ENV BUNDLE_APP_CONFIG="$RAILS_ROOT/.bundle"
ENV RAILS_SERVE_STATIC_FILES=true
ENV RAILS_LOG_TO_STDOUT=true
ENV BUNDLER_VERSION 2.0.2

WORKDIR $RAILS_ROOT

RUN apk update \
  &amp;amp;&amp;amp; apk upgrade \
  &amp;amp;&amp;amp; apk add --update --no-cache $RUNTIME_PACKAGES
RUN gem install bundler -v 2.0.2

COPY --from=build-env $RAILS_ROOT $RAILS_ROOT
CMD ["bin/rails", "server"]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Our Dockerfile-production





&lt;h2&gt;
  
  
  Docker Hub for hosting our images
&lt;/h2&gt;

&lt;p&gt;We currently use &lt;a href="https://hub.docker.com"&gt;Docker Hub&lt;/a&gt; for hosting our container images. For additional security, we have all our applications in their own Docker Hub repositories. We create additional user accounts per repository/application so we can put their credentials in CircleCI.&lt;/p&gt;

&lt;h2&gt;
  
  
  A pretty decent on-premise deployment mechanism
&lt;/h2&gt;

&lt;p&gt;For us, this is a pretty decent on-premise deployment mechanism. We don't do many on-premise setups anymore as this is truly an exceptional enterprise customer requirement.&lt;/p&gt;

&lt;p&gt;Our main (European) platform runs on Heroku, and we leverage all their nice features to deploy and scale our platform.&lt;/p&gt;

&lt;p&gt;However, having the setup described in this article in place allows us to very easily add any on-premise environments if required by our customers. Since it's based on a container image it is also quite easy to make a scalable version out of this on a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Happy to answer any of your questions about this setup!&lt;/p&gt;

</description>
      <category>deployment</category>
      <category>china</category>
      <category>onpremise</category>
      <category>docker</category>
    </item>
    <item>
      <title>Remove flickering flash messages on Turbolinks</title>
      <dc:creator>Michiel Sikkes</dc:creator>
      <pubDate>Fri, 24 Jul 2020 10:06:21 +0000</pubDate>
      <link>https://dev.to/firmhouse/remove-flickering-flash-messages-on-turbolinks-2odb</link>
      <guid>https://dev.to/firmhouse/remove-flickering-flash-messages-on-turbolinks-2odb</guid>
      <description>&lt;p&gt;If you're using Turbolinks, and flash messages in your Rails app, then this might come in handy.&lt;/p&gt;

&lt;p&gt;You might have seen flickering flash messages on your pages when you re-visit them in your app. This is because Turbolinks caches the full page content in its own internal cache. And this cache includes your flash message if you don't explicitly take it out.&lt;/p&gt;

&lt;p&gt;Thus, when you revisit a page where a flash message was just displayed, you first see that cached page for an instant. Then the flash message disappears because Turbolinks asynchronously loads your actual page content via AJAX.&lt;/p&gt;

&lt;p&gt;Here's a snippet we're using to take any flash messages out of the page before sending it to the Turbolinks cache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;document.addEventListener("turbolinks:before-cache", function() {
   const flash_message_element = document.querySelector(".flash")
   if (flash_message_element) {
     flash_message_element.remove()
   }
 })
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
Remove flash message before sending to Turbolinks cache





</description>
      <category>code</category>
      <category>turbolinks</category>
      <category>javascript</category>
      <category>rails</category>
    </item>
    <item>
      <title>Managing app secrets in Kubernetes at Firmhouse</title>
      <dc:creator>Michiel Sikkes</dc:creator>
      <pubDate>Mon, 08 Jul 2019 19:58:32 +0000</pubDate>
      <link>https://dev.to/firmhouse/managing-app-secrets-in-kubernetes-at-firmhouse-eki</link>
      <guid>https://dev.to/firmhouse/managing-app-secrets-in-kubernetes-at-firmhouse-eki</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NAqSYOSX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.michielsikkes.com/content/images/2019/07/shuto-araki-0Nlp0vqSgBY-unsplash.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NAqSYOSX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.michielsikkes.com/content/images/2019/07/shuto-araki-0Nlp0vqSgBY-unsplash.jpg" alt="Managing app secrets in Kubernetes at Firmhouse"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://www.firmhouse.com"&gt;Firmhouse&lt;/a&gt; we're gradually migrating from a &lt;a href="https://github.com/dokku/dokku"&gt;Dokku&lt;/a&gt;-based infrastructure onto a new &lt;a href="https://kubernetes.io"&gt;Kubernetes&lt;/a&gt;-based infrastructure setup running on &lt;a href="https://www.digitalocean.com/products/kubernetes/"&gt;DigitalOcean Kubernetes&lt;/a&gt;. Out with manual work. In with automation!&lt;/p&gt;

&lt;p&gt;In the new setup we heavily rely on &lt;a href="https://terraform.io"&gt;Terraform&lt;/a&gt; for Infrastructure as Code collaboration and automation. One thing that we're adding to our Terraform repository is automated management of secrets and ENV vars for our app deployments.&lt;/p&gt;

&lt;p&gt;In this article I'll show you how we're managing secrets and ENV vars via Terraform, &lt;a href="https://azure.microsoft.com/services/key-vault/"&gt;Azure Key Vault&lt;/a&gt;, and &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/"&gt;Kubernetes Secrets&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  App Deployment in Kubernetes
&lt;/h1&gt;

&lt;p&gt;First of all, let's show you the thing that this is all about in the end: the application deployment. Our &lt;a href="https://rubyonrails.org"&gt;Ruby on Rails&lt;/a&gt; applications are deployed using a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/"&gt;Kubernetes Deployment&lt;/a&gt;. Here is a snippet of one of our applications from our Terraform repository:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
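&lt;p&gt;The embedded snippet didn't survive syndication, so here is a minimal sketch of what such a Deployment looks like (whether written as raw YAML or as the equivalent Terraform resource). The names dispatch and dispatch-env follow the article; everything else is illustrative:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dispatch
  namespace: dispatch
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dispatch
  template:
    metadata:
      labels:
        app: dispatch
    spec:
      containers:
        - name: app
          image: firmhouse/dispatch:latest
          # Load the whole runtime environment from one Kubernetes Secret
          envFrom:
            - secretRef:
                name: dispatch-env
```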


&lt;p&gt;As you can see, we load the application's environment from a single Kubernetes Secret named &lt;em&gt;dispatch-env&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We also put all Kubernetes objects for a given application in a specific &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/"&gt;Kubernetes Namespace&lt;/a&gt;. Following this practice makes sure that by default we don't let secret stuff from one application deployment leak over to a different application.&lt;/p&gt;

&lt;h1&gt;
  
  
  ENV vars via a single Kubernetes Secret
&lt;/h1&gt;

&lt;p&gt;Simply loading the application's environment from a single Kubernetes secret makes it easy to manage the whole runtime environment via a single Terraform resource. Here is a redacted and shortened example of one of our apps:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
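&lt;p&gt;The embedded example didn't survive syndication; a redacted sketch of such a resource, using the Terraform Kubernetes provider's kubernetes_secret resource (all values here are placeholders), could look like this:&lt;/p&gt;

```hcl
resource "kubernetes_secret" "dispatch_env" {
  metadata {
    name      = "dispatch-env"
    namespace = "dispatch"
  }

  # The whole runtime environment for the app, managed in one place.
  data = {
    RAILS_ENV           = "production"
    RAILS_LOG_TO_STDOUT = "true"
    SMTP_ADDRESS        = "smtp.example.com"
  }
}
```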


&lt;h1&gt;
  
  
  Secret storage: Azure Key Vault
&lt;/h1&gt;

&lt;p&gt;In our new infrastructure setup we store our secrets in an &lt;a href="https://azure.microsoft.com/services/key-vault/"&gt;Azure Key Vault&lt;/a&gt;. Azure Key Vault is comparable to &lt;a href="https://aws.amazon.com/kms/"&gt;AWS Key Management Service&lt;/a&gt; or &lt;a href="https://www.vaultproject.io"&gt;HashiCorp Vault&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We used to either not store secrets at all, or store only certain manually generated secrets in our &lt;a href="https://1password.com"&gt;1Password&lt;/a&gt; business account.&lt;/p&gt;

&lt;p&gt;Because Terraform can talk to Azure Key Vault via &lt;a href="https://www.terraform.io/docs/providers/azurerm/index.html"&gt;its Azure provider&lt;/a&gt;, we can now start managing secrets without ever touching them or making them visible to a human eye. This allows us to read a secret key from Azure Key Vault in our Terraform repository:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
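&lt;p&gt;The embedded example didn't survive syndication; with the azurerm provider, reading a secret looks roughly like this (the vault reference and secret name are placeholders):&lt;/p&gt;

```hcl
# Read an existing secret from Azure Key Vault without exposing its value
data "azurerm_key_vault_secret" "database_password" {
  name         = "dispatch-database-password"
  key_vault_id = azurerm_key_vault.main.id
}
```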


&lt;h1&gt;
  
  
  Setting up the application ENV variables with secrets from Azure Key Vault
&lt;/h1&gt;

&lt;p&gt;So up to this point we have three things prepared.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Applications are deployed using Kubernetes Deployments in their own namespace.&lt;/li&gt;
&lt;li&gt;An application uses one single Kubernetes Secret to load the environment.&lt;/li&gt;
&lt;li&gt;Azure Key Vault is set up and we can load secrets via Terraform resources.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now let's bring it all together.&lt;/p&gt;

&lt;p&gt;In the following redacted sample you'll see how we're pulling in SMTP login credentials and a database password to load those into the application deployment's ENV:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
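&lt;p&gt;The gist itself didn't survive syndication, but as an illustrative sketch, the Key Vault secrets end up as values in the application's single Kubernetes Secret (the data source names are hypothetical):&lt;/p&gt;

```hcl
resource "kubernetes_secret" "dispatch_env" {
  metadata {
    name      = "dispatch-env"
    namespace = "dispatch"
  }

  # Values flow from Azure Key Vault into the app's ENV, untouched by humans.
  data = {
    SMTP_USERNAME     = data.azurerm_key_vault_secret.smtp_username.value
    SMTP_PASSWORD     = data.azurerm_key_vault_secret.smtp_password.value
    DATABASE_PASSWORD = data.azurerm_key_vault_secret.database_password.value
  }
}
```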


&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Why this is great
&lt;/h2&gt;

&lt;p&gt;Such a setup is great for the following reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anyone in the team can take a look at our Infrastructure as Code git repository to see what ENV vars are configured for a given application deployment.&lt;/li&gt;
&lt;li&gt;Secrets are safely stored in Azure Key Vault, including versioning, timestamps, access logs, and access policies for team members and applications.&lt;/li&gt;
&lt;li&gt;We don't have to manually log in to Dokku servers or into Heroku anymore to define the ENV variables for our apps.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What can be improved
&lt;/h2&gt;

&lt;p&gt;One thing that still bugs me is that the actual secret values are stored in a Kubernetes Secret. In practice, this is no worse than storing them in &lt;a href="https://heroku.com"&gt;Heroku&lt;/a&gt; or Dokku. But it would be even better to have a setup where the secret values are not visible to humans with access to the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.michielsikkes.com/managing-and-deploying-app-secrets-at-firmhouse/Jon%20Arild%20T%C3%B8rresdal"&gt;Jon Arild Tørresdal&lt;/a&gt; has a solution in place at Sparebanken Vest. He has written great blog post about this setup. Check it out here: &lt;a href="https://mrdevops.io/introducing-azure-key-vault-to-kubernetes-931f82364354"&gt;Introducing Azure Key Vault to Kubernetes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>azure</category>
      <category>secrets</category>
    </item>
    <item>
      <title>Running multi-server Dokku: problems and options</title>
      <dc:creator>Michiel Sikkes</dc:creator>
      <pubDate>Tue, 07 May 2019 19:31:18 +0000</pubDate>
      <link>https://dev.to/firmhouse/running-multi-server-dokku-problems-and-options-4d93</link>
      <guid>https://dev.to/firmhouse/running-multi-server-dokku-problems-and-options-4d93</guid>
      <description>&lt;p&gt;This blog post is a collection of resources and thoughts about running applications via &lt;a href="http://dokku.viewdocs.io/dokku/"&gt;Dokku&lt;/a&gt; on a High Available (HA) or multi-server setup. Since Dokku doesn't support multi-server out of the box but there are some efforts to make it work, this post is a meant of an overview of options that are out there.&lt;/p&gt;

&lt;p&gt;For me, there are two reasons for running applications in a multi-server setup:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Most important: being able to apply maintenance updates, update a kernel, and reboot a server without application downtime.&lt;/li&gt;
&lt;li&gt;Making sure our applications can scale "horizontally" to multiple servers at increased load.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So which problems do we need to solve to scale Dokku deployments horizontally? To figure that out, let's first define the simplest imaginable multi-server Dokku setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  The simplest imaginable multi-server Dokku setup
&lt;/h2&gt;

&lt;p&gt;The simplest imaginable setup would be: one load balancer (or reverse-proxy), and two "backend" servers that run Dokku and the apps.&lt;/p&gt;

&lt;p&gt;In this case, we won't expect Dokku to deal with routing or load balancing logic. That's what a load balancer is for. We simply want Dokku to play nice with having a brother on another server nearby, and we want to make maintaining that as easy as can be.&lt;/p&gt;

&lt;p&gt;So what problems do we have to tackle to make this happen?&lt;/p&gt;

&lt;h2&gt;
  
  
  What Dokku "lacks" for multi-server
&lt;/h2&gt;

&lt;p&gt;The most important thing would be that all app definitions, configuration options, ENV vars, domain names, SSL certificates, etc. are all stored on-server. This means that when running two Dokku servers, all application configuration would have to be defined and kept in-sync on &lt;strong&gt;both&lt;/strong&gt; servers in the "simplest imaginable multi-server Dokku setup".&lt;/p&gt;

&lt;p&gt;So we have problem one: &lt;strong&gt;keeping configuration in sync&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The second problem is moving up domain mapping and SSL termination to the load balancer. Dokku provides awesome mechanisms for mapping domains to apps, installing SSL certificates, or using LetsEncrypt via the &lt;a href="https://github.com/dokku/dokku-letsencrypt"&gt;dokku-letsencrypt plugin&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, since traffic will be coming in on the load balancer (or reverse-proxy), it needs to take over the role of recognizing the domain names, owning the SSL certificates and forwarding the rest of the traffic via an internal network or internally encrypted self-signed certificate to the backend servers.&lt;/p&gt;

&lt;p&gt;So we have problem two: &lt;strong&gt;moving the routing and certificate part of the infrastructure to the load balancer&lt;/strong&gt;&lt;/p&gt;
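&lt;p&gt;As a rough sketch of what that could look like with a plain nginx reverse-proxy in front (the hostnames, IPs, and certificate paths here are made up), the load balancer terminates SSL and forwards traffic to both Dokku backends:&lt;/p&gt;

```nginx
upstream dokku_backends {
    server 10.0.0.11:443;
    server 10.0.0.12:443;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    # SSL termination moves from Dokku to the load balancer
    ssl_certificate     /etc/ssl/app.example.com.crt;
    ssl_certificate_key /etc/ssl/app.example.com.key;

    location / {
        proxy_set_header Host $host;
        # Re-encrypted traffic to the backends, e.g. via self-signed certs
        proxy_pass https://dokku_backends;
    }
}
```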

&lt;h2&gt;
  
  
  So what are the options?
&lt;/h2&gt;

&lt;p&gt;Here's a list of options I think might be useful to start thinking about multi-server Dokku setups:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;(Not really an option) Setting up the two Dokku servers and keeping them in sync manually, then putting a load balancer from your cloud provider, or a reverse-proxy like &lt;a href="https://traefik.io"&gt;Traefik&lt;/a&gt;, in front to deal with SSL certificates and routing to the two Dokku servers.&lt;/li&gt;
&lt;li&gt;Automating maintenance of your Dokku servers via Ansible. There is a new repository on GitHub where josegonzalez is working on Ansible scripts to maintain Dokku servers. Managing Dokku installs and their application settings via Ansible already makes it way easier to keep two Dokku servers and their app settings "in sync". You will still need to put a load balancer or reverse-proxy in front of the two servers.
One bigger challenge here is managing secrets. If you're configuring your servers via Ansible, you'll need to set up some kind of secrets vault or another method to inject secrets into the Ansible runs when they update your server and app definitions.&lt;/li&gt;
&lt;li&gt;Using different tools for setting up your own PaaS on your own server. This article on ServerFault seems to be updated with recent options, but I haven't taken a look at them personally: &lt;a href="https://serverfault.com/questions/640038/scaling-out-dokku-infrastructure"&gt;https://serverfault.com/questions/640038/scaling-out-dokku-infrastructure&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Wait for &lt;a href="https://www.intercity.io"&gt;Intercity&lt;/a&gt; to support multi-server setups. Intercity is the management panel for Dokku that we've built internally at Firmhouse. It's an open source project that you can run yourself on your own server. We're currently working on a feature to keep application settings in-sync across multiple servers.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>dokku</category>
      <category>cloud</category>
      <category>paas</category>
      <category>highavailability</category>
    </item>
    <item>
      <title>CTO advice: A checklist for using cloud databases securely</title>
      <dc:creator>Michiel Sikkes</dc:creator>
      <pubDate>Mon, 06 May 2019 18:31:32 +0000</pubDate>
      <link>https://dev.to/firmhouse/cto-advice-a-checklist-for-using-cloud-databases-securely-1mia</link>
      <guid>https://dev.to/firmhouse/cto-advice-a-checklist-for-using-cloud-databases-securely-1mia</guid>
      <description>&lt;p&gt;Cloud-provided managed databases are great. Especially when you're CTO of a small company, like me. No sleepless night over (nonexistant) backup procedures, encryption-at-rest, firewalling, critical software updates. Production, Enterprise-grade Redis and PostgreSQL at your fingertips in a matter of minutes.&lt;/p&gt;

&lt;p&gt;Sounds easy, but there are several things you need to consider. The database is not &lt;strong&gt;yours&lt;/strong&gt;, it's &lt;strong&gt;theirs&lt;/strong&gt;, so take good care who you entrust your (customers') data with and what you put in place to make it as secure as can be.&lt;/p&gt;

&lt;p&gt;Here are 8 pieces of advice that are on my "managed database provider" checklist, in no particular order of importance. The following is pretty much a copy-paste from our own risk management assessment and security baseline documents at &lt;a href="https://firmhouse.com"&gt;Firmhouse&lt;/a&gt;. Use at your own will (or peril!)&lt;/p&gt;

&lt;h2&gt;
  
  
  Ensure compliance with legislation and ensure secure standards
&lt;/h2&gt;

&lt;p&gt;Picking just any provider because they offer the easiest and cheapest "one-click install" cloud databases is simply naïve. Always be sure that these providers have an information management certification, like ISO 27001. Also make sure that their physical datacenters are operated under the PCI standard and have SOC 2 Type II reports available.&lt;/p&gt;

&lt;p&gt;On top of that, if you cannot publicly get access to their security procedures or documentation, that's a bad sign.&lt;/p&gt;

&lt;p&gt;Oh, and being able to sign a GDPR-compatible Data Processing Agreement/Addendum is also pretty much a must-have! A hard requirement if you're a European company, and still very important if you don't want to skip the whole European market, which is readily waiting to give you money for your service.&lt;/p&gt;

&lt;p&gt;We use &lt;a href="https://aiven.io"&gt;Aiven.io&lt;/a&gt; as our managed database service, and they have pretty detailed information on both their compliance documentation (&lt;a href="https://aiven.io/security-compliance"&gt;https://aiven.io/security-compliance&lt;/a&gt;) and their security, storage, and backup procedures (&lt;a href="https://help.aiven.io/security/cloud-security-overview"&gt;https://help.aiven.io/security/cloud-security-overview&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Service accounts and role-based authorization
&lt;/h2&gt;

&lt;p&gt;Always create a separate service account/user per system that uses a database. If you only have one application that reads and writes to your database, create a dedicated user for that application on that specific database.&lt;/p&gt;

&lt;p&gt;Have some external or 3rd party reporting tool that just reads information from your database for dashboarding or business intelligence? Go ahead and create a read-only service account for just that tool on the database.&lt;/p&gt;

&lt;p&gt;Bottom line, just like for any other account: &lt;strong&gt;never share an account between multiple users or services&lt;/strong&gt;.&lt;/p&gt;
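
&lt;p&gt;For PostgreSQL, that could look something like this (role names, passwords, and the database name here are just examples):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Read/write account for the application itself
CREATE ROLE myapp LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE myapp_production TO myapp;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO myapp;

-- Read-only account for the reporting/BI tool
CREATE ROLE reporting LOGIN PASSWORD 'change-me-too';
GRANT CONNECT ON DATABASE myapp_production TO reporting;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;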

&lt;h2&gt;
  
  
  Backup snapshots and point-in-time recovery
&lt;/h2&gt;

&lt;p&gt;A database without automated backups cannot call itself a serious managed database service. Make sure your database provider offers automated backup snapshots and that they also support point-in-time recovery. With point-in-time recovery you can quickly get your database back into a state from a few hours ago without losing too much recent data.&lt;/p&gt;

&lt;p&gt;I've never had the need for it luckily, but I'm sure I'm shooting myself in the foot by typing this now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Have off-vendor backups
&lt;/h2&gt;

&lt;p&gt;Yes, backups are great. But what if your vendor goes bankrupt or, due to some lawsuit, is legally required to stop any active services they're providing? For these reasons, always export database snapshots to a 3rd party location and keep them stored there for 14 to 30 days. In the case that your vendor is wiped from the earth for whatever reason, you can at least start a recovery procedure at a different vendor that way.&lt;/p&gt;
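
&lt;p&gt;A minimal sketch of such an off-vendor export, assuming PostgreSQL and an S3 bucket at a different vendor (the bucket name and &lt;code&gt;DATABASE_URL&lt;/code&gt; are placeholders). Run it daily from cron and let a bucket lifecycle rule expire objects after 30 days:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dump, compress, and ship today's snapshot off-vendor
$ pg_dump "$DATABASE_URL" | gzip &gt; "backup-$(date +%F).sql.gz"
$ aws s3 cp "backup-$(date +%F).sql.gz" s3://offsite-db-backups/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;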

&lt;p&gt;Now, let's make sure an attacker can't do anything with the data in case it does get stolen somehow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encryption-at-rest
&lt;/h2&gt;

&lt;p&gt;Making sure live data and backups are encrypted is a must-have. It is a good security measure against the once-in-a-lifetime occurrence that someone &lt;strong&gt;does&lt;/strong&gt; get unpermitted access to a server and rips a hard drive out of it. But it's also just something practical: if you want to sell software to The Enterprise and Corporate, this is simply an important security requirement.&lt;/p&gt;

&lt;p&gt;Encrypting your data "at rest" is pretty important. But encrypting it in-transit is even more important. Head over to the next paragraph.&lt;/p&gt;

&lt;h2&gt;
  
  
  SSLmode enabled by default
&lt;/h2&gt;

&lt;p&gt;All serious databases (like PostgreSQL or Redis) allow you to connect to them over SSL. If your managed provider does not support this: run away fast. Encryption of data-in-transit is simply a must-have to keep people from sniffing around in your clients' data.&lt;/p&gt;

&lt;p&gt;However, don't think you're already done by simply using the SSLmode of your database connection! Nasty things can happen if you don't configure the thing from the next paragraph.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure SSL certificate pinning
&lt;/h2&gt;

&lt;p&gt;Most managed database servers generate a self-signed certificate for securing the connection to the database. Without our applications verifying that the database they are talking to is actually the database they were &lt;strong&gt;meant&lt;/strong&gt; to be talking to, SSL encryption pretty much doesn't matter.&lt;/p&gt;

&lt;p&gt;You need to make sure that your applications will &lt;strong&gt;only connect to the database service with the exact same SSL certificate as they are expecting&lt;/strong&gt;. If your applications allow connections to any database service with an SSL certificate, you can get caught by "the man in the middle". With this technique (and some additional hacks in/around your network) someone can spoof the database service and collect all the connection information it needs. When this happens, the "man in the middle" essentially gets access to your full database, if you haven't applied the next and last security measure.&lt;/p&gt;
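
&lt;p&gt;For PostgreSQL, this boils down to downloading the CA certificate from your provider and connecting with &lt;code&gt;sslmode=verify-ca&lt;/code&gt; (or &lt;code&gt;verify-full&lt;/code&gt;, which additionally checks the hostname). The connection details below are placeholders:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Only connect if the server's certificate chains to the pinned CA file
$ psql "postgres://myapp:secret@db.example.com:5432/myapp_production?sslmode=verify-full&amp;sslrootcert=ca.pem"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;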

&lt;h2&gt;
  
  
  Private Networking/Firewall/IP whitelisting
&lt;/h2&gt;

&lt;p&gt;Last but not least: make sure your managed database service allows some kind of networking constraint. This can either be a private network, where your database lives in the same (virtual) network as your application servers, or a true public firewall with an IP whitelist if the service is accessed over the public internet.&lt;/p&gt;
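
&lt;p&gt;A quick sanity check once you've set this up: from a machine that is &lt;em&gt;not&lt;/em&gt; on the whitelist, a connection attempt to the database port should be refused or time out (hostname and port below are placeholders):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nc -vz -w 5 db.example.com 5432   # should fail if the firewall is doing its job
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;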




&lt;p&gt;That's it for now! Hope you enjoyed this post and that this "checklist" helps you in your day-to-day job. Or that it made you realize that you have a security gap somewhere. No biggie! Just calmly fill the hole and you're good to go for a good night's sleep again.&lt;/p&gt;

</description>
      <category>ctoadvice</category>
      <category>development</category>
      <category>database</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Upgrading an existing Intercity installation to the new installer</title>
      <dc:creator>Michiel Sikkes</dc:creator>
      <pubDate>Mon, 29 Apr 2019 19:06:43 +0000</pubDate>
      <link>https://dev.to/firmhouse/upgrading-an-existing-intercity-installation-to-the-new-installer-1cfk</link>
      <guid>https://dev.to/firmhouse/upgrading-an-existing-intercity-installation-to-the-new-installer-1cfk</guid>
      <description>&lt;p&gt;Two days ago, I posted about the updated and simpler installation method I shipped into Intercity's master branch: &lt;a href="https://dev.to/firmhouse/updated-installing-intercity-to-a-single-command-3ah"&gt;Updated installing Intercity to a single command&lt;/a&gt;. The new command works great on a fresh new Ubuntu LTS. 🕺💃&lt;/p&gt;

&lt;p&gt;However, if you've previously installed Intercity via the &lt;code&gt;intercity-server&lt;/code&gt; command as described in the previous &lt;a href="https://github.com/intercity/intercity-next/blob/ba483a3ddbb63555834963abbff9efc88b04922e/doc/installation.md"&gt;docs/installation.md&lt;/a&gt;, you'll have to perform some additional steps, as the database setup is different. You'll have to migrate your current database into the database of the new installation. You definitely don't want to lose all your precious app configuration settings and secrets!&lt;/p&gt;

&lt;p&gt;What you'll have to do is the following. I'll explain each step in detail below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Export the current Intercity database via &lt;code&gt;pg_dump&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Stop your current Intercity installation.&lt;/li&gt;
&lt;li&gt;Install the new Intercity via the new installation procedure.&lt;/li&gt;
&lt;li&gt;Import the database from your previous installation into the new one.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before you perform the above steps, make sure you have a backup of your server, for example by using the backups/snapshots feature of your VPS provider or cloud.&lt;/p&gt;

&lt;p&gt;You can also copy the directory &lt;code&gt;/var/intercity/shared/postgres_data&lt;/code&gt; to &lt;code&gt;/var/intercity/shared/postgres_data_backup&lt;/code&gt; for example:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo cp -r /var/intercity/shared/postgres_data /var/intercity/shared/postgres_data_backup
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Ok, so here we go:&lt;/p&gt;

&lt;h2&gt;
  
  
  Export the current database
&lt;/h2&gt;

&lt;p&gt;Look up the Docker container ID of your current Intercity installation:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker ps
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should see something like this:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db76d2869d40    local_intercity/app "/sbin/boot"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;"db76d2869d40" is the container ID of your current Intercity installation. Use it in the next few commands to access a shell in that running container, export the database, and bring it back to your host system:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker exec -it db76d2869d40 bash
(container) # su intercity
(container) $ cd /home/intercity
(container) $ pg_dump -U intercity -d intercity -f intercity.sql
(container) $ exit
(container) # cp /home/intercity/intercity.sql /shared
(container) # exit
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You have now successfully exported the database from your current Intercity environment into a file &lt;code&gt;intercity.sql&lt;/code&gt; in &lt;code&gt;/var/intercity/shared&lt;/code&gt; on your host system. We'll use this file in one of the next steps to import into your new Intercity installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stop your current installation
&lt;/h2&gt;

&lt;p&gt;You can safely stop your current Intercity with the following command, using the container ID you fetched in the previous steps:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker stop db76d2869d40
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Install Intercity via the new installation procedure
&lt;/h2&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir intercity
$ wget https://raw.githubusercontent.com/intercity/intercity-next/master/scripts/bootstrap.sh
$ sudo bash bootstrap.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After a few minutes your installation is running and the "Create your first user" screen should be visible on the domain name you've configured. Make sure you actually see that screen before continuing: it confirms your new installation is fully booted up!&lt;/p&gt;

&lt;h2&gt;
  
  
  Import your Intercity database
&lt;/h2&gt;

&lt;p&gt;Now we're going to import the &lt;code&gt;intercity.sql&lt;/code&gt; database into the new Intercity installation.&lt;/p&gt;

&lt;p&gt;Run the following commands to do so:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo -s
# cat /var/intercity/shared/intercity.sql | docker-compose exec db psql -U postgres -d intercity
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should see a lot of output in your terminal. This indicates that the database is being imported. You probably see a lot of errors in this output, telling you relations or indexes already exist. This is fine, as the clean bootstrapped database from the new Intercity install already created those. This command is now only importing the data.&lt;/p&gt;

&lt;p&gt;To check if the import was successful, head over to the URL that your Intercity installation is hosted on. Instead of the "Create your first user" screen, you should now see the regular login screen again. If you see the login screen: import successful!&lt;/p&gt;




&lt;p&gt;Awesome! I hope this procedure worked for you. If not, or if you're getting errors: let me know!&lt;/p&gt;

</description>
      <category>development</category>
      <category>intercity</category>
      <category>installer</category>
      <category>dokku</category>
    </item>
    <item>
      <title>A new single command for installing Intercity </title>
      <dc:creator>Michiel Sikkes</dc:creator>
      <pubDate>Sat, 27 Apr 2019 14:19:25 +0000</pubDate>
      <link>https://dev.to/firmhouse/updated-installing-intercity-to-a-single-command-3ah</link>
      <guid>https://dev.to/firmhouse/updated-installing-intercity-to-a-single-command-3ah</guid>
      <description>&lt;p&gt;It just got extremely easy to install Intercity - the web UI for managing Docker applications, plugins, and application deployments. After the excellent initiative and support by &lt;a href="https://github.com/ariejan"&gt;Ariejan de Vroom&lt;/a&gt; and &lt;a href="https://github.com/jvanbaarsen"&gt;Jeroen van Baarsen&lt;/a&gt; from last November 2018, &lt;a href="https://github.com/intercity/intercity-next/pull/265"&gt;the big PR&lt;/a&gt; is finally, merged in! 🥳&lt;/p&gt;

&lt;p&gt;The new way of installing and updating Intercity is as easy as running the following commands and following the setup procedure:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wget https://raw.githubusercontent.com/intercity/intercity-next/master/scripts/bootstrap.sh
$ sudo bash bootstrap.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Running the &lt;code&gt;bootstrap.sh&lt;/code&gt; script will make sure Docker and Docker Compose are installed on your server. It will also ask you some setup questions and generate the configuration settings file. Finally, it will download and start the required services to run Intercity, like an nginx-proxy with LetsEncrypt, PostgreSQL, and Redis.&lt;/p&gt;

&lt;p&gt;In theory, running &lt;code&gt;bootstrap.sh&lt;/code&gt; will also allow you to upgrade to the latest version of Intercity as we'll be publishing new releases of Intercity to the officially supported image on Docker Hub: &lt;a href="https://hub.docker.com/r/intercity/intercity_next"&gt;https://hub.docker.com/r/intercity/intercity_next&lt;/a&gt;. Every time the image gets updated, you can run &lt;code&gt;bootstrap.sh&lt;/code&gt; to have Docker Compose pull it down and restart your services.&lt;/p&gt;
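
&lt;p&gt;Under the hood this comes down to the usual Docker Compose update cycle, which you could also run by hand from the directory where &lt;code&gt;bootstrap.sh&lt;/code&gt; placed the &lt;code&gt;docker-compose.yml&lt;/code&gt; (the exact path may differ on your install):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pull the latest published image and restart the services on it
$ sudo docker-compose pull
$ sudo docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;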

&lt;p&gt;Upgrading from the old way of installing Intercity is fairly easy, but you do need to migrate the PostgreSQL database files. An automated upgrade path or documentation for this will be ready as soon as I've tested it with my personal, already running Intercity deployments.&lt;/p&gt;

</description>
      <category>development</category>
      <category>intercity</category>
      <category>installer</category>
      <category>dokku</category>
    </item>
    <item>
      <title>Behind the scenes: Adding custom domain mapping to Airstrip via Traefik.io</title>
      <dc:creator>Michiel Sikkes</dc:creator>
      <pubDate>Fri, 16 Nov 2018 09:55:04 +0000</pubDate>
      <link>https://dev.to/firmhouse/behind-the-scenes-adding-custom-domain-mapping-to-airstrip-via-traefikio-dno</link>
      <guid>https://dev.to/firmhouse/behind-the-scenes-adding-custom-domain-mapping-to-airstrip-via-traefikio-dno</guid>
      <description>&lt;p&gt;At &lt;a href="https://firmhouse.com"&gt;Firmhouse&lt;/a&gt; we have a product called &lt;a href="https://firmhouse.com/products/airstrip"&gt;Airstrip&lt;/a&gt;, which lets people quickly build a website to test and launch their new business proposition. In our current product sprint, we're improving a feature that allows people to automatically map a custom domain to the website they build with Airstrip. This post explains how we're doing that.&lt;/p&gt;

&lt;p&gt;We've had a custom domain mapping feature in there for quite a while now, but it wasn't properly automated yet. It worked like this: we would instruct people to get in touch with our support team, tell them to point their DNS records to our IP or CNAME, and then manually configure their domain by adding a site to our &lt;a href="https://github.com/dokku/dokku"&gt;Dokku&lt;/a&gt; instance and re-running &lt;code&gt;$ dokku letsencrypt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This way of working is slow and requires quite some manual work. There's also no "instant gratification" for our users. It's quite the bummer if you have to wait more than a day before your custom domain is fully working!&lt;/p&gt;

&lt;p&gt;We've also hit the limitations of using the &lt;a href="https://github.com/dokku/dokku-letsencrypt"&gt;Dokku Letsencrypt&lt;/a&gt; plugin for this. Mainly: if one of the domains added to our Airstrip app on Dokku has the wrong DNS records set up, then all domains would fail to renew the certificate.&lt;/p&gt;

&lt;p&gt;So what's our new plan?&lt;/p&gt;

&lt;p&gt;After some research and consideration, we're going to add &lt;a href="https://traefik.io"&gt;Traefik&lt;/a&gt; to our infrastructure. Traefik is a "Cloud Native Edge Router" (it says so on their website). Now, I'm not fully sure what that means, but it turns out it's pretty awesome as a thin layer in front of our current infrastructure for custom domain mapping and automatically LetsEncrypt'ing those domains at the same time! Everything we need for now.&lt;/p&gt;

&lt;p&gt;Traefik will be put in front of our current infrastructure that is run on DigitalOcean droplets. The web and app server droplets are provisioned and maintained via Ansible and Intercity. The droplets run Dokku for app deployment and webserver configuration. We'll add another droplet that runs just Traefik.&lt;/p&gt;

&lt;p&gt;We'll configure Traefik with a &lt;em&gt;frontend&lt;/em&gt; per custom domain added by our users. Whenever someone adds their custom domain in the Airstrip user interface, we'll fire an API call to the Traefik REST API endpoint. All of the frontends created this way are then connected to a single &lt;em&gt;backend&lt;/em&gt;: the Airstrip app server we already have in place. Traefik will then take care of requesting a new LetsEncrypt certificate for every custom domain added to Traefik's configuration.&lt;/p&gt;
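
&lt;p&gt;Roughly, such an API call could look like this (assuming Traefik 1.x with the rest provider enabled on the API port; the droplet hostname and frontend names are made up). Note that a &lt;code&gt;PUT&lt;/code&gt; replaces the whole dynamic configuration, so you'd send the full set of frontends and backends every time:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -X PUT http://traefik-droplet.internal:8080/api/providers/rest \
    --data '{
      "backends": {
        "backend1": {
          "servers": { "server1": { "url": "https://airstrip.firmhouse.com:443", "weight": 1 } }
        }
      },
      "frontends": {
        "frontend-customerdomain": {
          "backend": "backend1",
          "passHostHeader": true,
          "routes": { "route0": { "rule": "Host:shop.customer-example.com" } }
        }
      }
    }'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;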

&lt;p&gt;To illustrate, here's a Traefik configuration file to indicate how this would work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Backends
[backends]

  [backends.backend1]

    [backends.backend1.servers]
      [backends.backend1.servers.server1]
        url = "https://airstrip.firmhouse.com:443"
        weight = 1

# Frontends
[frontends]

  [frontends.frontend1]
    backend = "backend1"
    passHostHeader = true

    [frontends.frontend1.routes]
      [frontends.frontend1.routes.route0]
        rule = "Host:traefik.firmhouse.com"

  [frontends.frontend2]
    backend = "backend1"
    passHostHeader = true

    [frontends.frontend2.routes]
      [frontends.frontend2.routes.route0]
        rule = "Host:traefik2.firmhouse.com"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It's pretty awesome that Traefik can take all of this off our hands. We were initially considering writing some custom code to dynamically add/remove domains to Dokku or updating our webserver and LetsEncrypt configurations via other methods. Setting up a small Traefik droplet in front of our infrastructure is only a minor effort compared to that.&lt;/p&gt;

&lt;p&gt;Next up is looking into how we can attach a key/value store to Traefik so that our dynamic configuration doesn't get lost every time we restart the Traefik server or need to upgrade the droplet.&lt;/p&gt;

</description>
      <category>traefik</category>
      <category>routing</category>
      <category>letsencrypt</category>
      <category>dokku</category>
    </item>
  </channel>
</rss>
