<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rui Trigo</title>
    <description>The latest articles on DEV Community by Rui Trigo (@rtrigo).</description>
    <link>https://dev.to/rtrigo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F346353%2F8438da79-dd23-4867-9732-832adb4c925a.jpg</url>
      <title>DEV Community: Rui Trigo</title>
      <link>https://dev.to/rtrigo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rtrigo"/>
    <language>en</language>
    <item>
      <title>Dockerfile Optimization for Fast Builds and Light Images</title>
      <dc:creator>Rui Trigo</dc:creator>
      <pubDate>Fri, 05 Feb 2021 16:59:30 +0000</pubDate>
      <link>https://dev.to/jscrambler/dockerfile-optimization-for-fast-builds-and-light-images-3p34</link>
      <guid>https://dev.to/jscrambler/dockerfile-optimization-for-fast-builds-and-light-images-3p34</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The explanation above was extracted from Docker’s &lt;a href="https://docs.docker.com/engine/reference/builder/"&gt;official docs&lt;/a&gt; and summarizes what a Dockerfile is for. Dockerfiles are important to work with because they are our blueprint, our record of layers added to a Docker base image.&lt;/p&gt;

&lt;p&gt;We will learn how to take advantage of &lt;a href="https://docs.docker.com/engine/reference/builder/#buildkit"&gt;BuildKit&lt;/a&gt; features, a set of enhancements introduced in Docker v18.09. Integrating BuildKit will give us better performance, storage management, and security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;decrease build time;&lt;/li&gt;
&lt;li&gt;reduce image size;&lt;/li&gt;
&lt;li&gt;gain maintainability;&lt;/li&gt;
&lt;li&gt;gain reproducibility;&lt;/li&gt;
&lt;li&gt;understand multi-stage Dockerfiles;&lt;/li&gt;
&lt;li&gt;understand BuildKit features.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;knowledge of Docker concepts&lt;/li&gt;
&lt;li&gt;Docker installed (currently using v19.03)&lt;/li&gt;
&lt;li&gt;a Java app (for this post I used a &lt;a href="https://github.com/jenkins-docs/simple-java-maven-app"&gt;sample Jenkins Maven app&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
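BuildKit ships with Docker v18.09 and later, so it is worth confirming your engine version up front. Below is a minimal sketch of such a check; the `current` value is a hard-coded placeholder standing in for the output of `docker version --format '{{.Server.Version}}'`:

```shell
# BuildKit ships with Docker 18.09+. Compare a version string against that floor.
required="18.09"
current="19.03"   # placeholder; in practice: docker version --format '{{.Server.Version}}'

# sort -V orders version numbers; if the smaller of the pair is "required",
# then "current" is new enough.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "BuildKit available"
else
  echo "Docker too old for BuildKit"
fi
```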

&lt;p&gt;Let's get to it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple Dockerfile example
&lt;/h2&gt;

&lt;p&gt;Below is an example of an unoptimized Dockerfile containing a Java app. This example was taken from &lt;a href="https://youtu.be/JofsaZ3H1qM"&gt;this DockerCon conference talk&lt;/a&gt;. We will walk through several optimizations as we go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; debian&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /app&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;openjdk-11-jdk ssh emacs
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/target/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we may ask ourselves: &lt;strong&gt;how long does it take to build&lt;/strong&gt; at this stage? To answer that, let's create this Dockerfile on our local development machine and tell Docker to build the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# enter your Java app folder&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;simple-java-maven-app-master
&lt;span class="c"&gt;# create a Dockerfile&lt;/span&gt;
vim Dockerfile
&lt;span class="c"&gt;# write content, save and exit&lt;/span&gt;
docker pull debian:latest &lt;span class="c"&gt;# pull the source image&lt;/span&gt;
&lt;span class="nb"&gt;time &lt;/span&gt;docker build &lt;span class="nt"&gt;--no-cache&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; docker-class &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="c"&gt;# overwrite previous layers&lt;/span&gt;
&lt;span class="c"&gt;# notice the build time&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;0,21s user 0,23s system 0% cpu 1:55,17 total&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here’s our answer: our build takes &lt;strong&gt;1m55s&lt;/strong&gt; at this point.&lt;/p&gt;

&lt;p&gt;But what if we just enable BuildKit with no additional changes? Does it make a difference?&lt;/p&gt;

&lt;h3&gt;
  
  
  Enabling BuildKit
&lt;/h3&gt;

&lt;p&gt;BuildKit can be enabled with two methods:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Setting the DOCKER_BUILDKIT=1 environment variable when invoking the Docker build command, such as:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time &lt;/span&gt;&lt;span class="nv"&gt;DOCKER_BUILDKIT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 docker build &lt;span class="nt"&gt;--no-cache&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; docker-class &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Enabling BuildKit by default, by setting the &lt;code&gt;buildkit&lt;/code&gt; feature to &lt;code&gt;true&lt;/code&gt; in &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; and restarting the daemon:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"features"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"buildkit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  BuildKit Initial Impact
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DOCKER_BUILDKIT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 docker build &lt;span class="nt"&gt;--no-cache&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; docker-class &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;0,54s user 0,93s system 1% cpu 1:43,00 total&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;On the same hardware, the build took roughly 12 seconds less than before. This means the build got about 10.43% faster with almost no effort.&lt;/p&gt;
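The arithmetic behind that figure is easy to check. Here is a small sketch using the rounded wall-clock times from the two `time` outputs above (1:55 ≈ 115 s without BuildKit, 1:43 ≈ 103 s with it):

```shell
# Speedup from simply enabling BuildKit, using the rounded measurements above.
before=115   # 1:55 without BuildKit, in seconds
after=103    # 1:43 with BuildKit, in seconds

saved=$((before - after))
pct=$(awk -v b="$before" -v a="$after" 'BEGIN { printf "%.2f", 100 * (b - a) / b }')
echo "saved ${saved}s (~${pct}% faster)"
```

This prints `saved 12s (~10.43% faster)`.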

&lt;p&gt;But now let’s look at some extra steps we can take to improve our results even further.&lt;/p&gt;

&lt;h3&gt;
  
  
  Order from least to most frequently changing
&lt;/h3&gt;

&lt;p&gt;Because order matters for caching, we'll move the &lt;code&gt;COPY&lt;/code&gt; command closer to the end of the Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; debian&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;openjdk-11-jdk ssh emacs
&lt;span class="k"&gt;RUN &lt;/span&gt;COPY &lt;span class="nb"&gt;.&lt;/span&gt; /app
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/target/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Avoid "COPY ."
&lt;/h3&gt;

&lt;p&gt;Opt for more specific COPY arguments to limit cache busts. Only copy what’s needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; debian&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;openjdk-11-jdk ssh vim
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; target/my-app-1.0-SNAPSHOT.jar /app&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Couple apt-get update &amp;amp; install
&lt;/h3&gt;

&lt;p&gt;Combining both commands in a single &lt;code&gt;RUN&lt;/code&gt; prevents installing packages from an outdated cache: they are cached together or not at all.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; debian&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;openjdk-11-jdk ssh vim
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; target/my-app-1.0-SNAPSHOT.jar /app&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Remove unnecessary dependencies
&lt;/h3&gt;

&lt;p&gt;Don’t install debugging and editing tools—you can install them later when you feel you need them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; debian&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    openjdk-11-jdk
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; target/my-app-1.0-SNAPSHOT.jar /app&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Remove package manager cache
&lt;/h3&gt;

&lt;p&gt;Your image does not need this cache data. Take the chance to free some space.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; debian&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    openjdk-11-jdk &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; target/my-app-1.0-SNAPSHOT.jar /app&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Use official images where possible
&lt;/h3&gt;

&lt;p&gt;There are some good reasons to use official images, such as reducing the time spent on maintenance and reducing the size, as well as having an image that is pre-configured for container use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; target/my-app-1.0-SNAPSHOT.jar /app&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Use specific tags
&lt;/h3&gt;

&lt;p&gt;Don’t use &lt;code&gt;latest&lt;/code&gt; as it’s a rolling tag. That’s asking for unpredictable problems.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:8&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; target/my-app-1.0-SNAPSHOT.jar /app&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Look for minimal flavors
&lt;/h3&gt;

&lt;p&gt;You can reduce the base image size. Pick the lightest one that suits your purpose. Below is a short list of &lt;code&gt;openjdk&lt;/code&gt; image flavors.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Repository&lt;/th&gt;
&lt;th&gt;Tag&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;openjdk&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;634MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;openjdk&lt;/td&gt;
&lt;td&gt;8-jre&lt;/td&gt;
&lt;td&gt;443MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;openjdk&lt;/td&gt;
&lt;td&gt;8-jre-slim&lt;/td&gt;
&lt;td&gt;204MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;openjdk&lt;/td&gt;
&lt;td&gt;8-jre-alpine&lt;/td&gt;
&lt;td&gt;83MB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Build from a source in a consistent environment
&lt;/h3&gt;

&lt;p&gt;Maybe you do not need the whole JDK. If you only need the JDK to run Maven, you can use a Maven Docker image as the base for your build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; maven:3.6-jdk-8-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; pom.xml .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src ./src&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;mvn &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-B&lt;/span&gt; package
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Fetch dependencies in a separate step
&lt;/h3&gt;

&lt;p&gt;The dependency-fetching step can be cached in its own layer: as long as &lt;code&gt;pom.xml&lt;/code&gt; does not change, the layer is reused, which speeds up our builds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; maven:3.6-jdk-8-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; pom.xml .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;mvn &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-B&lt;/span&gt; dependency:resolve
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src ./src&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;mvn &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-B&lt;/span&gt; package
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/app/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Multi-stage builds: remove build dependencies
&lt;/h3&gt;

&lt;p&gt;Why use multi-stage builds?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;separate the build from the runtime environment&lt;/li&gt;
&lt;li&gt;DRY&lt;/li&gt;
&lt;li&gt;different details on dev, test, lint specific environments&lt;/li&gt;
&lt;li&gt;delinearizing dependencies (concurrency)&lt;/li&gt;
&lt;li&gt;having platform-specific stages
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; maven:3.6-jdk-8-alpine AS builder&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; pom.xml .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;mvn &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-B&lt;/span&gt; dependency:resolve
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src ./src&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;mvn &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-B&lt;/span&gt; package

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:8-jre-alpine&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/target/my-app-1.0-SNAPSHOT.jar /&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Checkpoint
&lt;/h4&gt;

&lt;p&gt;If you build our application at this point,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time &lt;/span&gt;&lt;span class="nv"&gt;DOCKER_BUILDKIT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 docker build &lt;span class="nt"&gt;--no-cache&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; docker-class &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;0,41s user 0,54s system 2% cpu 35,656 total&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;you'll notice it takes &lt;strong&gt;~35.66 seconds&lt;/strong&gt; to build. That's a pleasant improvement. From here on, we will focus on BuildKit features for further scenarios.&lt;/p&gt;
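To see how far we've come, we can convert both `time` outputs to seconds and compare this checkpoint against the very first unoptimized build. A small sketch; the helper assumes `time`'s `m:ss,cc` format with a comma as the decimal separator, as in the outputs above:

```shell
# Convert a wall-clock figure from `time` (e.g. "1:55,17" or "35,656") to seconds.
to_seconds() {
  echo "$1" | tr ',' '.' | awk -F: '{ if (NF == 2) print $1 * 60 + $2; else print $1 }'
}

first=$(to_seconds "1:55,17")     # the unoptimized build
checkpoint=$(to_seconds "35,656") # the multi-stage build

awk -v b="$first" -v n="$checkpoint" \
  'BEGIN { printf "%.0f%% faster than the first build\n", 100 * (b - n) / b }'
```

This prints `69% faster than the first build`.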

&lt;h3&gt;
  
  
  Multi-stage builds: different image flavors
&lt;/h3&gt;

&lt;p&gt;The Dockerfile below shows separate stages for a Debian-based and an Alpine-based image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; maven:3.6-jdk-8-alpine AS builder&lt;/span&gt;
…
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:8-jre-jessie AS release-jessie&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/target/my-app-1.0-SNAPSHOT.jar /&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:8-jre-alpine AS release-alpine&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/target/my-app-1.0-SNAPSHOT.jar /&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build a specific image on a stage, we can use the &lt;code&gt;--target&lt;/code&gt; argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time &lt;/span&gt;docker build &lt;span class="nt"&gt;--no-cache&lt;/span&gt; &lt;span class="nt"&gt;--target&lt;/span&gt; release-jessie &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Different image flavors (DRY / global ARG)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; flavor=alpine&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; maven:3.6-jdk-8-alpine AS builder&lt;/span&gt;
…
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:8-jre-$flavor AS release&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/target/my-app-1.0-SNAPSHOT.jar /&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;ARG&lt;/code&gt; instruction controls which image is built. In the example above, we set &lt;code&gt;alpine&lt;/code&gt; as the default flavor, but we can pass &lt;code&gt;--build-arg flavor=&amp;lt;flavor&amp;gt;&lt;/code&gt; to the &lt;code&gt;docker build&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time &lt;/span&gt;docker build &lt;span class="nt"&gt;--no-cache&lt;/span&gt; &lt;span class="nt"&gt;--target&lt;/span&gt; release &lt;span class="nt"&gt;--build-arg&lt;/span&gt; &lt;span class="nv"&gt;flavor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jessie &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Concurrency
&lt;/h3&gt;

&lt;p&gt;Concurrency is important when building Docker images because it makes the most of the available CPU threads. In a linear Dockerfile, every instruction is executed in sequence. With multi-stage builds, smaller independent stages can be built in parallel and be ready when the main stage needs them.&lt;/p&gt;

&lt;p&gt;BuildKit even brings another performance bonus: if a stage's output is not used later in the build, the stage is skipped entirely instead of being processed and then discarded. In the stage graph representation, unneeded stages are not even considered.&lt;/p&gt;

&lt;p&gt;Below is an example Dockerfile where a website's assets are built in an &lt;code&gt;assets&lt;/code&gt; stage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; maven:3.6-jdk-8-alpine AS builder&lt;/span&gt;
…
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; tiborvass/whalesay AS assets&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;whalesay “Hello DockerCon!” &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; out/assets.html

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:8-jre-alpine AS release&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/my-app-1.0-SNAPSHOT.jar /&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=assets /out /assets&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is another Dockerfile where C and C++ libraries are separately compiled and take part in the &lt;code&gt;builder&lt;/code&gt; stage later on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; maven:3.6-jdk-8-alpine AS builder-base&lt;/span&gt;
…

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; gcc:8-alpine AS builder-someClib&lt;/span&gt;
…
&lt;span class="k"&gt;RUN &lt;/span&gt;git clone … ./configure &lt;span class="nt"&gt;--prefix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/out &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; make &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; make &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; g++:8-alpine AS builder-some CPPlib&lt;/span&gt;
…
&lt;span class="k"&gt;RUN &lt;/span&gt;git clone … &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; cmake …

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; builder-base AS builder&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder-someClib /out /&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder-someCpplib /out /&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  BuildKit Application Cache
&lt;/h3&gt;

&lt;p&gt;BuildKit has a special feature for package manager caches. Here are the typical cache folder locations of some common package managers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Package manager&lt;/th&gt;
&lt;th&gt;Path&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;apt&lt;/td&gt;
&lt;td&gt;/var/lib/apt/lists&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;go&lt;/td&gt;
&lt;td&gt;~/.cache/go-build&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;go-modules&lt;/td&gt;
&lt;td&gt;$GOPATH/pkg/mod&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;npm&lt;/td&gt;
&lt;td&gt;~/.npm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;pip&lt;/td&gt;
&lt;td&gt;~/.cache/pip&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We can compare this Dockerfile with the one presented in the section &lt;strong&gt;Build from a source in a consistent environment&lt;/strong&gt;. That earlier Dockerfile had no special cache handling; here, we add it with a mount of type cache: &lt;code&gt;--mount=type=cache&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; maven:3.6-jdk-8-alpine AS builder&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target /root/.m2 &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; mvn package &lt;span class="nt"&gt;-DoutputDirectory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; openjdk:8-jre-alpine&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/target/my-app-1.0-SNAPSHOT.jar /&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [“java”, “-jar”, “/my-app-1.0-SNAPSHOT.jar”]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
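The same pattern applies to the other cache paths in the table above. For instance, here is a hedged sketch of an apt cache mount; the base image and package are illustrative, and the syntax directive is needed for mount support on Docker v19.03:

```dockerfile
# syntax=docker/dockerfile:experimental
FROM debian:buster
# Keep apt's package lists in a build-time cache mount instead of an image layer;
# the cache persists across rebuilds of this step without ending up in the image.
RUN --mount=type=cache,target=/var/lib/apt/lists \
    apt-get update && \
    apt-get -y install --no-install-recommends openjdk-11-jdk
```

Because the lists live in the mount rather than a layer, the `rm -rf /var/lib/apt/lists/*` cleanup shown earlier is not needed for that path.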



&lt;h3&gt;
  
  
  BuildKit Secret Volumes
&lt;/h3&gt;

&lt;p&gt;To mix in some security features of BuildKit, let's see how secret type mounts are used and some of the cases they are meant for. The first scenario shows an example where we need to use a secrets file during the build, like &lt;code&gt;~/.aws/credentials&lt;/code&gt;, without it being stored in the final image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; &amp;lt;baseimage&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;…
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;secret,id&lt;span class="o"&gt;=&lt;/span&gt;aws,target&lt;span class="o"&gt;=&lt;/span&gt;/root/.aws/credentials,required &lt;span class="se"&gt;\
&lt;/span&gt;./fetch-assets-from-s3.sh
&lt;span class="k"&gt;RUN &lt;/span&gt;./build-scripts.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build this Dockerfile, pass the &lt;code&gt;--secret&lt;/code&gt; argument like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;--secret&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aws,src&lt;span class="o"&gt;=&lt;/span&gt;~/.aws/credentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second scenario is a method to avoid commands like &lt;code&gt;COPY ./keys/private.pem /root/.ssh/private.pem&lt;/code&gt;, as we don't want our SSH keys to be stored in the Docker image after they are no longer needed. BuildKit has an &lt;code&gt;ssh&lt;/code&gt; mount type to cover that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; alpine&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apk add &lt;span class="nt"&gt;--no-cache&lt;/span&gt; openssh-client
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0700 ~/.ssh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ssh-keyscan github.com &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.ssh/known_hosts
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; REPO_REF=19ba7bcd9976ef8a9bd086187df19ba7bcd997f2&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssh,required git clone git@github.com:org/repo /work &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; /work &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="nv"&gt;$REPO_REF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build this Dockerfile, you need to load your private SSH key into your &lt;code&gt;ssh-agent&lt;/code&gt; and add &lt;code&gt;--ssh=default&lt;/code&gt;, with &lt;code&gt;default&lt;/code&gt; telling BuildKit to use the host's default &lt;code&gt;ssh-agent&lt;/code&gt; socket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;ssh-agent&lt;span class="si"&gt;)&lt;/span&gt;
ssh-add ~/.ssh/id_rsa &lt;span class="c"&gt;# this is the SSH key default location&lt;/span&gt;
docker build &lt;span class="nt"&gt;--ssh&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This concludes our demo on using Docker BuildKit to optimize your Dockerfiles and consequently speed up your images’ build time.&lt;/p&gt;

&lt;p&gt;These speed gains result in much-needed savings in time and computational power, which should not be neglected.&lt;/p&gt;

&lt;p&gt;As Charles Duhigg wrote in The Power of Habit: "&lt;em&gt;small victories are the consistent application of a small advantage&lt;/em&gt;". Build good practices and habits, and you will reap the benefits.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>node</category>
      <category>devops</category>
    </item>
    <item>
      <title>The 6 Aspects You Must Secure On Your MongoDB Instances</title>
      <dc:creator>Rui Trigo</dc:creator>
      <pubDate>Mon, 30 Nov 2020 10:57:50 +0000</pubDate>
      <link>https://dev.to/jscrambler/the-6-aspects-you-must-secure-on-your-mongodb-instances-38d8</link>
      <guid>https://dev.to/jscrambler/the-6-aspects-you-must-secure-on-your-mongodb-instances-38d8</guid>
      <description>&lt;p&gt;After going through the adventure of &lt;a href="https://blog.jscrambler.com/how-to-achieve-mongo-replication-on-docker/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=6-aspects-secure-mongo"&gt;deploying a high-availability MongoDB cluster on Docker&lt;/a&gt; and sharing it publicly, I decided to complement that tutorial with some security concerns and tips.&lt;/p&gt;

&lt;p&gt;In this post, you'll learn a few details about MongoDB deployment vulnerabilities and security mechanisms. And more importantly, how to actually protect your data with these features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;understand database aspects of security.&lt;/li&gt;
&lt;li&gt;find ways to implement authentication, authorization, and accounting (&lt;a href="https://en.wikipedia.org/wiki/AAA_(computer_security)"&gt;AAA&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;learn how to enable MongoDB security features.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Any running MongoDB instance on which you have full access will do. Standalone or replica set, containerized or not. We will also mention some details on MongoDB Docker instances, but we’ll keep Docker-specific security tips for another post.&lt;/p&gt;

&lt;h2&gt;
  
  
  List of Quick Wins
&lt;/h2&gt;

&lt;p&gt;Accessing data in a database happens in several stages. We will look at each of these stages and find ways to harden it, producing a cumulative security effect. Most of the time, each stage gates the next one (e.g. you need network access before you ever reach authentication).&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Network access
&lt;/h3&gt;

&lt;p&gt;MongoDB’s default port is 27017 (TCP). Running the service on a different port might confuse some attackers, but port scanning makes this mere security through obscurity, so you won't get much out of it.&lt;/p&gt;

&lt;p&gt;Assuming we keep the default port for our service, we will open that port on the database server's firewall, but we do not want to expose its traffic to the internet. There are two approaches to solve that, and both can be used simultaneously. The first is limiting traffic to your trusted servers through firewall configuration.&lt;/p&gt;
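As a sketch of the firewall approach (the address is hypothetical, and `ufw` is assumed as the firewall frontend), you could allow the MongoDB port only for a trusted application server:

```shell
# 10.0.0.5 stands in for the trusted app1 server (hypothetical address).
# ufw matches rules in order, so the specific allow must come first.
sudo ufw allow from 10.0.0.5 to any port 27017 proto tcp
# Then deny the MongoDB port for everyone else
sudo ufw deny 27017/tcp
sudo ufw status numbered
```

An equivalent iptables rule set or cloud security group achieves the same effect.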

&lt;p&gt;There’s a MongoDB feature you can use for this: &lt;a href="https://docs.mongodb.com/manual/core/security-mongodb-configuration/"&gt;IP Binding&lt;/a&gt;. You pass the &lt;code&gt;--bind_ip&lt;/code&gt; argument on the MongoDB launch command to enable it. Let's say your &lt;code&gt;app1&lt;/code&gt; server needs to access the MongoDB server for data. To limit traffic for that specific server, you start your server as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongod &lt;span class="nt"&gt;--bind_ip&lt;/span&gt; localhost,app1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using Docker, you can avoid this risk by using a Docker network between your database and your client application.&lt;/p&gt;
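A minimal sketch of that setup, with hypothetical container and image names: because the database never publishes port 27017 to the host, only containers attached to the same user-defined network can reach it.

```shell
# Create an isolated bridge network (the name is arbitrary)
docker network create app-tier
# Start MongoDB attached to that network only; note there is no -p flag,
# so port 27017 is never exposed on the host
docker run -d --name mongodb --network app-tier mongo:4.4
# The client application (my-app-image is hypothetical) reaches the
# database at mongodb:27017 through Docker's internal DNS
docker run -d --name app1 --network app-tier my-app-image
```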

&lt;p&gt;You can add another layer of network security by creating a dedicated network segment for databases, in which you apply an ACL (access list) in the router and/or switch configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. System access
&lt;/h3&gt;

&lt;p&gt;The second A in AAA stands for authorization. Privileged shell access is needed during database installation; once the installation is complete, locking down system root access should be part of the drill.&lt;/p&gt;

&lt;p&gt;Data analysts need to read database data, and applications almost always need to read and write it as well. Since this can be addressed with database authentication (more on this in &lt;strong&gt;4. Authorization&lt;/strong&gt;), restrict root and other shell access to those who genuinely cannot do their jobs without it: database and system administrators.&lt;/p&gt;

&lt;p&gt;Furthermore, running the MongoDB process under a dedicated operating system user account is good practice. Ensure that this account can access the data files and has no unnecessary permissions.&lt;/p&gt;
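As an illustration (paths are distribution-dependent, and official packages typically create this account for you), a dedicated non-login service account could look like this:

```shell
# Create a system account with no home directory and no login shell
sudo useradd --system --no-create-home --shell /usr/sbin/nologin mongodb
# Give it ownership of the data and log directories, and nothing else
sudo chown -R mongodb:mongodb /var/lib/mongodb /var/log/mongodb
# Run the server under that account
sudo -u mongodb mongod --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongod.log --fork
```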

&lt;h3&gt;
  
  
  3. Authentication
&lt;/h3&gt;

&lt;p&gt;Authentication is the first A in AAA. Authentication-wise, MongoDB supports 4 mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SCRAM (default)&lt;/li&gt;
&lt;li&gt;x.509 certificate authentication&lt;/li&gt;
&lt;li&gt;LDAP proxy authentication&lt;/li&gt;
&lt;li&gt;Kerberos authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are using &lt;a href="https://www.mongodb.com/try/download/enterprise"&gt;MongoDB Enterprise Server&lt;/a&gt;, you can also benefit from LDAP and Kerberos support. Integrating your company's identity and access management tool makes the third A in AAA (accounting) easier to implement, as every user gets a dedicated account associated with their records.&lt;/p&gt;

&lt;p&gt;MongoDB has &lt;a href="https://docs.mongodb.com/manual/core/security-scram/#scram"&gt;its own SCRAM implementations&lt;/a&gt;: &lt;strong&gt;SCRAM-SHA-1&lt;/strong&gt; for versions below 4.0 and &lt;strong&gt;SCRAM-SHA-256&lt;/strong&gt; for 4.0 and above. You can think of &lt;a href="https://www.thesslstore.com/blog/difference-sha-1-sha-2-sha-256-hash-algorithms/"&gt;SHA-256 as the successor of SHA-1&lt;/a&gt;, so pick the latter if your database version supports it.&lt;/p&gt;

&lt;p&gt;Replica set &lt;a href="https://docs.mongodb.com/manual/core/security-internal-authentication/#keyfiles"&gt;keyfiles&lt;/a&gt; also use the SCRAM authentication mechanism: the keyfile contains the password shared between the replica set members. Another internal authentication mechanism supported in replica sets is x.509. You can read more on replica sets and how to generate keyfiles in our previous &lt;a href="https://blog.jscrambler.com/how-to-achieve-mongo-replication-on-docker/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=6-aspects-secure-mongo"&gt;blog post&lt;/a&gt;.&lt;/p&gt;
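For reference, a keyfile can be generated with `openssl` and locked down before being distributed to every replica set member (the `/tmp` path below is just an example):

```shell
# Generate 756 random bytes, base64-encoded, as the shared keyfile
openssl rand -base64 756 > /tmp/mongo-keyfile
# Restrict it so only the mongod user can read it
chmod 400 /tmp/mongo-keyfile
# Each member would then start with:
#   mongod --keyFile /tmp/mongo-keyfile --replSet rs0
```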

&lt;p&gt;To use the x.509 certificate authentication mechanism, there are some &lt;a href="https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/#client-x-509-certificate"&gt;requirements regarding certificate attributes&lt;/a&gt;. To enable x.509 authentication, add &lt;code&gt;--tlsMode&lt;/code&gt;, &lt;code&gt;--tlsCertificateKeyFile&lt;/code&gt;, and &lt;code&gt;--tlsCAFile&lt;/code&gt; (if the certificate was signed by a certificate authority). To allow remote connections to the database, also specify &lt;code&gt;--bind_ip&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongod &lt;span class="nt"&gt;--tlsMode&lt;/span&gt; requireTLS &lt;span class="nt"&gt;--tlsCertificateKeyFile&lt;/span&gt; &amp;lt;path to TLS/SSL certificate and key PEM file&amp;gt; &lt;span class="nt"&gt;--tlsCAFile&lt;/span&gt; &amp;lt;path to root CA PEM file&amp;gt; &lt;span class="nt"&gt;--bind_ip&lt;/span&gt; &amp;lt;hostnames&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To generate these certificates, you can use the &lt;code&gt;openssl&lt;/code&gt; library on Linux or the equivalent on other operating systems.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl x509 &lt;span class="nt"&gt;-in&lt;/span&gt; &amp;lt;pathToClientPEM&amp;gt; &lt;span class="nt"&gt;-inform&lt;/span&gt; PEM &lt;span class="nt"&gt;-subject&lt;/span&gt; &lt;span class="nt"&gt;-nameopt&lt;/span&gt; RFC2253
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command returns the subject string as well as the certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;subject&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;CN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;myName,OU&lt;span class="o"&gt;=&lt;/span&gt;myOrgUnit,O&lt;span class="o"&gt;=&lt;/span&gt;myOrg,L&lt;span class="o"&gt;=&lt;/span&gt;myLocality,ST&lt;span class="o"&gt;=&lt;/span&gt;myState,C&lt;span class="o"&gt;=&lt;/span&gt;myCountry
&lt;span class="nt"&gt;-----BEGIN&lt;/span&gt; CERTIFICATE-----
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;span class="nt"&gt;-----END&lt;/span&gt; CERTIFICATE-----
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, add a user on the &lt;strong&gt;$external&lt;/strong&gt; database using the obtained &lt;strong&gt;subject&lt;/strong&gt; string like in the example below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getSiblingDB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$external&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;runCommand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;createUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
         &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;readWrite&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
         &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;userAdminAnyDatabase&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;writeConcern&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;w&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;majority&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;wtimeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, connect to the database with the arguments for TLS, certificates location, CA file location, authentication database, and the authentication mechanism.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongo &lt;span class="nt"&gt;--tls&lt;/span&gt; &lt;span class="nt"&gt;--tlsCertificateKeyFile&lt;/span&gt; &amp;lt;path to client PEM file&amp;gt; &lt;span class="nt"&gt;--tlsCAFile&lt;/span&gt; &amp;lt;path to root CA PEM file&amp;gt;  &lt;span class="nt"&gt;--authenticationDatabase&lt;/span&gt; &lt;span class="s1"&gt;'$external'&lt;/span&gt; &lt;span class="nt"&gt;--authenticationMechanism&lt;/span&gt; MONGODB-X509
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You have now successfully connected to your database using the x.509 authentication mechanism.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Authorization
&lt;/h3&gt;

&lt;p&gt;For non-testing environments (such as production), it is strongly recommended not to leave &lt;a href="https://docs.mongodb.com/manual/tutorial/enable-authentication/"&gt;Access Control&lt;/a&gt; disabled, as any client that successfully connects to the database is granted all privileges. To enable authentication, follow the procedure below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# start MongoDB without access control&lt;/span&gt;
mongod
&lt;span class="c"&gt;# connect to the instance&lt;/span&gt;
mongo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// create the user administrator&lt;/span&gt;
&lt;span class="nx"&gt;use&lt;/span&gt; &lt;span class="nx"&gt;admin&lt;/span&gt;
&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;myUserAdmin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;pwd&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;passwordPrompt&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;// or cleartext password&lt;/span&gt;
    &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;userAdminAnyDatabase&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;readWriteAnyDatabase&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// shutdown mongod instance&lt;/span&gt;
&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;adminCommand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;shutdown&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# start MongoDB with access control&lt;/span&gt;
mongod &lt;span class="nt"&gt;--auth&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're using &lt;a href="https://hub.docker.com/_/mongo"&gt;MongoDB on Docker&lt;/a&gt;, you can create an administrator through &lt;code&gt;MONGO_INITDB_ROOT_USERNAME&lt;/code&gt; and &lt;code&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/code&gt; environment variables (&lt;code&gt;-e&lt;/code&gt; argument). Like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MONGO_INITDB_ROOT_USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;username&amp;gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MONGO_INITDB_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;password&amp;gt; mongo:4.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Do not sacrifice security for human convenience. Make sure all &lt;a href="https://www.webroot.com/us/en/resources/tips-articles/how-do-i-create-a-strong-password"&gt;passwords are strong&lt;/a&gt;, fit your company's password policy, and are stored securely.&lt;/p&gt;

&lt;p&gt;MongoDB has a set of &lt;a href="https://docs.mongodb.com/manual/reference/built-in-roles/"&gt;built-in roles&lt;/a&gt; and allows us to &lt;a href="https://docs.mongodb.com/manual/tutorial/manage-users-and-roles/#create-a-user-defined-role"&gt;create new ones&lt;/a&gt;. Use roles to grant privileges while applying the &lt;a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege"&gt;principle of least privilege&lt;/a&gt; to user accounts, reducing the potential for account abuse.&lt;/p&gt;
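To make least privilege concrete, here is a hedged sketch (role, user, and database names are made up) that creates a read-only reporting role and a user restricted to it:

```shell
# Assumes a running instance and admin credentials; all names are hypothetical
mongo admin -u myUserAdmin -p --eval '
  db.createRole({
    role: "salesReader",
    privileges: [
      { resource: { db: "sales", collection: "" }, actions: [ "find" ] }
    ],
    roles: []
  });
  db.createUser({
    user: "reporting",
    pwd: passwordPrompt(),
    roles: [ { role: "salesReader", db: "admin" } ]
  });
'
```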

&lt;h3&gt;
  
  
  5. Encrypted connections
&lt;/h3&gt;

&lt;p&gt;Let's now see how to configure encrypted connections to protect you from &lt;a href="https://en.wikipedia.org/wiki/Sniffing_attack"&gt;sniffing attacks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you think about internet browsers, you will notice how they keep pushing users toward sites that serve HTTP over TLS, also known as &lt;a href="https://en.wikipedia.org/wiki/HTTPS"&gt;HTTPS&lt;/a&gt;. That push exists for a reason: protecting sensitive data, for both the client and the server. TLS protects this data in both directions of the client-server communication.&lt;/p&gt;

&lt;p&gt;We explained how to use TLS certificates in &lt;strong&gt;3. Authentication&lt;/strong&gt;; now we will see how to encrypt the communication between the database server and a client application through TLS configuration in the application’s MongoDB driver.&lt;/p&gt;

&lt;p&gt;First, to configure the MongoDB server to require our TLS certificate, add the &lt;code&gt;--tlsMode&lt;/code&gt; and &lt;code&gt;--tlsCertificateKeyFile&lt;/code&gt; arguments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongod &lt;span class="nt"&gt;--tlsMode&lt;/span&gt; requireTLS &lt;span class="nt"&gt;--tlsCertificateKeyFile&lt;/span&gt; &amp;lt;pem&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To test the connection from the mongo shell, type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongo &lt;span class="nt"&gt;--tls&lt;/span&gt; &lt;span class="nt"&gt;--host&lt;/span&gt; &amp;lt;hostname.example.com&amp;gt; &lt;span class="nt"&gt;--tlsCertificateKeyFile&lt;/span&gt; &amp;lt;certificate_key_location&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, add TLS options to the database connection in your application code. Here is a snippet of a Node.js application using MongoDB’s official driver package. You can find more of these encryption options in the &lt;a href="http://mongodb.github.io/node-mongodb-native/3.1/tutorials/connect/ssl/"&gt;driver documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;MongoClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Read the certificate authority&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ca&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;__dirname&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/ssl/ca.pem&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)];&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb://localhost:27017?ssl=true&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;sslValidate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;sslCA&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;ca&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Connect validating the returned certificates from the server&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Encryption at rest
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.mongodb.com/try/download/enterprise"&gt;MongoDB Enterprise Server&lt;/a&gt; comes with an &lt;a href="https://docs.mongodb.com/manual/core/security-encryption-at-rest/"&gt;Encryption at Rest&lt;/a&gt; feature. Through a master and database keys system, this allows us to store our data in an encrypted state by configuring the field as encrypted on rest. You can learn more about the supported standards and enciphering/deciphering keys on the &lt;a href="https://docs.mongodb.com/manual/core/security-encryption-at-rest/#encryption-process"&gt;MongoDB documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the other hand, if you stick with &lt;a href="https://www.mongodb.com/try/download/community"&gt;MongoDB Community&lt;/a&gt;, version 4.2 introduced support for &lt;a href="https://docs.mongodb.com/manual/core/security-client-side-encryption/"&gt;Client-Side Field Level Encryption&lt;/a&gt;. Here’s how it works: you generate the necessary keys and load them into your &lt;a href="https://docs.mongodb.com/drivers/"&gt;database driver&lt;/a&gt; (e.g. the Node.js MongoDB driver). You can then encrypt your data before storing it in the database and decrypt it for your application to read.&lt;/p&gt;
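If you use the local KMS provider, the driver expects a 96-byte master key in `kmsProviders`; one way to generate it (the output path is just an example) is:

```shell
# 96 random bytes, base64-encoded: a local master key for
# Client-Side Field Level Encryption
openssl rand -base64 96 > /tmp/master-key.txt
```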

&lt;p&gt;Below, you can find a JavaScript code snippet showing data encryption and decryption happening on MongoDB’s Node.js driver with the help of the npm package &lt;a href="https://www.npmjs.com/package/mongodb-client-encryption"&gt;mongodb-client-encryption&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;unencryptedClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;useUnifiedTopology&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;unencryptedClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;clientEncryption&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;ClientEncryption&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;unencryptedClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;kmsProviders&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;keyVaultNamespace&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;encryptMyData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;keyId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;clientEncryption&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createDataKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;local&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;keyId&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;keyId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;clientEncryption&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;encrypt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;keyId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;algorithm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;decryptMyValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;clientEncryption&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;decrypt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;encryptMyData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sensitive_data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;unencryptedClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;coll&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;insertOne&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;mKey&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;findOne&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;mKey&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;encrypted:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;decrypteddata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;decryptMyValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;decrypted:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;decrypteddata&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;unencryptedClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While this post covers some of the most important quick wins for securing your MongoDB instances, there is much more to MongoDB security.&lt;/p&gt;

&lt;p&gt;Upgrading database and driver versions frequently, connecting a monitoring tool, and keeping track of database access and configuration are also good ideas to increase security.&lt;/p&gt;

&lt;p&gt;Nevertheless, even if a system were theoretically entirely secure, it would still be prone to human mistakes. Make sure the people working with you are conscious of the importance of keeping data secure - properly securing a system is always contingent on all users taking security seriously.&lt;/p&gt;

&lt;p&gt;Security is everyone's job. As in a tandem kayak, it only works if everyone is paddling together in the same direction, with all efforts contributing to the same purpose.&lt;/p&gt;

&lt;p&gt;Lastly, although this post has focused on database security, it’s also advisable that you protect the JavaScript source code of your web and mobile apps. See our tutorials on protecting &lt;a href="https://blog.jscrambler.com/protecting-your-react-js-source-code-with-jscrambler/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=6-aspects-secure-mongo"&gt;React&lt;/a&gt;, &lt;a href="https://blog.jscrambler.com/how-to-protect-angular-code-against-theft-and-reverse-engineering/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=6-aspects-secure-mongo"&gt;Angular&lt;/a&gt;, &lt;a href="https://blog.jscrambler.com/how-to-protect-your-vue-js-application-with-jscrambler/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=6-aspects-secure-mongo"&gt;Vue&lt;/a&gt;, &lt;a href="https://blog.jscrambler.com/how-to-protect-react-native-apps-with-jscrambler/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=6-aspects-secure-mongo"&gt;React Native&lt;/a&gt;, &lt;a href="https://blog.jscrambler.com/protecting-hybrid-mobile-apps-with-ionic-and-jscrambler/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=6-aspects-secure-mongo"&gt;Ionic&lt;/a&gt;, and &lt;a href="https://blog.jscrambler.com/protecting-your-nativescript-source-code-with-jscrambler/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=6-aspects-secure-mongo"&gt;NativeScript&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>node</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Connecting Sequelize To a PostgreSQL Cluster</title>
      <dc:creator>Rui Trigo</dc:creator>
      <pubDate>Tue, 08 Sep 2020 11:44:41 +0000</pubDate>
      <link>https://dev.to/jscrambler/connecting-sequelize-to-a-postgresql-cluster-4l3p</link>
      <guid>https://dev.to/jscrambler/connecting-sequelize-to-a-postgresql-cluster-4l3p</guid>
      <description>&lt;h2&gt;
  
  
  Prologue
&lt;/h2&gt;

&lt;p&gt;In a &lt;a href="https://blog.jscrambler.com/how-to-automate-postgresql-and-repmgr-on-vagrant/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=connecting-sequelize"&gt;previous post&lt;/a&gt;, I showed how to automate a PostgreSQL fault-tolerant cluster with Vagrant and Ansible.&lt;/p&gt;

&lt;p&gt;This kind of setup makes our database cluster resilient to server failure and keeps the data available with no need for human interaction. But what about the apps using this database? Are they fault-tolerant too?&lt;/p&gt;

&lt;p&gt;ORMs like Sequelize have &lt;a href="https://sequelize.org/master/manual/read-replication.html"&gt;read replication&lt;/a&gt; features, which allow you to define your primary and standby nodes in the database connection. But what happens if your primary node, which is responsible for write operations, goes offline and your app needs to keep saving data to your database?&lt;/p&gt;

&lt;p&gt;One way to solve this is by adding an extra load balancing layer to the system, using third-party PostgreSQL tools like &lt;a href="http://www.pgbouncer.org/"&gt;pgbouncer&lt;/a&gt; or &lt;a href="https://wiki.postgresql.org/wiki/Pgpool-II"&gt;Pgpool-II&lt;/a&gt;, or even a properly configured &lt;a href="http://www.haproxy.org/"&gt;HAProxy&lt;/a&gt; instance. Besides the complexity this method brings, you could also be introducing an undesired &lt;a href="https://en.wikipedia.org/wiki/Single_point_of_failure"&gt;single point of failure&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another way is to use a floating/virtual IP address assigned to the current primary database node, so the application always knows which node to connect to for write operations, even if another node takes over the primary role.&lt;/p&gt;

&lt;p&gt;We will be using Digital Ocean for server creation and floating IP assignment, but the strategy also works with other cloud providers that support floating IPs.&lt;/p&gt;
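&lt;p&gt;&lt;em&gt;The idea can be sketched in a few lines of JavaScript. This is an illustration only - the addresses below are documentation placeholders, not real droplet IPs: the application only ever references the floating IP for writes, while the provider maps that address to whichever droplet currently holds the primary role.&lt;/em&gt;&lt;/p&gt;

```javascript
// Illustration only: all addresses are made-up placeholders.
const FLOATING_IP = '203.0.113.50'; // stable address that follows the primary role
const PG1_IP = '203.0.113.10';      // current physical primary (may fail)
const PG2_IP = '203.0.113.11';      // standby (may be promoted)

// The app's write target never changes, even when the primary does;
// the cloud provider remaps FLOATING_IP to the newly promoted droplet.
const connection = {
  write: { host: FLOATING_IP, port: 5432 },
  read: [{ host: PG1_IP, port: 5432 }, { host: PG2_IP, port: 5432 }],
};

console.log(connection.write.host); // logs the floating IP, whichever droplet is primary
```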

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;connect a &lt;strong&gt;NodeJS&lt;/strong&gt; application with &lt;strong&gt;Sequelize&lt;/strong&gt; to a &lt;strong&gt;PostgreSQL&lt;/strong&gt; cluster so that it writes to the primary node and reads from standby nodes;&lt;/li&gt;
&lt;li&gt;create and assign a &lt;strong&gt;Digital Ocean Floating IP&lt;/strong&gt; (aka FLIP) to our current primary database node;&lt;/li&gt;
&lt;li&gt;make &lt;strong&gt;repmgr&lt;/strong&gt; interact with the &lt;strong&gt;Digital Ocean CLI&lt;/strong&gt; to reassign the FLIP to the new primary node on promotions;&lt;/li&gt;
&lt;li&gt;keep this switchover transparent to the &lt;strong&gt;NodeJS&lt;/strong&gt; application, so the whole system works without human help.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;a &lt;strong&gt;Digital Ocean&lt;/strong&gt; account and API token (&lt;a href="https://m.do.co/c/00ac35d4c268"&gt;create an account using my referral to get free credits&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;PostgreSQL&lt;/strong&gt; cluster with &lt;strong&gt;repmgr&lt;/strong&gt; on &lt;strong&gt;Digital Ocean&lt;/strong&gt; (you can grab the &lt;strong&gt;Ansible&lt;/strong&gt; playbook in this &lt;a href="https://blog.jscrambler.com/how-to-automate-postgresql-and-repmgr-on-vagrant/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=connecting-sequelize"&gt;tutorial&lt;/a&gt; to configure it or just use a cluster with streaming replication and simulate failure + manual promotion);&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org/en/download/"&gt;NodeJS&lt;/a&gt; and &lt;a href="https://www.npmjs.com/"&gt;npm&lt;/a&gt; installed (I'm using &lt;strong&gt;NodeJS&lt;/strong&gt; v12 with &lt;strong&gt;npm&lt;/strong&gt; v6);&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;PostgreSQL&lt;/strong&gt; user with password authentication which accepts remote connections from your application host (I'll be using &lt;code&gt;postgres&lt;/code&gt;:&lt;code&gt;123456&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Set up your cluster
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create your droplets
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D5KugXZ_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/08/jscrambler-blog-connecting-sequelize-to-postgresql-cluster-create-droplet.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D5KugXZ_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/08/jscrambler-blog-connecting-sequelize-to-postgresql-cluster-create-droplet.png" alt="jscrambler-blog-connecting-sequelize-to-postgresql-cluster-create-droplet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create 3 droplets, preferably with the Ubuntu 20.04 operating system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pg1 (primary)&lt;/li&gt;
&lt;li&gt;pg2 (standby)&lt;/li&gt;
&lt;li&gt;pg3 (witness)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To make configurations run smoother, add your public SSH key when creating the droplets. You can also use the key pair I provided on &lt;a href="https://github.com/JscramblerBlog/postgres-repmgr-vagrant/tree/master/provisioning/roles/ssh/files/keys"&gt;GitHub&lt;/a&gt; for testing purposes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you'd like to use only 2 droplets, you can skip the third node, as it will only act as a PostgreSQL witness.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Note: If you use an SSH private key which is shared publicly on the internet, your cluster can get hacked.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G05kGxMr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/08/jscrambler-blog-connecting-sequelize-to-postgresql-cluster-create-3-droplets.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G05kGxMr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/08/jscrambler-blog-connecting-sequelize-to-postgresql-cluster-create-3-droplets.png" alt="jscrambler-blog-connecting-sequelize-to-postgresql-cluster-create-3-droplets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Assign a floating IP to your primary node
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oBVaIsRp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/08/jscrambler-blog-connecting-sequelize-to-postgresql-cluster-assign-floating-ip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oBVaIsRp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/08/jscrambler-blog-connecting-sequelize-to-postgresql-cluster-assign-floating-ip.png" alt="jscrambler-blog-connecting-sequelize-to-postgresql-cluster-assign-floating-ip"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a floating IP address and assign it to your primary node (pg1).&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure PostgreSQL with repmgr
&lt;/h3&gt;

&lt;p&gt;As previously stated, you can use the &lt;a href="https://blog.jscrambler.com/how-to-automate-postgresql-and-repmgr-on-vagrant/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=connecting-sequelize"&gt;Ansible playbook from the last post&lt;/a&gt; to speed up the configuration. Download it from &lt;a href="https://github.com/JscramblerBlog/postgres-repmgr-vagrant"&gt;GitHub&lt;/a&gt; and insert your gateway and droplets' IPv4 addresses into &lt;code&gt;group_vars/all.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;client_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;your_gateway_public_ipv4&amp;gt;"&lt;/span&gt;
&lt;span class="na"&gt;node1_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;droplet_pg1_ipv4&amp;gt;"&lt;/span&gt;
&lt;span class="na"&gt;node2_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;droplet_pg2_ipv4&amp;gt;"&lt;/span&gt;
&lt;span class="na"&gt;node3_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;droplet_pg3_ipv4&amp;gt;"&lt;/span&gt;
&lt;span class="na"&gt;pg_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;12"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: I am assuming you will run your app locally on your computer and that it will connect to your droplets through your network gateway.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you don't know your current public gateway address, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;curl ifconfig.io &lt;span class="nt"&gt;-4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create an &lt;strong&gt;Ansible&lt;/strong&gt; inventory file and add the playbook &lt;code&gt;host_vars&lt;/code&gt; for each host. I named mine &lt;code&gt;digitalocean&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[all]
pg1 ansible_host=&amp;lt;droplet_pg1_ipv4&amp;gt; connection_host="&amp;lt;droplet_pg1_ipv4&amp;gt;" node_id=1 role="primary"
pg2 ansible_host=&amp;lt;droplet_pg2_ipv4&amp;gt; connection_host="&amp;lt;droplet_pg2_ipv4&amp;gt;" node_id=2 role="standby"
pg3 ansible_host=&amp;lt;droplet_pg3_ipv4&amp;gt; connection_host="&amp;lt;droplet_pg3_ipv4&amp;gt;" node_id=3 role="witness"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Add the droplets to the list of SSH known hosts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh root@&amp;lt;droplet_pg1_ipv4&amp;gt; &lt;span class="nb"&gt;exit
&lt;/span&gt;ssh root@&amp;lt;droplet_pg2_ipv4&amp;gt; &lt;span class="nb"&gt;exit
&lt;/span&gt;ssh root@&amp;lt;droplet_pg3_ipv4&amp;gt; &lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, run the playbook with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;ansible-playbook playbook.yaml &lt;span class="nt"&gt;-i&lt;/span&gt; digitalocean &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"ansible_ssh_user=root"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;the
&lt;code&gt;-i&lt;/code&gt; argument tells &lt;strong&gt;Ansible&lt;/strong&gt; to run on the hosts specified in our inventory file;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-e "ansible_ssh_user=root"&lt;/code&gt; passes an extra variable telling &lt;strong&gt;Ansible&lt;/strong&gt; to connect as the &lt;code&gt;root&lt;/code&gt; user.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  NodeJS application
&lt;/h3&gt;

&lt;p&gt;Let's write a simple app that manipulates a &lt;code&gt;countries&lt;/code&gt; table. Keep in mind &lt;a href="https://sequelize.org/master/manual/model-basics.html"&gt;pluralization in Sequelize&lt;/a&gt; for JavaScript objects and default database table names. Set it up with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;sequelize-postgresql-cluster
&lt;span class="nb"&gt;cd &lt;/span&gt;sequelize-postgresql-cluster
npm init &lt;span class="nt"&gt;-y&lt;/span&gt;
npm &lt;span class="nb"&gt;install &lt;/span&gt;pg sequelize
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, edit the &lt;code&gt;index.js&lt;/code&gt; with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Sequelize&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sequelize&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;primary_ipv4&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;droplet_pg1_ipv4&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;standby_ipv4&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;droplet_pg2_ipv4&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="c1"&gt;// new Sequelize(database, username, password)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sequelize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Sequelize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;123456&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;dialect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;replication&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;read&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;standby_ipv4&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;primary_ipv4&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="c1"&gt;// witness node has no data, only metadata&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;write&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;primary_ipv4&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;idle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30000&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// connect to DB&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Checking database connection...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sequelize&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;authenticate&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Connection has been established successfully.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Unable to connect to the database:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The code above creates a &lt;strong&gt;Sequelize&lt;/strong&gt; connection object named &lt;code&gt;sequelize&lt;/code&gt; and configures our servers’ addresses in it. The &lt;code&gt;connect&lt;/code&gt; function tests the connection to the database. Make sure your app can connect to it correctly before proceeding.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// model&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Country&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;sequelize&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;define&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Country&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;country_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Sequelize&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;INTEGER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;autoIncrement&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;primaryKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Sequelize&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;is_eu_member&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Sequelize&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;BOOLEAN&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;timestamps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;create_table&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sequelize&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sync&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;force&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;create table countries&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// insert country&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;insertCountry&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Country&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Portugal&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;is_eu_member&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pt created - country_id: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;country_id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// select all countries&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;findAllCountries&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;countries&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Country&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;findAll&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;All countries:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;countries&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;create_table&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;insertCountry&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;findAllCountries&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sequelize&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Country&lt;/code&gt; is our &lt;strong&gt;Sequelize&lt;/strong&gt; model, a JavaScript object that represents the database table.&lt;br&gt;
The &lt;code&gt;create_table()&lt;/code&gt;, &lt;code&gt;insertCountry()&lt;/code&gt;, and &lt;code&gt;findAllCountries()&lt;/code&gt; functions are self-explanatory. They are called from the &lt;code&gt;run()&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;Run your app with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;node index.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will create the &lt;code&gt;countries&lt;/code&gt; table on the &lt;strong&gt;PostgreSQL&lt;/strong&gt; database, insert a row in it, and read table data. Because of streaming replication, this data will automatically be replicated into the standby node.&lt;/p&gt;

&lt;h3&gt;
  
  
  (Optional) Test a primary node failure
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;If you perform this step, you'll need to revert the PostgreSQL promotion and go back to the cluster’s initial state. There are instructions for this in the &lt;a href="https://blog.jscrambler.com/how-to-automate-postgresql-and-repmgr-on-vagrant/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=connecting-sequelize"&gt;mentioned tutorial&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Turn off your &lt;code&gt;pg1&lt;/code&gt; droplet (this can be done through Digital Ocean’s interface). Thanks to the &lt;code&gt;repmgrd&lt;/code&gt; configuration, the standby node (&lt;code&gt;pg2&lt;/code&gt;) promotes itself to the primary role, so your database cluster keeps working. After this promotion, your app can still read data, but it can no longer write. Proceed by reverting the cluster to its previous state, with &lt;code&gt;pg1&lt;/code&gt; as the primary node.&lt;/p&gt;
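&lt;p&gt;&lt;em&gt;During such a promotion window, writes fail while reads keep working. As an illustration only - this hypothetical helper is not part of the tutorial's code - an application could wrap its write calls in a small retry loop to ride out a brief failover:&lt;/em&gt;&lt;/p&gt;

```javascript
// Hypothetical helper (not part of the tutorial): retry an async write a few
// times before giving up, so a brief failover window doesn't drop the write.
async function writeWithRetry(writeFn, attempts = 3, delayMs = 100) {
  let lastError;
  while (attempts-- > 0) {
    try {
      return await writeFn();
    } catch (err) {
      lastError = err;
      // give the cluster a moment to finish failing over before retrying
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

&lt;p&gt;&lt;em&gt;With Sequelize, &lt;code&gt;writeFn&lt;/code&gt; could be a call such as the &lt;code&gt;insertCountry()&lt;/code&gt; function defined earlier.&lt;/em&gt;&lt;/p&gt;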

&lt;h2&gt;
  
  
  Use a floating IP
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Add the floating IP address to your app database connection object
&lt;/h3&gt;

&lt;p&gt;To take advantage of the floating IP, store it in a variable and edit the &lt;code&gt;write&lt;/code&gt; property of the &lt;code&gt;sequelize&lt;/code&gt; connection object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// insert this line&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;floating_ipv4&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your_floating_ip_goes_here&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;(...)&lt;/span&gt;
&lt;span class="c1"&gt;// change primary_ipv4 to floating_ipv4&lt;/span&gt;
&lt;span class="nx"&gt;write&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;floating_ipv4&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Digital Ocean CLI configuration
&lt;/h2&gt;

&lt;p&gt;As we will configure the &lt;code&gt;pg2&lt;/code&gt; node to interact with Digital Ocean and reassign the floating IP to its IPv4 address, we must configure &lt;code&gt;doctl&lt;/code&gt; on this server. Access &lt;code&gt;pg2&lt;/code&gt; and do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# as superuser&lt;/span&gt;
curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://github.com/digitalocean/doctl/releases/download/v1.46.0/doctl-1.46.0-linux-amd64.tar.gz | &lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzv&lt;/span&gt;
&lt;span class="nb"&gt;sudo mv&lt;/span&gt; ~/doctl /usr/local/bin
&lt;span class="c"&gt;# as postgres&lt;/span&gt;
doctl auth init
&lt;span class="c"&gt;# insert Digital Ocean API token&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: If using this in production, secure the API token used in Digital Ocean’s CLI configuration and be careful with the reassignment script’s permissions.&lt;/em&gt;&lt;/p&gt;
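
&lt;p&gt;&lt;em&gt;One way to keep the token out of the script itself, sketched below: &lt;code&gt;doctl&lt;/code&gt; also honors the &lt;code&gt;DIGITALOCEAN_ACCESS_TOKEN&lt;/code&gt; environment variable, so it can be loaded from a root-only file. The file path and token value here are placeholders:&lt;/em&gt;&lt;/p&gt;

```shell
# Load the Digital Ocean API token from a root-only file instead of
# hard-coding it; the demo uses a temp file standing in for e.g. /root/.do_token.
TOKEN_FILE=$(mktemp)
printf 'your_api_token_here\n' > "$TOKEN_FILE"
chmod 600 "$TOKEN_FILE"
DIGITALOCEAN_ACCESS_TOKEN=$(cat "$TOKEN_FILE")
export DIGITALOCEAN_ACCESS_TOKEN
# doctl can now authenticate without an interactive prompt, e.g.:
#   doctl auth init --access-token "$DIGITALOCEAN_ACCESS_TOKEN"
rm -f "$TOKEN_FILE"
```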

&lt;p&gt;Place the script below at &lt;code&gt;/var/lib/postgresql/promote-standby.sh&lt;/code&gt; with execute permissions. It promotes the standby node to primary, validates the &lt;code&gt;doctl&lt;/code&gt; project configuration, and reassigns the floating IP to &lt;code&gt;pg2&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="c"&gt;# assign digital ocean floating ip address to postgres cluster promoted standby node&lt;/span&gt;
&lt;span class="c"&gt;# this script is expected to run automatically on a standby node during its automated promotion&lt;/span&gt;

&lt;span class="c"&gt;# promote PostgreSQL standby to primary&lt;/span&gt;
repmgr standby promote &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/repmgr.conf

&lt;span class="nv"&gt;PROJECT_EXISTS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;doctl projects list | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; 2 &lt;span class="nt"&gt;-gt&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_EXISTS&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"doctl CLI is not properly configured. Exiting."&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nv"&gt;CURRENT_NODE_ASSIGNED_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;doctl compute floating-ip list | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $4}'&lt;/span&gt; | &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 1&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# pg1&lt;/span&gt;
&lt;span class="nv"&gt;STANDBY_NODE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;doctl compute droplet list | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"pg2"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $2}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# pg2&lt;/span&gt;
&lt;span class="nv"&gt;STANDBY_NODE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;doctl compute droplet list | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"pg2"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $1}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# &amp;lt;do droplet resource id&amp;gt;&lt;/span&gt;
&lt;span class="nv"&gt;FLOATING_IP_ADDRESS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;doctl compute floating-ip list | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $1}'&lt;/span&gt; | &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 1&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# &amp;lt;do flip ipv4&amp;gt;&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FLOATING_IP_ADDRESS&lt;/span&gt;&lt;span class="s2"&gt; is currently assigned to &lt;/span&gt;&lt;span class="nv"&gt;$CURRENT_NODE_ASSIGNED_NAME&lt;/span&gt;&lt;span class="s2"&gt;. Reassigning to &lt;/span&gt;&lt;span class="nv"&gt;$STANDBY_NODE_NAME&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;

&lt;span class="c"&gt;# remote address change&lt;/span&gt;
doctl compute floating-ip-action assign &lt;span class="nv"&gt;$FLOATING_IP_ADDRESS&lt;/span&gt; &lt;span class="nv"&gt;$STANDBY_NODE_ID&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
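
&lt;p&gt;&lt;em&gt;The script leans on &lt;code&gt;awk&lt;/code&gt; and &lt;code&gt;tail&lt;/code&gt; to pick fields out of &lt;code&gt;doctl&lt;/code&gt;'s tabular output. A dry run of that extraction against a hypothetical sample of &lt;code&gt;doctl compute floating-ip list&lt;/code&gt; output (addresses from the documentation IP range):&lt;/em&gt;&lt;/p&gt;

```shell
# Dry run of the field extraction used in the promotion script; the table is a
# made-up sample of what `doctl compute floating-ip list` prints.
flip_list='IP              Region    Droplet ID    Droplet Name
203.0.113.10    fra1      123456        pg1'
FLOATING_IP_ADDRESS=$(echo "$flip_list" | awk '{print $1}' | tail -n 1)
CURRENT_NODE_ASSIGNED_NAME=$(echo "$flip_list" | awk '{print $4}' | tail -n 1)
echo "$FLOATING_IP_ADDRESS is currently assigned to $CURRENT_NODE_ASSIGNED_NAME"
```

&lt;p&gt;&lt;em&gt;Keep in mind these extractions are positional: if the column layout changes between &lt;code&gt;doctl&lt;/code&gt; versions, they break silently, which is worth guarding against in production.&lt;/em&gt;&lt;/p&gt;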



&lt;h3&gt;
  
  
  Add the script to the repmgr promote command
&lt;/h3&gt;

&lt;p&gt;Now edit &lt;code&gt;pg2&lt;/code&gt;’s &lt;code&gt;repmgr.conf&lt;/code&gt; file to invoke our &lt;code&gt;promote-standby.sh&lt;/code&gt; script at promotion time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;promote_command = '/var/lib/postgresql/promote-standby.sh'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;service postgresql restart &amp;amp;&amp;amp; repmgrd&lt;/code&gt; to apply the changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final status primary failure test
&lt;/h2&gt;

&lt;p&gt;Unlike before, when you turn off &lt;code&gt;pg1&lt;/code&gt;, &lt;code&gt;pg2&lt;/code&gt; not only promotes itself but also takes over the floating IP, which the app is currently using to perform write operations. As &lt;code&gt;pg2&lt;/code&gt; was already in the &lt;code&gt;sequelize&lt;/code&gt; variable’s &lt;code&gt;read&lt;/code&gt; array, it is now solely responsible for both data reads and writes. Wait a minute for the promotion to happen and test the app again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;node index.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Picture yourself in a boat on a river (yes, it's a Beatles reference). If both your oars break loose and only one can be fixed on the spot, the boat's motion will be impaired and it will be hard to continue the trip.&lt;/p&gt;

&lt;p&gt;In our specific case, before having a floating IP, your app would recover data read capability through the database's fault-tolerance behavior, but it wouldn't be able to perform writes in that condition. Now that your app follows the database's new primary node on automatic promotions, you can heal the cluster and revert it to the initial state under planned conditions and with no rush, as app features are safeguarded.&lt;/p&gt;

&lt;p&gt;You can find the source code for this post on &lt;a href="https://github.com/JscramblerBlog/sequelize-postgres-flip"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>node</category>
      <category>devops</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How To Automate PostgreSQL and repmgr on Vagrant</title>
      <dc:creator>Rui Trigo</dc:creator>
      <pubDate>Tue, 28 Jul 2020 12:33:05 +0000</pubDate>
      <link>https://dev.to/jscrambler/how-to-automate-postgresql-and-repmgr-on-vagrant-omp</link>
      <guid>https://dev.to/jscrambler/how-to-automate-postgresql-and-repmgr-on-vagrant-omp</guid>
      <description>&lt;p&gt;I often get asked if it's possible to build a resilient system with PostgreSQL.&lt;/p&gt;

&lt;p&gt;Considering that resilience should feature cluster high-availability, fault tolerance, and self-healing, it's not an easy answer. But there is a lot to be said about this.&lt;/p&gt;

&lt;p&gt;As of today, we can't achieve that level of resilience with the same ease that MongoDB's built-in features provide. But let's see what we can in fact do with the help of &lt;strong&gt;repmgr&lt;/strong&gt; and some other tooling.&lt;/p&gt;

&lt;p&gt;At the end of this exercise, we will have achieved some things that come in handy, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a few Ansible roles that can be reused for production&lt;/li&gt;
&lt;li&gt;a Vagrantfile for single-command cluster deployment&lt;/li&gt;
&lt;li&gt;a more realistic development environment; staying close to the production state helps foresee "production-exclusive issues"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;build a local development environment PostgreSQL cluster with fault tolerance capabilities;&lt;/li&gt;
&lt;li&gt;develop configuration management code to reuse in production.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;Install &lt;a href="https://www.vagrantup.com/"&gt;Vagrant&lt;/a&gt;, &lt;a href="https://www.virtualbox.org/"&gt;VirtualBox&lt;/a&gt; and &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;vagrant
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;virtualbox &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;virtualbox-dkms
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: An alternative to installing Ansible on your host machine is using the &lt;code&gt;ansible_local&lt;/code&gt; Vagrant provisioner, which needs Ansible installed on the generated virtual machine instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Write a Vagrantfile
&lt;/h3&gt;

&lt;p&gt;You can use &lt;code&gt;vagrant init&lt;/code&gt; to generate the file or simply create it and insert our first blocks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;Vagrant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="s2"&gt;"node&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;false&lt;/span&gt;
      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu/bionic64"&lt;/span&gt;
      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"node&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="ss"&gt;:private_network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"172.16.1.1&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="ss"&gt;:virtualbox&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
        &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;
        &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"node&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go block by block:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the 1st block sets the Vagrant configuration version;&lt;/li&gt;
&lt;li&gt;in the 2nd block, we wrap the following code in a loop so we can reuse it to generate 3 identical VMs;&lt;/li&gt;
&lt;li&gt;OS, hostname, and network settings are set in the 3rd block;&lt;/li&gt;
&lt;li&gt;the 4th block contains VirtualBox-specific settings.&lt;/li&gt;
&lt;/ul&gt;
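
&lt;p&gt;&lt;em&gt;For reference, the loop in the 2nd block expands &lt;code&gt;"172.16.1.1#{n}"&lt;/code&gt; into one private address per VM; a quick shell sketch of the resulting mapping:&lt;/em&gt;&lt;/p&gt;

```shell
# Hostname/IP mapping produced by the Vagrantfile's (1..3) loop.
mapping=$(for n in 1 2 3; do echo "node$n -> 172.16.1.1$n"; done)
echo "$mapping"
```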

&lt;p&gt;You can create the servers with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create all 3 VMs&lt;/span&gt;
vagrant up
&lt;span class="c"&gt;# or create only a specific VM&lt;/span&gt;
vagrant up node1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Add a provisioner
&lt;/h3&gt;

&lt;p&gt;The first step alone already lets us launch 3 working virtual machines. A little exciting, but the best is yet to come.&lt;/p&gt;

&lt;p&gt;Launching virtual machines is a nice feature of Vagrant, but we want these servers to have &lt;strong&gt;PostgreSQL&lt;/strong&gt; and &lt;strong&gt;repmgr&lt;/strong&gt; configured, so we will use configuration management software to help us. This is the moment &lt;strong&gt;Ansible&lt;/strong&gt; walks in to amaze us.&lt;/p&gt;

&lt;p&gt;Vagrant supports several provisioners, two of them being &lt;a href="https://www.vagrantup.com/docs/provisioning/ansible.html"&gt;Ansible&lt;/a&gt; and &lt;a href="https://www.vagrantup.com/docs/provisioning/ansible_local"&gt;Ansible Local&lt;/a&gt;. The difference between them is where Ansible runs, or in other words, where it must be installed. In Vagrant terms, the Ansible provisioner runs on the host machine (your computer) and the Ansible Local provisioner runs on the guest machines (virtual machines). As we already installed Ansible in the prerequisites section, we'll go with the first option.&lt;/p&gt;

&lt;p&gt;Let's add a block for this provisioner in our &lt;code&gt;Vagrantfile&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;Vagrant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="s2"&gt;"node&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;false&lt;/span&gt;
      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu/bionic64"&lt;/span&gt;
      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"node&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="ss"&gt;:private_network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s2"&gt;"172.16.1.1&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

      &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="ss"&gt;:virtualbox&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
        &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;
        &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"node&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;

      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
        &lt;span class="n"&gt;define&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="ss"&gt;:ansible&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;ansible&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
          &lt;span class="n"&gt;ansible&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;limit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"all"&lt;/span&gt;
          &lt;span class="n"&gt;ansible&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;playbook&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"provisioning/playbook.yaml"&lt;/span&gt;

          &lt;span class="n"&gt;ansible&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;host_vars&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"node1"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:connection_host&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"172.16.1.11"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="ss"&gt;:node_id&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="ss"&gt;:role&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"primary"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;

            &lt;span class="s2"&gt;"node2"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:connection_host&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"172.16.1.12"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="ss"&gt;:node_id&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="ss"&gt;:role&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"standby"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;

            &lt;span class="s2"&gt;"node3"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:connection_host&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"172.16.1.13"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="ss"&gt;:node_id&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="ss"&gt;:role&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"witness"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;

    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Ansible&lt;/strong&gt; allows us to configure several servers simultaneously. To take advantage of this feature on &lt;strong&gt;Vagrant&lt;/strong&gt;, we add &lt;code&gt;ansible.limit = "all"&lt;/code&gt; and must wait until all 3 VMs are up. &lt;strong&gt;Vagrant&lt;/strong&gt; knows they are all created because of the condition &lt;code&gt;if n == 3&lt;/code&gt;, which makes &lt;strong&gt;Ansible&lt;/strong&gt; run only after &lt;strong&gt;Vagrant&lt;/strong&gt; has iterated 3 times.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ansible.playbook&lt;/code&gt; is the configuration entry point and &lt;code&gt;ansible.host_vars&lt;/code&gt; contains the &lt;strong&gt;Ansible&lt;/strong&gt; host variables to be used on the tasks and templates we are about to create.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Create an organized Ansible folder structure
&lt;/h3&gt;

&lt;p&gt;If you're already familiar with &lt;strong&gt;Ansible&lt;/strong&gt;, there's little to learn in this section. For those who aren't, it doesn't get too complicated.&lt;/p&gt;

&lt;p&gt;First, we have a folder for all Ansible files, named &lt;code&gt;provisioning&lt;/code&gt;. Inside this folder, we have our aforementioned entry point &lt;code&gt;playbook.yaml&lt;/code&gt;, a &lt;code&gt;group_vars&lt;/code&gt; folder for &lt;strong&gt;Ansible&lt;/strong&gt; group variables, and a &lt;code&gt;roles&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;We could have all &lt;strong&gt;Ansible&lt;/strong&gt; tasks within &lt;code&gt;playbook.yaml&lt;/code&gt;, but the role folder structure helps with organization. You can read the &lt;a href="https://docs.ansible.com/ansible/2.9/user_guide/playbooks_best_practices.html#directory-layout"&gt;Ansible documentation&lt;/a&gt; to learn the best practices. Below, you will find the folder structure for this tutorial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;project_root
| provisioning
|  |  group_vars
|  |  |  all.yaml
|  |  roles
|  |  |  postgres_12
|  |  |  registration
|  |  |  repmgr
|  |  |  ssh
|  |  playbook.yaml
|  Vagrantfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
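
&lt;p&gt;&lt;em&gt;If you want to scaffold that layout in one go, here is a minimal sketch. The demo writes into a temporary directory; in practice you'd run the &lt;code&gt;mkdir&lt;/code&gt;/&lt;code&gt;touch&lt;/code&gt; lines from your project root:&lt;/em&gt;&lt;/p&gt;

```shell
# Create the provisioning tree shown above.
root=$(mktemp -d)
mkdir -p "$root/provisioning/group_vars" \
         "$root/provisioning/roles/postgres_12" \
         "$root/provisioning/roles/registration" \
         "$root/provisioning/roles/repmgr" \
         "$root/provisioning/roles/ssh"
touch "$root/provisioning/group_vars/all.yaml" "$root/provisioning/playbook.yaml"
ls "$root/provisioning"
```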



&lt;h3&gt;
  
  
  4. Ansible roles
&lt;/h3&gt;

&lt;h4&gt;
  
  
  4.1 PostgreSQL role
&lt;/h4&gt;

&lt;p&gt;To configure &lt;code&gt;repmgr&lt;/code&gt; on PostgreSQL, we need to edit two well-known PostgreSQL configuration files: &lt;code&gt;postgresql.conf&lt;/code&gt; and &lt;code&gt;pg_hba.conf&lt;/code&gt;. We will then write our tasks to apply the configurations in &lt;code&gt;tasks/main.yaml&lt;/code&gt;. I named the PostgreSQL role folder &lt;code&gt;postgres_12&lt;/code&gt;, but you can easily target another version if you want to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres_12
|  tasks
|  |  main.yaml
|  templates
|  |  full_postgresql.conf.j2
|  |  pg_hba.conf.j2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can reuse the default file which comes with PostgreSQL installation and add the following lines to whitelist &lt;code&gt;repmgr&lt;/code&gt; database sessions from your trusted VMs. Create an Ansible template file (&lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_templating.html"&gt;Jinja2 format&lt;/a&gt;) like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jinja"&gt;&lt;code&gt;# default configuration (...)

# repmgr
local   replication   repmgr                              trust
host    replication   repmgr      127.0.0.1/32            trust
host    replication   repmgr      &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;node1_ip&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;/32       trust
host    replication   repmgr      &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;node2_ip&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;/32       trust
host    replication   repmgr      &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;node3_ip&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;/32       trust

local   repmgr        repmgr                              trust
host    repmgr        repmgr      127.0.0.1/32            trust
host    repmgr        repmgr      &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;node1_ip&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;/32       trust
host    repmgr        repmgr      &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;node2_ip&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;/32       trust
host    repmgr        repmgr      &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;node3_ip&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;/32       trust
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
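
&lt;p&gt;&lt;em&gt;To make the substitution concrete, this is roughly what Ansible's template module does with those lines, assuming node1's IP from the Vagrantfile. The &lt;code&gt;sed&lt;/code&gt; call is a minimal stand-in for illustration, not the real Jinja2 engine:&lt;/em&gt;&lt;/p&gt;

```shell
# Substitute one of the template variables the way Ansible's Jinja2
# rendering would, using node1's IP (172.16.1.11) from the Vagrantfile.
line='host    replication   repmgr      {{ node1_ip }}/32       trust'
rendered=$(echo "$line" | sed 's/{{ node1_ip }}/172.16.1.11/')
echo "$rendered"
```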



&lt;p&gt;In the same fashion as &lt;code&gt;pg_hba.conf&lt;/code&gt;, you can reuse the &lt;code&gt;postgresql.conf&lt;/code&gt; default file and add a few more replication related settings to the bottom of the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jinja"&gt;&lt;code&gt;# default configuration (...)

# repmgr
listen_addresses = '*'
shared_preload_libraries = 'repmgr'
wal_level = replica
max_wal_senders = 5
wal_keep_segments = 64
max_replication_slots = 5
hot_standby = on
wal_log_hints = on
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tasks below will install PostgreSQL and apply our configurations. Their names are self-explanatory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add PostgreSQL apt key&lt;/span&gt;
  &lt;span class="na"&gt;apt_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://www.postgresql.org/media/keys/ACCC4CF8.asc&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add PostgreSQL repository&lt;/span&gt;
  &lt;span class="na"&gt;apt_repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ansible_distribution_release = xenial, bionic, focal&lt;/span&gt;
    &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deb http://apt.postgresql.org/pub/repos/apt/ {{ ansible_distribution_release }}-pgdg main&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install PostgreSQL &lt;/span&gt;&lt;span class="m"&gt;12&lt;/span&gt;
  &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql-12&lt;/span&gt;
    &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Copy database configuration&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;full_postgresql.conf.j2&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/postgresql/12/main/postgresql.conf&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0644'&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Copy user access configuration&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pg_hba.conf.j2&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/postgresql/12/main/pg_hba.conf&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0640'&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4.2 SSH server configuration
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh
|  files
|  |  keys
|  |  |  id_rsa
|  |  |  id_rsa.pub
|  tasks
|  |  main.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate a key pair to share across our virtual machines so they can access each other. If you don't know how to do it, &lt;a href="https://docs.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent#generating-a-new-ssh-key"&gt;this link can help&lt;/a&gt;. Just make sure the key file paths match the paths in the next step.&lt;/p&gt;
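&lt;p&gt;If you prefer a non-interactive one-liner, the sketch below generates the pair straight into the role's expected path; the key type, size, and comment are illustrative choices, not requirements.&lt;/p&gt;

```shell
# create the directory the ssh role's copy tasks read from
mkdir -p keys
# generate a 4096-bit RSA key pair with an empty passphrase
ssh-keygen -t rsa -b 4096 -N "" -C "pg-cluster" -f keys/id_rsa
```

&lt;p&gt;Run it from the &lt;code&gt;ssh/files&lt;/code&gt; directory so the resulting &lt;code&gt;keys/id_rsa&lt;/code&gt; and &lt;code&gt;keys/id_rsa.pub&lt;/code&gt; match the paths used by the tasks below.&lt;/p&gt;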

&lt;p&gt;The tasks below will install the OpenSSH server and apply our configurations. Their names are self-explanatory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install OpenSSH&lt;/span&gt;
  &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openssh-server&lt;/span&gt;
    &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create postgres SSH directory&lt;/span&gt;
  &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0755'&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/postgresql/.ssh/&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;directory&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Copy SSH private key&lt;/span&gt;
  &lt;span class="na"&gt;copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;keys/id_rsa"&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/postgresql/.ssh/id_rsa&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0600'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Copy SSH public key&lt;/span&gt;
  &lt;span class="na"&gt;copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;keys/id_rsa.pub"&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/postgresql/.ssh/id_rsa.pub&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0644'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add key to authorized keys file&lt;/span&gt;
  &lt;span class="na"&gt;authorized_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('file',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'keys/id_rsa.pub')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Restart SSH service&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sshd&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restarted&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4.3 repmgr installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;repmgr
|  tasks
|  |  main.yaml
|  templates
|  |  repmgr.conf.j2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside &lt;code&gt;repmgr.conf&lt;/code&gt;, we configure settings such as the promote command, the follow command, timeouts, and the retry count for failure scenarios. We will copy this file to its default location in &lt;code&gt;/etc&lt;/code&gt; to avoid passing the &lt;code&gt;-f&lt;/code&gt; argument to the &lt;code&gt;repmgr&lt;/code&gt; command every time.&lt;/p&gt;

&lt;p&gt;The tasks below will install &lt;code&gt;repmgr&lt;/code&gt; and apply our configurations. Their names are self-explanatory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Download repmgr repository installer&lt;/span&gt;
  &lt;span class="na"&gt;get_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp/repmgr-installer.sh&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0700&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://dl.2ndquadrant.com/default/release/get/deb&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Execute repmgr repository installer&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp/repmgr-installer.sh&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install repmgr for PostgreSQL {{ pg_version }}&lt;/span&gt;
  &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql-{{ pg_version }}-repmgr&lt;/span&gt;
    &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup repmgr user and database&lt;/span&gt;
  &lt;span class="na"&gt;become_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;ignore_errors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;createuser --replication --createdb --createrole --superuser repmgr &amp;amp;&amp;amp;&lt;/span&gt;
    &lt;span class="s"&gt;psql -c 'ALTER USER repmgr SET search_path TO repmgr_test, "$user", public;' &amp;amp;&amp;amp;&lt;/span&gt;
    &lt;span class="s"&gt;createdb repmgr --owner=repmgr&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Copy repmgr configuration&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repmgr.conf.j2&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/repmgr.conf&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Restart PostgreSQL&lt;/span&gt;
  &lt;span class="na"&gt;systemd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restarted&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4.4 repmgr node registration
&lt;/h4&gt;

&lt;p&gt;Finally, we reach the moment where fault tolerance is established.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;registration
|  tasks
|  |  main.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;node_id = {{ node_id }}&lt;/span&gt;
&lt;span class="s"&gt;node_name = 'node{{ node_id }}'&lt;/span&gt;
&lt;span class="s"&gt;conninfo = 'host={{ connection_host }} user=repmgr dbname=repmgr'&lt;/span&gt;
&lt;span class="s"&gt;data_directory = '/var/lib/postgresql/{{ pg_version }}/main'&lt;/span&gt;
&lt;span class="s"&gt;use_replication_slots = yes&lt;/span&gt;
&lt;span class="s"&gt;reconnect_attempts = &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;
&lt;span class="s"&gt;reconnect_interval = &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="s"&gt;failover = automatic&lt;/span&gt;
&lt;span class="s"&gt;pg_bindir = '/usr/lib/postgresql/{{ pg_version }}/bin'&lt;/span&gt;
&lt;span class="s"&gt;promote_command = 'repmgr standby promote -f /etc/repmgr.conf'&lt;/span&gt;
&lt;span class="s"&gt;follow_command = 'repmgr standby follow -f /etc/repmgr.conf'&lt;/span&gt;
&lt;span class="s"&gt;log_level = INFO&lt;/span&gt;
&lt;span class="s"&gt;log_file = '/var/log/postgresql/repmgr.log'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This role was built according to the &lt;code&gt;repmgr&lt;/code&gt; documentation and it might be the most complex role, as it needs to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run some commands as root and others as Postgres;&lt;/li&gt;
&lt;li&gt;stop services between reconfigurations;&lt;/li&gt;
&lt;li&gt;have different tasks for primary, standby, and &lt;a href="https://repmgr.org/docs/current/repmgr-witness-register.html"&gt;witness&lt;/a&gt; role configuration (if you want node3 to also be a standby node, just assign &lt;code&gt;role: standby&lt;/code&gt; in the Vagrantfile &lt;code&gt;ansible.host_vars&lt;/code&gt;)
&lt;/li&gt;
&lt;/ul&gt;
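&lt;p&gt;For reference, the role assignment lives in the &lt;code&gt;Vagrantfile&lt;/code&gt;'s &lt;code&gt;ansible.host_vars&lt;/code&gt;. A hypothetical fragment for node3 — the surrounding block and variable names must match your own Vagrantfile — could look like this:&lt;/p&gt;

```ruby
# hypothetical Vagrantfile fragment: give node3 the standby role instead of witness
node.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yaml"
  ansible.host_vars = {
    "node3" => { "node_id" => 3, "role" => "standby" }
  }
end
```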

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Register primary node&lt;/span&gt;
  &lt;span class="na"&gt;become_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repmgr primary register&lt;/span&gt;
  &lt;span class="na"&gt;ignore_errors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;role == "primary"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Stop PostgreSQL&lt;/span&gt;
  &lt;span class="na"&gt;systemd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stopped&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;role == "standby"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Clean up PostgreSQL data directory&lt;/span&gt;
  &lt;span class="na"&gt;become_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/postgresql/{{ pg_version }}/main&lt;/span&gt;
    &lt;span class="na"&gt;force&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;absent&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;role == "standby"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Clone primary node data&lt;/span&gt;
  &lt;span class="na"&gt;become_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repmgr -h {{ node1_ip }} -U repmgr -d repmgr standby clone&lt;/span&gt;
  &lt;span class="na"&gt;ignore_errors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;role == "standby"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start PostgreSQL&lt;/span&gt;
  &lt;span class="na"&gt;systemd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;role == "standby"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Register {{ role }} node&lt;/span&gt;
  &lt;span class="na"&gt;become_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repmgr {{ role }} register -F&lt;/span&gt;
  &lt;span class="na"&gt;ignore_errors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;role != "primary"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start repmgrd&lt;/span&gt;
  &lt;span class="na"&gt;become_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repmgrd&lt;/span&gt;
  &lt;span class="na"&gt;ignore_errors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Set group variables
&lt;/h2&gt;

&lt;p&gt;Create a file &lt;code&gt;group_vars/all.yaml&lt;/code&gt; to set your VMs' IP addresses and the PostgreSQL version you would like to use. Like the &lt;code&gt;host_vars&lt;/code&gt; set in the &lt;code&gt;Vagrantfile&lt;/code&gt;, these variables will fill the placeholders in the templates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;client_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.1.1"&lt;/span&gt;
&lt;span class="na"&gt;node1_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.1.11"&lt;/span&gt;
&lt;span class="na"&gt;node2_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.1.12"&lt;/span&gt;
&lt;span class="na"&gt;node3_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;172.16.1.13"&lt;/span&gt;
&lt;span class="na"&gt;pg_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;12"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Put all pieces together with a playbook
&lt;/h2&gt;

&lt;p&gt;The only thing missing is the playbook itself. Create a file named &lt;code&gt;playbook.yaml&lt;/code&gt; and invoke the roles we have been developing. &lt;code&gt;gather_facts&lt;/code&gt; is an &lt;strong&gt;Ansible&lt;/strong&gt; property that fetches operating system data, like the distribution release (&lt;code&gt;ansible_distribution_release&lt;/code&gt;), among other useful variables. You can also read these variables with the &lt;a href="https://docs.ansible.com/ansible/latest/modules/setup_module.html"&gt;Ansible setup module&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;postgres_12&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ssh&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;repmgr&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;registration&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. Start cluster
&lt;/h2&gt;

&lt;p&gt;It's finished. You can now start your cluster with &lt;code&gt;vagrant up&lt;/code&gt; and then perform your connections and failover tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing cluster failover
&lt;/h2&gt;

&lt;p&gt;Now that our cluster is up and configured, you can start by shutting down your standby node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# save standby state and shut it down ungracefully&lt;/span&gt;
vagrant &lt;span class="nb"&gt;suspend &lt;/span&gt;node2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see that the cluster is operating normally. Bring the standby node back and it will stay that way.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# bring standby back online after suspension&lt;/span&gt;
vagrant resume node2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How about taking down the primary node?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# save primary state and shut it down ungracefully&lt;/span&gt;
vagrant &lt;span class="nb"&gt;suspend &lt;/span&gt;node1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, as &lt;code&gt;repmgrd&lt;/code&gt; is enabled, the standby node will retry connecting to the primary node the configured number of times (&lt;code&gt;reconnect_attempts = 5&lt;/code&gt;) and, if it obtains no response, will promote itself to primary and take over write operations on the PostgreSQL cluster. Success!&lt;/p&gt;
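&lt;p&gt;With the values from our &lt;code&gt;repmgr.conf&lt;/code&gt;, the worst-case detection window before promotion is easy to estimate (a rough lower bound — the election and promotion themselves add a little extra time):&lt;/p&gt;

```shell
reconnect_attempts=5   # from repmgr.conf
reconnect_interval=1   # seconds between attempts
echo "detection window: $((reconnect_attempts * reconnect_interval)) seconds"
# prints "detection window: 5 seconds"
```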

&lt;p&gt;To join the cluster again, the old primary node will have to discard its current data, clone the new primary's data, and register as a new standby.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant resume node1
vagrant ssh node1
service postgresql stop
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /var/lib/postgresql/12/main
repmgr &lt;span class="nt"&gt;-h&lt;/span&gt; 172.16.1.12 &lt;span class="nt"&gt;-U&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; repmgr standby clone
service postgresql start
repmgr standby register &lt;span class="nt"&gt;-F&lt;/span&gt;
repmgrd
repmgr service status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This last command shows us that the cluster is working properly, but with inverted roles.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;postgres@node1:~&lt;span class="nv"&gt;$ &lt;/span&gt;repmgr service status
 ID | Name  | Role  | Status    | Upstream | repmgrd | PID   | Paused? | Upstream last seen
&lt;span class="nt"&gt;----&lt;/span&gt;+-------+---------+-----------+----------+---------+-------+---------+--------------------
 1  | node1 | standby |   running | node2   | running | 22490 | no      | n/a                
 2  | node2 | primary | &lt;span class="k"&gt;*&lt;/span&gt; running |         | running | 22548 | no      | 0 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; ago   
 3  | node3 | witness | &lt;span class="k"&gt;*&lt;/span&gt; running | node2   | running | 22535 | no      | 0 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; ago   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing wrong with this, but let's &lt;a href="https://repmgr.org/docs/current/repmgr-standby-switchover.html"&gt;make these nodes switch their roles&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ssh in and out just to add host key to known_hosts file&lt;/span&gt;
ssh &amp;lt;current_primary_ip_address&amp;gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;StrictHostKeyChecking&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;no
&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;span class="c"&gt;# trigger switchover on current standby&lt;/span&gt;
repmgr standby switchover &lt;span class="nt"&gt;--siblings-follow&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we're back to the initial state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;postgres@node1:~&lt;span class="nv"&gt;$ &lt;/span&gt;repmgr service status
 ID | Name  | Role  | Status    | Upstream | repmgrd | PID   | Paused? | Upstream last seen
&lt;span class="nt"&gt;----&lt;/span&gt;+-------+---------+-----------+----------+---------+-------+---------+--------------------
 1  | node1 | primary | &lt;span class="k"&gt;*&lt;/span&gt; running |         | running | 22490 | no      | n/a                
 2  | node2 | standby |   running | node1   | running | 22548 | no      | 0 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; ago   
 3  | node3 | witness | &lt;span class="k"&gt;*&lt;/span&gt; running | node1   | running | 22535 | no      | 0 second&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; ago   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;We managed to build a fault-tolerant PostgreSQL cluster using &lt;strong&gt;Vagrant&lt;/strong&gt; and &lt;strong&gt;Ansible&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;High availability is a big challenge. As in life, we are only truly prepared for the biggest challenges when we have already faced their conditions.&lt;/p&gt;

&lt;p&gt;Problems unique to production environments are natural and hard to anticipate. Bridging the gap between development and production is one way to prevent deployment and production issues. We can make some efforts toward that objective, and that is precisely what we achieved with this high-availability database setup.&lt;/p&gt;

&lt;p&gt;You can find the source code of this tutorial &lt;a href="https://github.com/JscramblerBlog/postgres-repmgr-vagrant"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>vagrant</category>
      <category>devops</category>
      <category>data</category>
    </item>
    <item>
      <title>How To Achieve Mongo Replication on Docker</title>
      <dc:creator>Rui Trigo</dc:creator>
      <pubDate>Thu, 21 May 2020 16:29:27 +0000</pubDate>
      <link>https://dev.to/jscrambler/how-to-achieve-mongo-replication-on-docker-1n97</link>
      <guid>https://dev.to/jscrambler/how-to-achieve-mongo-replication-on-docker-1n97</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/jscrambler/how-we-achieved-mongodb-replication-on-docker-34p7"&gt;previous post&lt;/a&gt;, we showed how we used MongoDB replication to solve several problems we were facing.&lt;/p&gt;

&lt;p&gt;Replication became part of a bigger migration that brought stability, fault tolerance, and performance to our systems. In this post, we will dive into the practical preparation of that migration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;I noticed the lack of tutorials on setting up Mongo replication in Docker containers and wanted to fill this gap, along with some tests to see how a Mongo cluster behaves in specific scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;p&gt;To improve our production database and solve the identified limitations, our clearest objectives at this point were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upgrading Mongo v3.4 and v3.6 instances to v4.2 (all community edition);&lt;/li&gt;
&lt;li&gt;Evolving Mongo data backup strategy from &lt;code&gt;mongodump&lt;/code&gt;/&lt;code&gt;mongorestore&lt;/code&gt; on a mirror server to Mongo Replication (active working backup server);&lt;/li&gt;
&lt;li&gt;Merging Mongo Docker containers into a single container and Mongo Docker volumes into a single volume.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Prepare applications for Mongo connection string change
&lt;/h3&gt;

&lt;p&gt;When our applications were developed, there was no need to pass the Mongo connection URI through a variable, as most of the time Mongo was deployed as a microservice in the same stack as the application containers. With the centralization of our Mongo databases, we changed the application code to read the connection string from a variable, which we can update in our CI/CD software whenever needed.&lt;/p&gt;
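&lt;p&gt;As an illustration — the hostnames, port, and database name below are placeholders — the application can now read a replica-set-aware connection string from its environment:&lt;/p&gt;

```shell
# the replicaSet value must match the name passed to mongod via --replSet
export MONGO_URI="mongodb://host1:27001,host2:27001,host3:27001/app1?replicaSet=rs-myapp"
echo "$MONGO_URI"
```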

&lt;h3&gt;
  
  
  2. Generate and deploy keyfiles
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/"&gt;MongoDB’s official documentation&lt;/a&gt; has step-by-step instructions on how to setup Keyfile authentication on a Mongo Cluster. Using keyfile authentication enforces &lt;a href="https://docs.mongodb.com/manual/core/security-transport-encryption/"&gt;Transport Encryption&lt;/a&gt; over SSL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl rand &lt;span class="nt"&gt;-base64&lt;/span&gt; 756 &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &amp;lt;path-to-keyfile&amp;gt;
&lt;span class="nb"&gt;chmod &lt;/span&gt;400 &amp;lt;path-to-keyfile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The keyfile is passed through the &lt;code&gt;--keyFile&lt;/code&gt; argument on the &lt;code&gt;mongod&lt;/code&gt; command, as shown in the next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.mongodb.com/manual/tutorial/enable-authentication"&gt;User authentication&lt;/a&gt; and role management is out of the scope of this post, but if you are going to use it, configure it before proceeding beyond this step.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Deploy existing containers with the &lt;code&gt;replSet&lt;/code&gt; argument
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;mongod &lt;span class="nt"&gt;--keyfile&lt;/span&gt; /keyfile &lt;span class="nt"&gt;--replSet&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rs-myapp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Define ports
&lt;/h3&gt;

&lt;p&gt;Typically, in this step, you simply choose a server network port to serve your MongoDB. Mongo’s default port is 27017, but since in our case we had 4 apps in our production environment, we defined 4 host ports. You should always choose one network port per Mongo Docker container and stick with it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;27001 for app 1&lt;/li&gt;
&lt;li&gt;27002 for app 2&lt;/li&gt;
&lt;li&gt;27003 for app 3&lt;/li&gt;
&lt;li&gt;27004 for app 4&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In step 10, after replication is working, we'll use and expose only one port.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Assemble a cluster composed of 3 servers in different datacenters and regions
&lt;/h3&gt;

&lt;p&gt;Preferably, set up the 3 servers in different datacenters, or even different regions if possible. This allows for inter-regional availability: aside from latency changes, your system will survive datacenter blackouts and disasters.&lt;/p&gt;

&lt;p&gt;Why 3? It is the minimum number for a worthwhile Mongo cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 node: can't have high availability by itself;&lt;/li&gt;
&lt;li&gt;2 nodes: no automatic failover — when one of them fails, the other one can't elect itself as primary alone;&lt;/li&gt;
&lt;li&gt;3 nodes: the minimum worthwhile number — when one of them fails, the other two vote for the next primary node;&lt;/li&gt;
&lt;li&gt;4 nodes: has the same benefits as 3 nodes plus one extra copy of the data (pricier);&lt;/li&gt;
&lt;li&gt;5 nodes: can withstand 2 node failures at the same time (even pricier).&lt;/li&gt;
&lt;/ul&gt;
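
&lt;p&gt;The failure tolerance behind these numbers follows from majority voting: a replica set with N voting members needs floor(N/2) + 1 votes to elect a primary. A quick sketch in plain JavaScript (not Mongo API):&lt;/p&gt;

```javascript
// How many members can fail while the remaining ones can still elect
// a primary? A majority of the ORIGINAL membership must stay up to vote.
function failureTolerance(members) {
  const majority = Math.floor(members / 2) + 1;
  return members - majority;
}

for (const n of [1, 2, 3, 4, 5]) {
  console.log(n + ' node(s): tolerates ' + failureTolerance(n) + ' failure(s)');
}
```

Note how this matches the list above: 4 nodes tolerate the same single failure as 3, and only 5 nodes survive 2 simultaneous failures.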

&lt;p&gt;There are Mongo clusters with &lt;a href="https://docs.mongodb.com/manual/core/replica-set-arbiter/"&gt;arbiters&lt;/a&gt;, but that is out of the scope of this post.&lt;/p&gt;


&lt;h3&gt;
  
  
  6. Define your replica set members’ priorities
&lt;/h3&gt;

&lt;p&gt;Adjust your priorities to your cluster size, hardware, location, or other useful criteria.&lt;/p&gt;

&lt;p&gt;In our case, we went for:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;appserver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="c1"&gt;// temporarily primary&lt;/span&gt;
&lt;span class="nx"&gt;node1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="c1"&gt;// designated primary&lt;/span&gt;
&lt;span class="nx"&gt;node2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="c1"&gt;// designated first secondary being promoted&lt;/span&gt;
&lt;span class="nx"&gt;node3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="c1"&gt;// designated second secondary being promoted&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We set the node which currently held the data to &lt;code&gt;priority: 10&lt;/code&gt;, since it had to remain primary during the sync phase, while the rest of the cluster was not yet ready. This allowed us to keep serving database queries while the data was being replicated.&lt;/p&gt;
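
&lt;p&gt;In the mongo shell, these priorities go into the configuration document passed to &lt;code&gt;rs.initiate()&lt;/code&gt; (or applied later via &lt;code&gt;rs.reconfig()&lt;/code&gt;). The host names below are placeholders:&lt;/p&gt;

```javascript
// Replica set configuration document, as accepted by rs.initiate().
// Hostnames and the set name are placeholders for illustration.
const rsConfig = {
  _id: 'rs-myapp',
  members: [
    { _id: 0, host: 'appserver:27017', priority: 10 }, // temporary primary
    { _id: 1, host: 'node1:27017', priority: 3 },      // designated primary
    { _id: 2, host: 'node2:27017', priority: 2 },
    { _id: 3, host: 'node3:27017', priority: 1 },
  ],
};

// The healthy member with the highest priority wins elections.
const preferred = rsConfig.members.reduce((a, b) =>
  a.priority >= b.priority ? a : b
);
console.log('preferred primary: ' + preferred.host);
```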

&lt;h3&gt;
  
  
  7. Deploy Mongo containers scaling to N* on a Mongo cluster
&lt;/h3&gt;

&lt;p&gt;(*N being the number of Mongo cluster nodes).&lt;/p&gt;

&lt;p&gt;Use an orchestrator to deploy 4 Mongo containers in the environment, scaling to 3.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4 is the number of different Mongo instances;&lt;/li&gt;
&lt;li&gt;3 is the number of Mongo cluster nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our case, this meant having 12 containers in the environment temporarily.&lt;/p&gt;


&lt;center&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R-MP1A78--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/05/jscrambler-blog-how-to-achieve-mongodb-replication-docker-env.png" alt="Environment Containers"&gt;&lt;/center&gt;

&lt;p&gt;Remember to deploy them as replica set members, as shown in step 3.&lt;/p&gt;
&lt;h3&gt;
  
  
  8. Replication time!
&lt;/h3&gt;

&lt;p&gt;This is the moment when we start watching database users and collection data getting synced. You can enter the &lt;code&gt;mongo&lt;/code&gt; shell of a Mongo container (preferably primary) to check the replication progress. These two commands will show you the status, priority and other useful info:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;rs.status&lt;span class="o"&gt;()&lt;/span&gt;
&lt;span class="c"&gt;# and&lt;/span&gt;
rs.conf&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When all members reach the secondary state, you can start testing. Stop the primary node to witness secondary promotion. This process is almost instantaneous.&lt;/p&gt;
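
&lt;p&gt;The &lt;code&gt;rs.status()&lt;/code&gt; document reports each member's state in a &lt;code&gt;stateStr&lt;/code&gt; field. As a sketch — plain JavaScript run outside the mongo shell, against a mocked status document — a readiness check could look like this:&lt;/p&gt;

```javascript
// rs.status() lists one entry per member, each with a stateStr field
// ('PRIMARY', 'SECONDARY', 'STARTUP2', 'RECOVERING', ...).
function clusterReady(status) {
  return status.members.every(
    (m) => m.stateStr === 'PRIMARY' || m.stateStr === 'SECONDARY'
  );
}

// Mocked healthy 3-member set, for illustration only.
const mockStatus = {
  members: [
    { name: 'node1:27017', stateStr: 'PRIMARY' },
    { name: 'node2:27017', stateStr: 'SECONDARY' },
    { name: 'node3:27017', stateStr: 'SECONDARY' },
  ],
};
console.log(clusterReady(mockStatus)); // prints true
```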

&lt;p&gt;You can stop the primary member by issuing the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop &amp;lt;mongo_docker_container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When you bring it back online, the cluster will give back the primary role to the member with the highest &lt;code&gt;priority&lt;/code&gt;. This process takes a few seconds, as it is not critical.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker start &amp;lt;mongo_docker_container_name_or_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  9. Extract Mongo containers from application servers
&lt;/h3&gt;

&lt;p&gt;If everything is working at this point, you can stop the Mongo instance on which we previously set &lt;code&gt;priority: 10&lt;/code&gt; (stop command in the prior step) and &lt;a href="https://docs.mongodb.com/manual/reference/method/rs.remove/"&gt;remove that member from the replica set passing its hostname as parameter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Repeat this step for every Mongo container you had in step 4.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Migrate backups and change which server they read the data from
&lt;/h3&gt;

&lt;p&gt;As mentioned in the &lt;a href="https://blog.jscrambler.com/how-we-achieved-mongodb-replication-on-docker/"&gt;previous post&lt;/a&gt;, one handy feature of MongoDB replication is having a secondary member request the data for &lt;code&gt;mongodump&lt;/code&gt; from another secondary member.&lt;/p&gt;

&lt;p&gt;Previously, we had the application + database server performing &lt;code&gt;mongodump&lt;/code&gt; of its data. As we moved the data to the cluster, we also moved the automated backup tools to a secondary member, to take advantage of said feature.&lt;/p&gt;

&lt;h3&gt;
  
  
  11. Merge data from 4 Mongo Docker containers into one database
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;If you only had 1 Mongo Docker container at the start, skip to step 12.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although simplicity suggested doing this &lt;strong&gt;before&lt;/strong&gt; step 1, we decided to act cautiously and keep apps and databases working as close to their previous setup as possible until we had mastered Mongo replication in our environment.&lt;/p&gt;

&lt;p&gt;At this stage, we chose to import data from all Mongo databases to a single Mongo database — the one which contained the most data. When working with MongoDB, remember this line from the &lt;a href="https://docs.mongodb.com/manual/core/databases-and-collections/"&gt;official docs&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In MongoDB, databases hold collections of documents.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That means we can take advantage of &lt;code&gt;mongodump --db &amp;lt;dbname&amp;gt;&lt;/code&gt; and &lt;code&gt;mongorestore --db &amp;lt;dbname&amp;gt;&lt;/code&gt; to merge Mongo data into the same instance (this goes for non-Docker as well).&lt;/p&gt;

&lt;h3&gt;
  
  
  12. Monitor cluster nodes and backups
&lt;/h3&gt;

&lt;p&gt;Once you have merged your databases into the same instance, you can shut down the other instances. Then, you only need to monitor the application and perform backups of that single instance. Don't forget to monitor the new cluster hardware as well: even with automatic fault-tolerance, it is unwise to leave our systems unwatched. As a hint, there is a &lt;a href="https://docs.mongodb.com/manual/reference/built-in-roles/#clusterMonitor"&gt;dedicated role for that&lt;/a&gt; called &lt;code&gt;clusterMonitor&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Sharing this story about our database migration will hopefully help the community — especially those not yet taking full advantage of MongoDB — to start seeing MongoDB in a more mature and reliable way.&lt;/p&gt;

&lt;p&gt;Even though this is not a regular MongoDB replication "how-to" tutorial, this story shows important details about MongoDB’s internal features, our effort not to leave any detail behind, and, again, the benefits of such technology. That's what I believe technology is for — helping humans with their needs.&lt;/p&gt;




&lt;p&gt;In case you're also interested in learning more about application security, we recommend reading our free data sheet on &lt;a href="https://jscrambler.com/code-integrity/javascript-security-data-sheet?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=how-to-achieve-mongodb"&gt;JavaScript Security Threats&lt;/a&gt;, which provides an overview of the most relevant attacks to JavaScript apps and how to prevent them.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>docker</category>
      <category>devops</category>
      <category>data</category>
    </item>
    <item>
      <title>How We Achieved MongoDB Replication on Docker</title>
      <dc:creator>Rui Trigo</dc:creator>
      <pubDate>Tue, 21 Apr 2020 13:27:46 +0000</pubDate>
      <link>https://dev.to/jscrambler/how-we-achieved-mongodb-replication-on-docker-34p7</link>
      <guid>https://dev.to/jscrambler/how-we-achieved-mongodb-replication-on-docker-34p7</guid>
      <description>&lt;h2&gt;
  
  
  Prologue
&lt;/h2&gt;

&lt;p&gt;Picture your database server. Now imagine it somehow breaks. Despair sets in and clouds the response.&lt;/p&gt;

&lt;p&gt;Maybe you lost data. Maybe you had too much downtime. Maybe you lost work hours. Maybe you lost precious time and money. High Availability is easily dismissed as a nice-to-have, but in times like these, you value it more.&lt;/p&gt;

&lt;p&gt;MongoDB comes with clustering features that provide more storage capacity (sharding) and more reliability (replication). This article will focus on MongoDB replication on Docker containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;We felt the need to improve our production database and data backup strategy, as we identified that it was hurting server performance and that the disaster recovery process was painful in most procedures.&lt;/p&gt;

&lt;p&gt;So, we started to design a migration plan to solve this. We also took the chance to update the Mongo version in use to benefit from new features and security improvements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before
&lt;/h2&gt;


&lt;center&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vjS4JWe8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/04/jscrambler-blog-how-we-achieved-mongo-db-replication-before.png" alt="Mongo Environment Before"&gt;&lt;/center&gt;
&lt;h3&gt;
  
  
  Old Production Environment
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Production servers&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;server_1: application services containers + 2 mongo containers&lt;/li&gt;
&lt;li&gt;server_2: application services containers + 1 mongo container&lt;/li&gt;
&lt;li&gt;server_3: application services containers + 1 mongo container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Mirror servers&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;mirror_server_1: application services containers + 2 mongo containers (updated once a day)&lt;/li&gt;
&lt;li&gt;mirror_server_2: application services containers + 1 mongo container (updated once a day)&lt;/li&gt;
&lt;li&gt;(services and data on server_3 were not in the mirror environment)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Mongo data was kept in sync using a mongodump/mongorestore strategy.&lt;/p&gt;
&lt;h3&gt;
  
  
  mongodump/mongorestore Strategy
&lt;/h3&gt;

&lt;p&gt;The first part of the mongodump/mongorestore strategy consists of Cron jobs that export the data from the Mongo database with the &lt;code&gt;mongodump&lt;/code&gt; utility, so it can later be restored into a different Mongo instance.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;mongodump is a utility for creating a binary export of the contents of a database. mongodump can export data from either mongod or mongos instances; i.e. can export data from standalone, replica set, and sharded cluster deployments.&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;mongodump &lt;span class="nt"&gt;--host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb1.example.net &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3017 &lt;span class="nt"&gt;--username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;user &lt;span class="nt"&gt;--password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"pass"&lt;/span&gt; &lt;span class="nt"&gt;--out&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/opt/backup/mongodump-2013-10-24
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The command above writes a Mongo data dump named &lt;em&gt;mongodump-2013-10-24&lt;/em&gt; into the &lt;em&gt;/opt/backup&lt;/em&gt; directory, from the connection to &lt;em&gt;mongodb1.example.net&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The second part of this strategy is restoring the data from the dump into the second database with the mongorestore utility.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The mongorestore program loads data from either a binary database dump created by mongodump or the standard input (starting in version 3.0.0) into a mongod or mongos instance.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;mongorestore &lt;span class="nt"&gt;--host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mongodb1.example.net &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3017 &lt;span class="nt"&gt;--username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;user  &lt;span class="nt"&gt;--authenticationDatabase&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin /opt/backup/mongodump-2013-10-24
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The command above writes the data from the &lt;em&gt;/opt/backup/mongodump-2013-10-24&lt;/em&gt; dump to the Mongo instance on the &lt;em&gt;mongodb1.example.net&lt;/em&gt; connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WARNING&lt;/strong&gt;: The mongorestore utility is NOT incremental. Restoring a database with the &lt;code&gt;--drop&lt;/code&gt; option deletes all existing data prior to writing the mongodump data.&lt;/p&gt;


&lt;h3&gt;
  
  
  Problems and Limitations
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Late backup data&lt;/strong&gt;: Since mongodump ran daily at a specified time, the data from that time until the moment of the database switch would be lost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unavailability&lt;/strong&gt;: The mongodump and mongorestore utilities took several hours to complete on the biggest databases. During a restore, nothing could be done, as the Mongo data can't be used until mongorestore finishes. Also, switching from the production environment to the mirror environment was a manual process which took some time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High disk usage&lt;/strong&gt;: Restoring a whole database (or several DBs simultaneously) would consume disk inodes and take a toll on disk usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability limitations&lt;/strong&gt;: Using a Mongo Docker instance for each database, even distributed across different servers, brought the need to set up an instance, different network addresses and ports, and new backup containers (mongo-tools) for each one. A Mongo cluster would fit our applications' needs and make database administration much simpler.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reserved memory&lt;/strong&gt;: By default, each Mongo container will try to cache up to 60% of the available memory. Since we previously had 1 Mongo container on each of two application servers and 2 containers on the same application server, all of them kept at least 60% of memory busy (in use + cached). Whenever there is more than one Mongo container, they will compete for all available memory, each trying to reach 60% (2 -&amp;gt; 120%, 3 -&amp;gt; 180%, 4 -&amp;gt; 240%, etc.). For these reasons, it is very important to set adequate container memory limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Number of Docker volumes&lt;/strong&gt;: MongoDB data, dumps, and metadata were scattered across several Docker volumes, mapped to different filesystem folders. Merging these databases would allow centralizing this data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security and features&lt;/strong&gt;: Upgrading to Mongo 4 would solve security issues and bring more features to improve DB performance and replication, like &lt;a href="https://www.mongodb.com/blog/post/mongodb-40-nonblocking-secondary-reads"&gt;non-blocking secondary reads&lt;/a&gt;, &lt;a href="https://www.percona.com/blog/2019/08/16/long-awaited-mongodb-4-2-ga-has-landed/"&gt;transactions and flow control&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
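
&lt;p&gt;The &lt;strong&gt;reserved memory&lt;/strong&gt; point above can be addressed by capping both the container memory and the storage engine cache. A Compose-style sketch follows — the values are illustrative, not recommendations:&lt;/p&gt;

```yaml
# Illustrative values only: cap the container's memory and tell
# WiredTiger how much it may use for its cache, so multiple Mongo
# containers on one host stop competing for all available RAM.
services:
  mongo:
    image: mongo:4.2
    command: mongod --wiredTigerCacheSizeGB 1.5
    deploy:
      resources:
        limits:
          memory: 2g
```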

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;p&gt;To improve our production database and solve the identified limitations, our clearest objectives at this point were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upgrading Mongo v3.4 and v3.6 instances to v4.2 (all community edition);&lt;/li&gt;
&lt;li&gt;Changing Mongo data backup strategy from &lt;strong&gt;mongodump/mongorestore&lt;/strong&gt; to Mongo Replication;&lt;/li&gt;
&lt;li&gt;Merging Mongo containers into a single container and Mongo Docker volumes into a single volume.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And to get to these objectives, we defined the following plan:&lt;/p&gt;

&lt;h3&gt;
  
  
  Plan topics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Prepare applications for Mongo connection string change;&lt;/li&gt;
&lt;li&gt;Assemble a cluster composed of 3 servers in different datacenters and regions;&lt;/li&gt;
&lt;li&gt;Generate and deploy keyfiles on the filesystems;&lt;/li&gt;
&lt;li&gt;Redeploy existing Mongo Docker containers with &lt;em&gt;replSet&lt;/em&gt; argument;&lt;/li&gt;
&lt;li&gt;Define network ports;&lt;/li&gt;
&lt;li&gt;Deploy 4 new Mongo containers scaling to 3 (4 x 3 = 12) on a Mongo cluster;&lt;/li&gt;
&lt;li&gt;Add new Mongo instances to the replica set to sync from old Mongo containers;&lt;/li&gt;
&lt;li&gt;Stop Mongo containers from application servers and remove them from the replica set;&lt;/li&gt;
&lt;li&gt;Migrate backups and change which server they read the data from;&lt;/li&gt;
&lt;li&gt;Merge data from 4 Mongo containers into one database;&lt;/li&gt;
&lt;li&gt;Unify backups.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;We will publish a second part of this tutorial soon, where we will go through each of these topics.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Some of the achieved results were:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Fault-tolerance&lt;/strong&gt;: Automatic and instant primary database switch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data redundancy&lt;/strong&gt;: Instantaneously synced redundant data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inter-regional availability&lt;/strong&gt;: Location disaster safeguarding.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster hierarchy&lt;/strong&gt;: Mongo replication allows nodes priority configuration, which allows the user to order nodes by hardware power, location, or other useful criteria.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read operations balance&lt;/strong&gt;: Read operations can be balanced through secondary nodes, like dashboards queries and mongodumps. Applications can also be configured (through Mongo connection URI) to perform read operations from secondary nodes, which increases database read capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Now that memory usage and caching match the system's needs, Mongo databases are hosted on dedicated servers, the Mongo version got bumped, and the cluster can balance read operations, performance improvements have exceeded expectations.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  New Production Environment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Production&lt;/strong&gt; application servers should connect to the Mongo Production Cluster using the replica set connection string;&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Mirror&lt;/strong&gt; application server should connect to the Mongo Production Cluster and keep storing the most recent &lt;strong&gt;mongodumps&lt;/strong&gt;;&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Mongo Cluster&lt;/strong&gt; secondary node should &lt;strong&gt;mongodump&lt;/strong&gt; the cluster data to the Mirror environment, requesting it from another secondary node.&lt;/li&gt;
&lt;/ul&gt;


&lt;center&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OyXtE5-l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.jscrambler.com/content/images/2020/04/jscrambler-blog-how-we-achieved-mongo-db-replication-after.png" alt="Mongo Environment After"&gt;&lt;/center&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This post is about more than MongoDB replication on Docker. It is about a victory: stopping the infrastructure from growing in the wrong direction and getting things done the way we thought they should be.&lt;/p&gt;

&lt;p&gt;Much like a tree outgrowing its pot, our infrastructure needed to be planted in a garden, where it can grow freely. Now, we will watch that tree scale without adding a new pot every time it needs to grow, and without fear of breaking one. That's what high availability clusters are all about — building an abstract module for the application layer which can scale and keep being used the same way.&lt;/p&gt;

&lt;p&gt;The whole process was done with intervals between major steps, to allow checking if the new strategy was working for us. We are very glad to have all the problems in the &lt;strong&gt;before&lt;/strong&gt; section solved.&lt;/p&gt;

&lt;p&gt;Achieving this means that we are now prepared to scale easily and sleep well knowing that MongoDB has (at least) database fault-tolerance and recovers by itself instantaneously — which lowers the odds of disaster scenarios.&lt;/p&gt;

&lt;p&gt;Stay tuned for part 2, where we’ll explore the whole technical setup.&lt;/p&gt;




&lt;p&gt;Meanwhile, you may like our post about &lt;a href="https://blog.jscrambler.com/the-data-processing-holy-grail-row-vs-columnar-databases/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=how-we-achieved-mongodb"&gt;Row vs. Columnar Databases&lt;/a&gt; or our full-stack tutorial on &lt;a href="https://blog.jscrambler.com/how-to-create-a-public-file-sharing-service-with-vue-js-and-node-js/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=how-we-achieved-mongodb"&gt;Creating a Public File Sharing Service with Vue.js and Node.js&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another resource that you may find useful is our free data sheet on &lt;a href="https://jscrambler.com/code-integrity/javascript-security-data-sheet?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=how-we-achieved-mongodb"&gt;JavaScript Security Threats&lt;/a&gt;, which provides an overview of the most relevant attacks to JavaScript apps and how to prevent them.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>docker</category>
      <category>devops</category>
      <category>data</category>
    </item>
  </channel>
</rss>
