<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marco Villarreal</title>
    <description>The latest articles on DEV Community by Marco Villarreal (@mvillarrealb).</description>
    <link>https://dev.to/mvillarrealb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F274421%2F67ae67b4-e4f2-45be-9a12-b45d2b1d880d.jpeg</url>
      <title>DEV Community: Marco Villarreal</title>
      <link>https://dev.to/mvillarrealb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mvillarrealb"/>
    <language>en</language>
    <item>
      <title>Creating a Spark Standalone Cluster with Docker and docker-compose(2021 update)</title>
      <dc:creator>Marco Villarreal</dc:creator>
      <pubDate>Sun, 27 Jun 2021 01:53:25 +0000</pubDate>
      <link>https://dev.to/mvillarrealb/creating-a-spark-standalone-cluster-with-docker-and-docker-compose-2021-update-6l4</link>
      <guid>https://dev.to/mvillarrealb/creating-a-spark-standalone-cluster-with-docker-and-docker-compose-2021-update-6l4</guid>
      <description>&lt;p&gt;Back in 2018 I wrote &lt;a href="https://medium.com/@marcovillarreal_40011/creating-a-spark-standalone-cluster-with-docker-and-docker-compose-ba9d743a157f" rel="noopener noreferrer"&gt;this article&lt;/a&gt; on how to create a spark cluster with  docker and docker-compose, ever since then my humble repo got 270+ stars, a lot of forks and activity from the community, however I abandoned the project by some time(Was kinda busy with a new job on 2019 and some more stuff to take care of), I've merged some pull quest once in a while, but never put many attention on upgrading versions. &lt;/p&gt;

&lt;p&gt;But today we are going to revisit this old fella with some updates and, hopefully, run some examples in Scala and Python (the 2018 version didn't support Python; thanks to the community for bringing PySpark to the project).&lt;/p&gt;

&lt;h1&gt;
  
  
  Requirements
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Docker (I am using version 20.10.7)&lt;/li&gt;
&lt;li&gt;docker-compose (I am using version 1.21.2)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mvillarrealb/docker-spark-cluster" rel="noopener noreferrer"&gt;This repo ;)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
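&lt;p&gt;As a quick sanity check you can verify the tooling and grab the repo from a terminal (the versions above are just what I used; anything reasonably recent should work):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker --version
docker-compose --version
git clone https://github.com/mvillarrealb/docker-spark-cluster.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;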

&lt;h1&gt;
  
  
  Project Structure
&lt;/h1&gt;

&lt;p&gt;We will use the following project structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;|
|--|apps &lt;span class="c"&gt;# Apps directory for volume mounts(any app you want to deploy just paste it here)&lt;/span&gt;
|--|data &lt;span class="c"&gt;# Data directory for volume mounts(any file you want to process just paste it here)&lt;/span&gt;
|--|Dockerfile &lt;span class="c"&gt;# Dockerfile used to build the spark image&lt;/span&gt;
|--|start-spark.sh &lt;span class="c"&gt;# startup script used to run different spark workloads&lt;/span&gt;
|--|docker-compose.yml &lt;span class="c"&gt;# the compose file&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Creating The Image
&lt;/h1&gt;

&lt;p&gt;In the 2018 version we used a base image plus a separate image for each Spark workload (one for the master, one for the worker, and one for spark-submit). In this new approach we use a Docker multi-stage build to create a single image that can be launched as any workload we want.&lt;/p&gt;

&lt;p&gt;Here's the dockerfile used to define our apache-spark image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;
&lt;span class="c"&gt;# builder step used to download and configure spark environment&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;openjdk:11.0.11-jre-slim-buster&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;

&lt;span class="c"&gt;# Add Dependencies for PySpark&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; curl vim wget software-properties-common ssh net-tools ca-certificates python3 python3-pip python3-numpy python3-matplotlib python3-scipy python3-pandas python3-simpy

&lt;span class="k"&gt;RUN &lt;/span&gt;update-alternatives &lt;span class="nt"&gt;--install&lt;/span&gt; &lt;span class="s2"&gt;"/usr/bin/python"&lt;/span&gt; &lt;span class="s2"&gt;"python"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;which python3&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 1

&lt;span class="c"&gt;# Fix the value of PYTHONHASHSEED&lt;/span&gt;
&lt;span class="c"&gt;# Note: this is needed when you use Python 3.3 or greater&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; SPARK_VERSION=3.0.2 \&lt;/span&gt;
HADOOP_VERSION=3.2 \
SPARK_HOME=/opt/spark \
PYTHONHASHSEED=1

&lt;span class="c"&gt;# Download and uncompress spark from the apache archive&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;wget &lt;span class="nt"&gt;--no-verbose&lt;/span&gt; &lt;span class="nt"&gt;-O&lt;/span&gt; apache-spark.tgz &lt;span class="s2"&gt;"https://archive.apache.org/dist/spark/spark-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SPARK_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/spark-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SPARK_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-bin-hadoop&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HADOOP_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.tgz"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /opt/spark &lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xf&lt;/span&gt; apache-spark.tgz &lt;span class="nt"&gt;-C&lt;/span&gt; /opt/spark &lt;span class="nt"&gt;--strip-components&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm &lt;/span&gt;apache-spark.tgz


&lt;span class="c"&gt;# Apache spark environment&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;apache-spark&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /opt/spark&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; SPARK_MASTER_PORT=7077 \&lt;/span&gt;
SPARK_MASTER_WEBUI_PORT=8080 \
SPARK_LOG_DIR=/opt/spark/logs \
SPARK_MASTER_LOG=/opt/spark/logs/spark-master.out \
SPARK_WORKER_LOG=/opt/spark/logs/spark-worker.out \
SPARK_WORKER_WEBUI_PORT=8080 \
SPARK_WORKER_PORT=7000 \
SPARK_MASTER="spark://spark-master:7077" \
SPARK_WORKLOAD="master"

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080 7077 6066&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_LOG_DIR&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="nb"&gt;touch&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_MASTER_LOG&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="nb"&gt;touch&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_WORKER_LOG&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; /dev/stdout &lt;span class="nv"&gt;$SPARK_MASTER_LOG&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; /dev/stdout &lt;span class="nv"&gt;$SPARK_WORKER_LOG&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; start-spark.sh /&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["/bin/bash", "/start-spark.sh"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the Dockerfile references a script called &lt;strong&gt;start-spark.sh&lt;/strong&gt;; its primary goal is to run the spark-class script with the given role (master or worker).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#start-spark.sh&lt;/span&gt;
&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="s2"&gt;"/opt/spark/bin/load-spark-env.sh"&lt;/span&gt;
&lt;span class="c"&gt;# When the spark work_load is master run class org.apache.spark.deploy.master.Master&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SPARK_WORKLOAD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"master"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;then

&lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SPARK_MASTER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;hostname&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;

&lt;span class="nb"&gt;cd&lt;/span&gt; /opt/spark/bin &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ./spark-class org.apache.spark.deploy.master.Master &lt;span class="nt"&gt;--ip&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_MASTER_HOST&lt;/span&gt; &lt;span class="nt"&gt;--port&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_MASTER_PORT&lt;/span&gt; &lt;span class="nt"&gt;--webui-port&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_MASTER_WEBUI_PORT&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_MASTER_LOG&lt;/span&gt;

&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SPARK_WORKLOAD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"worker"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;then&lt;/span&gt;
&lt;span class="c"&gt;# When the spark work_load is worker run class org.apache.spark.deploy.master.Worker&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; /opt/spark/bin &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ./spark-class org.apache.spark.deploy.worker.Worker &lt;span class="nt"&gt;--webui-port&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_WORKER_WEBUI_PORT&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_MASTER&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$SPARK_WORKER_LOG&lt;/span&gt;

&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SPARK_WORKLOAD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"submit"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"SPARK SUBMIT"&lt;/span&gt;
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Undefined Workload Type &lt;/span&gt;&lt;span class="nv"&gt;$SPARK_WORKLOAD&lt;/span&gt;&lt;span class="s2"&gt;, must specify: master, worker, submit"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build the image just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; cluster-apache-spark:3.0.2 &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a while the image will be built; how long it takes depends on how fast the dependencies and the Spark tarball are downloaded (fortunately these steps are cached as layers thanks to the multi-stage setup).&lt;/p&gt;
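&lt;p&gt;As a quick sanity check, you can list the image to confirm the build produced the expected tag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker image ls cluster-apache-spark
&lt;span class="c"&gt;# Should list the cluster-apache-spark:3.0.2 tag we just built&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;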

&lt;h1&gt;
  
  
  The Compose File
&lt;/h1&gt;

&lt;p&gt;Now that we have our apache-spark image, it's time to create a cluster with docker-compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.3"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;spark-master&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-apache-spark:3.0.2&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9090:8080"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;7077:7077"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./apps:/opt/spark-apps&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./data:/opt/spark-data&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_LOCAL_IP=spark-master&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_WORKLOAD=master&lt;/span&gt;
  &lt;span class="na"&gt;spark-worker-a&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-apache-spark:3.0.2&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9091:8080"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;7000:7000"&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;spark-master&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_MASTER=spark://spark-master:7077&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_WORKER_CORES=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_WORKER_MEMORY=1G&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_DRIVER_MEMORY=1G&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_EXECUTOR_MEMORY=1G&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_WORKLOAD=worker&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_LOCAL_IP=spark-worker-a&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./apps:/opt/spark-apps&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./data:/opt/spark-data&lt;/span&gt;
  &lt;span class="na"&gt;spark-worker-b&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-apache-spark:3.0.2&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9092:8080"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;7001:7000"&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;spark-master&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_MASTER=spark://spark-master:7077&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_WORKER_CORES=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_WORKER_MEMORY=1G&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_DRIVER_MEMORY=1G&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_EXECUTOR_MEMORY=1G&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_WORKLOAD=worker&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SPARK_LOCAL_IP=spark-worker-b&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./apps:/opt/spark-apps&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./data:/opt/spark-data&lt;/span&gt;
  &lt;span class="na"&gt;demo-database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:11.7-alpine&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5432:5432"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_PASSWORD=casa1234&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For both the Spark master and the workers we configured the following environment variables:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Environment&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SPARK_MASTER&lt;/td&gt;
&lt;td&gt;Spark master url&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPARK_WORKER_CORES&lt;/td&gt;
&lt;td&gt;Number of cpu cores allocated for the worker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPARK_WORKER_MEMORY&lt;/td&gt;
&lt;td&gt;Amount of ram allocated for the worker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPARK_DRIVER_MEMORY&lt;/td&gt;
&lt;td&gt;Amount of ram allocated for the driver programs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPARK_EXECUTOR_MEMORY&lt;/td&gt;
&lt;td&gt;Amount of ram allocated for the executor programs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPARK_WORKLOAD&lt;/td&gt;
&lt;td&gt;The spark workload to run(can be any of &lt;strong&gt;master&lt;/strong&gt;, &lt;strong&gt;worker&lt;/strong&gt;, &lt;strong&gt;submit&lt;/strong&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Compared to 2018 version the following changes were made:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Removed the custom network and static IP addresses&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run two workers instead of three, exposing each worker's web UI on sequential host ports (9091, 9092, and so on)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;PySpark support, thanks to community contributions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Included a PostgreSQL instance to run the demos (both demos store data via JDBC)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final step to create your test cluster is to run the compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To validate the cluster, open the Spark UI at the master URL and each worker URL.&lt;/p&gt;
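&lt;p&gt;Before opening the UIs, you can confirm all four containers are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose ps
&lt;span class="c"&gt;# Expect spark-master, spark-worker-a, spark-worker-b and demo-database in state Up&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;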

&lt;p&gt;Spark Master: &lt;a href="http://localhost:9090" rel="noopener noreferrer"&gt;http://localhost:9090&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgg3fmfhj3s2iyc5whwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgg3fmfhj3s2iyc5whwd.png" title="Spark Master UI" alt="Alt Text" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Spark Worker 1: &lt;a href="http://localhost:9091" rel="noopener noreferrer"&gt;http://localhost:9091&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7par1m5bugmnjv1hadfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7par1m5bugmnjv1hadfw.png" title="Spark worker 1" alt="Alt Text" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Spark Worker 2: &lt;a href="http://localhost:9092" rel="noopener noreferrer"&gt;http://localhost:9092&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkjk79ki3neiesh3m0p8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkjk79ki3neiesh3m0p8.png" title="Spark worker 2" alt="Alt Text" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Database Server&lt;/p&gt;

&lt;p&gt;To check the database server, use the psql command (or any database client of your choice):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;psql &lt;span class="nt"&gt;-U&lt;/span&gt; postgres &lt;span class="nt"&gt;-h&lt;/span&gt; 0.0.0.0 &lt;span class="nt"&gt;-p&lt;/span&gt; 5432
&lt;span class="c"&gt;#It will ask for your password defined in the compose file&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  The Demo Apps
&lt;/h1&gt;

&lt;p&gt;The following apps can be found in the apps directory; they serve as a proof of concept of the cluster's behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  NY Bus Stops Data [PySpark]
&lt;/h2&gt;

&lt;p&gt;This program loads archived data from &lt;a href="http://web.mta.info/developers/MTA-Bus-Time-historical-data.html" rel="noopener noreferrer"&gt;MTA Bus Time&lt;/a&gt; and applies basic filters using Spark SQL; the results are persisted into a PostgreSQL table.&lt;/p&gt;

&lt;p&gt;The loaded table will contain the following structure:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;latitude&lt;/th&gt;
&lt;th&gt;longitude&lt;/th&gt;
&lt;th&gt;time_received&lt;/th&gt;
&lt;th&gt;vehicle_id&lt;/th&gt;
&lt;th&gt;distance_along_trip&lt;/th&gt;
&lt;th&gt;inferred_direction_id&lt;/th&gt;
&lt;th&gt;inferred_phase&lt;/th&gt;
&lt;th&gt;inferred_route_id&lt;/th&gt;
&lt;th&gt;inferred_trip_id&lt;/th&gt;
&lt;th&gt;next_scheduled_stop_distance&lt;/th&gt;
&lt;th&gt;next_scheduled_stop_id&lt;/th&gt;
&lt;th&gt;report_hour&lt;/th&gt;
&lt;th&gt;report_date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;40.668602&lt;/td&gt;
&lt;td&gt;-73.986697&lt;/td&gt;
&lt;td&gt;2014-08-01 04:00:01&lt;/td&gt;
&lt;td&gt;469&lt;/td&gt;
&lt;td&gt;4135.34710710144&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;IN_PROGRESS&lt;/td&gt;
&lt;td&gt;MTA NYCT_B63&lt;/td&gt;
&lt;td&gt;MTA NYCT_JG_C4-Weekday-141500_B63_123&lt;/td&gt;
&lt;td&gt;2.63183804205619&lt;/td&gt;
&lt;td&gt;MTA_305423&lt;/td&gt;
&lt;td&gt;2014-08-01 04:00:00&lt;/td&gt;
&lt;td&gt;2014-08-01&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To submit the app, connect to one of the workers or the master and execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/opt/spark/bin/spark-submit &lt;span class="nt"&gt;--master&lt;/span&gt; spark://spark-master:7077 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--jars&lt;/span&gt; /opt/spark-apps/postgresql-42.2.22.jar &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--driver-memory&lt;/span&gt; 1G &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--executor-memory&lt;/span&gt; 1G &lt;span class="se"&gt;\&lt;/span&gt;
/opt/spark-apps/main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v8qjmooqj6b8161ivau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v8qjmooqj6b8161ivau.png" title="Spark UI with pyspark program running" alt="Alt Text" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  MTA Bus Analytics [Scala]
&lt;/h2&gt;

&lt;p&gt;This program takes the archived data from &lt;a href="http://web.mta.info/developers/MTA-Bus-Time-historical-data.html" rel="noopener noreferrer"&gt;MTA Bus Time&lt;/a&gt; and performs some aggregations on it; the calculated results are persisted to PostgreSQL tables.&lt;/p&gt;

&lt;p&gt;Each persisted table corresponds to a particular aggregation:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Table&lt;/th&gt;
&lt;th&gt;Aggregation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;day_summary&lt;/td&gt;
&lt;td&gt;A summary of vehicles reporting, stops visited, average speed and distance traveled(all vehicles)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;speed_excesses&lt;/td&gt;
&lt;td&gt;Speed excesses calculated in a 5 minute window&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;average_speed&lt;/td&gt;
&lt;td&gt;Average speed by vehicle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;distance_traveled&lt;/td&gt;
&lt;td&gt;Total Distance traveled by vehicle&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To submit the app, connect to one of the workers or the master and execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/opt/spark/bin/spark-submit &lt;span class="nt"&gt;--deploy-mode&lt;/span&gt; cluster &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--master&lt;/span&gt; spark://spark-master:7077 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--total-executor-cores&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--class&lt;/span&gt; mta.processing.MTAStatisticsApp &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--driver-memory&lt;/span&gt; 1G &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--executor-memory&lt;/span&gt; 1G &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--jars&lt;/span&gt; /opt/spark-apps/postgresql-42.2.22.jar &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--conf&lt;/span&gt; spark.driver.extraJavaOptions&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-Dconfig-path=/opt/spark-apps/mta.conf'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--conf&lt;/span&gt; spark.executor.extraJavaOptions&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-Dconfig-path=/opt/spark-apps/mta.conf'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
/opt/spark-apps/mta-processing.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will notice in the Spark UI both a driver program and an executor program running (in Scala we can use deploy-mode cluster)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5oavrm9y5okxj2cva86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5oavrm9y5okxj2cva86.png" title="Spark UI with scala program running" alt="Alt Text" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusions
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We've created a simpler version of a Spark cluster in docker-compose. Its main goal is to provide a local environment to test the distributed nature of your Spark apps without deploying to a production cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The generated image isn't designed to have a small footprint (it is about 1 GB).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This cluster is only needed when you want to run a Spark app in a distributed environment on your own machine (production use is discouraged; use a Databricks or Kubernetes setup instead).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What's left to do?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Right now, to run applications in deploy-mode cluster it is necessary to specify an arbitrary driver port through the &lt;strong&gt;spark.driver.port&lt;/strong&gt; configuration (I must fix some networking and port issues).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The spark-submit entry in start-spark.sh is unimplemented; the submits used in the demos can be triggered from any worker.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
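&lt;p&gt;As a hypothetical sketch of the driver-port workaround mentioned above, a cluster-mode submit could pin the port explicitly; the port value 7078 here is just an arbitrary free port, not something configured in the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/opt/spark/bin/spark-submit --deploy-mode cluster \
--master spark://spark-master:7077 \
--conf spark.driver.port=7078 \
--class mta.processing.MTAStatisticsApp \
/opt/spark-apps/mta-processing.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;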

</description>
      <category>docker</category>
      <category>spark</category>
      <category>bigdata</category>
    </item>
    <item>
      <title>Deploy a Serverless Api on Cloud Run with Github Actions</title>
      <dc:creator>Marco Villarreal</dc:creator>
      <pubDate>Mon, 21 Jun 2021 23:36:53 +0000</pubDate>
      <link>https://dev.to/mvillarrealb/deploy-a-serverless-api-on-cloud-run-with-github-actions-3ejo</link>
      <guid>https://dev.to/mvillarrealb/deploy-a-serverless-api-on-cloud-run-with-github-actions-3ejo</guid>
      <description>&lt;p&gt;Google cloud Run is a serverless runtime designed for containerized applications, it allows to run high availability services with few configurations.&lt;/p&gt;

&lt;p&gt;In this post we will deploy a Go Rest Api on Cloud Run using Github Actions as our CI/CD Tool.&lt;/p&gt;

&lt;h1&gt;
  
  
  Requirements
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Google Cloud CLI&lt;/li&gt;
&lt;li&gt;Terraform - I am using v1.0.0&lt;/li&gt;
&lt;li&gt;Docker - I am using 20.10.7&lt;/li&gt;
&lt;li&gt;Go - I am using go1.16.4&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mvillarrealb/poi-api" rel="noopener noreferrer"&gt;This Repo :)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  Configuring Google Cloud Project
&lt;/h1&gt;

&lt;p&gt;The following steps set up a GCP project with Cloud Run enabled and a Cloud SQL database instance prepared.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;PROJECT_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mvillarreal-demo-platform"&lt;/span&gt;
&lt;span class="c"&gt;# Create GCP project(You must enable a billing account in your project)&lt;/span&gt;
gcloud projects create &lt;span class="nv"&gt;$PROJECT_NAME&lt;/span&gt;

&lt;span class="c"&gt;# Set project as current running project&lt;/span&gt;
gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project &lt;span class="nv"&gt;$PROJECT_NAME&lt;/span&gt;

&lt;span class="c"&gt;# View the current project&lt;/span&gt;
gcloud config get-value project

&lt;span class="c"&gt;# Enable cloud Run api&lt;/span&gt;
gcloud services &lt;span class="nb"&gt;enable &lt;/span&gt;run.googleapis.com

&lt;span class="c"&gt;# Enable resource manager api&lt;/span&gt;
gcloud services &lt;span class="nb"&gt;enable &lt;/span&gt;cloudresourcemanager.googleapis.com

&lt;span class="c"&gt;# Enable vpc access api&lt;/span&gt;
gcloud services &lt;span class="nb"&gt;enable &lt;/span&gt;vpcaccess.googleapis.com

&lt;span class="c"&gt;# Enable compute engine(for serverless vpc access)&lt;/span&gt;
gcloud services &lt;span class="nb"&gt;enable &lt;/span&gt;compute.googleapis.com

&lt;span class="c"&gt;# Enable container Registry&lt;/span&gt;
gcloud services &lt;span class="nb"&gt;enable &lt;/span&gt;containerregistry.googleapis.com

&lt;span class="c"&gt;# Enable Cloud SQL services&lt;/span&gt;
gcloud services &lt;span class="nb"&gt;enable &lt;/span&gt;sqladmin.googleapis.com

&lt;span class="c"&gt;# Enable networking services&lt;/span&gt;
gcloud services &lt;span class="nb"&gt;enable &lt;/span&gt;servicenetworking.googleapis.com

&lt;span class="c"&gt;# Service account for Github actions&lt;/span&gt;
gcloud iam service-accounts create mvillarrealb-gha-saccount &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"Main service account for github actions"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--display-name&lt;/span&gt; &lt;span class="s2"&gt;"mvillarreal-gha-saccount"&lt;/span&gt;


&lt;span class="c"&gt;# Assign editor role for service account(for terraform)&lt;/span&gt;
gcloud projects add-iam-policy-binding &lt;span class="nv"&gt;$PROJECT_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--member&lt;/span&gt; serviceAccount:mvillarrealb-gha-saccount@&lt;span class="nv"&gt;$PROJECT_NAME&lt;/span&gt;.iam.gserviceaccount.com &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--role&lt;/span&gt; roles/editor 

&lt;span class="c"&gt;# Adding networking admin permission&lt;/span&gt;
gcloud projects add-iam-policy-binding &lt;span class="nv"&gt;$PROJECT_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--member&lt;/span&gt; serviceAccount:mvillarrealb-gha-saccount@&lt;span class="nv"&gt;$PROJECT_NAME&lt;/span&gt;.iam.gserviceaccount.com &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--role&lt;/span&gt; roles/servicenetworking.networksAdmin

&lt;span class="c"&gt;# Export service account key for terraform(keep this in a safe place)&lt;/span&gt;
gcloud iam service-accounts keys create &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/terraform/service-account-key.json &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--iam-account&lt;/span&gt; mvillarrealb-gha-saccount@&lt;span class="nv"&gt;$PROJECT_NAME&lt;/span&gt;.iam.gserviceaccount.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Setup Terraform
&lt;/h1&gt;

&lt;p&gt;Our Cloud Run service uses a &lt;strong&gt;database&lt;/strong&gt; and is deployed inside a &lt;strong&gt;private network&lt;/strong&gt;. To create these additional resources we will use Terraform; execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Initialize terraform dependencies&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;terraform &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; terraform init

&lt;span class="c"&gt;# Preview terraform plan&lt;/span&gt;
terraform plan

&lt;span class="c"&gt;# Apply Terraform(it will take about 10 minutes, cloud sql instance take some time)&lt;/span&gt;
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Post Terraform Tasks
&lt;/h1&gt;

&lt;p&gt;After provisioning, some additional steps are required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create the &lt;strong&gt;poi_manager&lt;/strong&gt; database in your Cloud SQL instance; you can do this directly in the Google Cloud Console or via the Cloud SQL proxy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate a new password for the postgres user and store it somewhere safe (we will need it later).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
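&lt;p&gt;Both steps can also be scripted with gcloud; a minimal sketch, assuming the Cloud SQL instance created by Terraform is named &lt;strong&gt;mvillarreal-pg-sql&lt;/strong&gt; (as in the pipeline env section later) and using a placeholder password:&lt;/p&gt;

```shell
# Create the poi_manager database on the Cloud SQL instance
gcloud sql databases create poi_manager --instance mvillarreal-pg-sql

# Set a new password for the postgres user (placeholder value, keep the real one safe)
gcloud sql users set-password postgres \
  --instance mvillarreal-pg-sql \
  --password "change-me"
```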




&lt;h1&gt;
  
  
  Prepare Dockerfile
&lt;/h1&gt;

&lt;p&gt;To create a containerized version of our API, we will use a Docker multi-stage build and take advantage of Go's static compilation to create a lightweight image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;
&lt;span class="c"&gt;#Build step&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;golang:1.15&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /poi-api/api
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /poi-api&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; api ./api&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; go.mod go.sum main.go ./&lt;/span&gt;
&lt;span class="c"&gt;#static compilation options for go&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;go build &lt;span class="nt"&gt;-ldflags&lt;/span&gt; &lt;span class="s2"&gt;"-linkmode external -extldflags -static"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; main .

&lt;span class="c"&gt;#Run step&lt;/span&gt;
&lt;span class="c"&gt;#Scratch image is an empty image to add our binary, so the image will be as small as possible&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; scratch&lt;/span&gt;
&lt;span class="c"&gt;#Environments for dataase connection&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DATABASE_HOST="127.0.0.1" \&lt;/span&gt;
DATABASE_PORT="5432" \
DATABASE_USERNAME="postgres" \
DATABASE_PASSWORD="password"
&lt;span class="c"&gt;#Copy binary from builder&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /poi-api/main ./main&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["./main"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
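&lt;p&gt;Before wiring up the pipeline you can sanity check the image locally; a sketch, assuming the API listens on Cloud Run's default port 8080 and that you have a Postgres instance reachable from the container:&lt;/p&gt;

```shell
# Build the multi-stage image
docker build -t poi-api:local .

# Run it locally, overriding the database environment variables baked into the image
docker run --rm -p 8080:8080 \
  -e DATABASE_HOST=host.docker.internal \
  -e DATABASE_PORT=5432 \
  -e DATABASE_USERNAME=postgres \
  -e DATABASE_PASSWORD=password \
  poi-api:local
```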






&lt;h1&gt;
  
  
  Cloud Run Deploy with Github Actions
&lt;/h1&gt;

&lt;p&gt;Finally, to deploy the API to Cloud Run we will create a pipeline using GitHub Actions; our pipeline structure is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;poi-api&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;
&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east1&lt;/span&gt; &lt;span class="c1"&gt;# Cloud Run zone&lt;/span&gt;
  &lt;span class="na"&gt;PROJECT_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mvillarreal-demo-platform&lt;/span&gt; &lt;span class="c1"&gt;# GCP project&lt;/span&gt;
  &lt;span class="na"&gt;BASE_IMAGE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/mvillarreal-demo-platform/poi-api&lt;/span&gt; &lt;span class="c1"&gt;#Container registry entry for the api&lt;/span&gt;
  &lt;span class="na"&gt;DATABASE_INSTANCE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mvillarreal-pg-sql&lt;/span&gt; &lt;span class="c1"&gt;# Cloud sql instance name&lt;/span&gt;
  &lt;span class="na"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;poi-api&lt;/span&gt; &lt;span class="c1"&gt;#Cloud run service name&lt;/span&gt;
  &lt;span class="na"&gt;DATABASE_IP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.85.0.3&lt;/span&gt; &lt;span class="c1"&gt;# My database private IP address&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Definition for Build Job&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Definition for deploy Job&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Definition for Test Job&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we must define each job:&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Job
&lt;/h2&gt;

&lt;p&gt;In the build stage we use our Dockerfile to build the image and push it to GCR.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup Project&lt;/span&gt; &lt;span class="c1"&gt;# Setup&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@master&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to GCR&lt;/span&gt; &lt;span class="c1"&gt;# Login to GCP&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_json_key&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GCR_JSON_KEY }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build &amp;amp; Publish Image&lt;/span&gt; &lt;span class="c1"&gt;# Use the dockerfile to publish image&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v2&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
          &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.BASE_IMAGE }}:${{ github.sha }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy Job
&lt;/h2&gt;

&lt;p&gt;The deploy stage uses the deploy-cloudrun action to deploy the image as a Cloud Run service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to Cloud Run&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;google-github-actions/deploy-cloudrun@main&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.REGION }}&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.SERVICE_NAME }}&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.BASE_IMAGE }}:${{ github.sha }}&lt;/span&gt;
          &lt;span class="na"&gt;credentials&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GCP_SA_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;env_vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DATABASE_HOST=${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;env.DATABASE_IP&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}},DATABASE_USERNAME=${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;secrets.DATABASE_USERNAME&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}},DATABASE_PASSWORD=${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;secrets.DATABASE_PASSWORD&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;flags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--allow-unauthenticated&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--vpc-connector&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vpc-conn&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--add-cloudsql-instances&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;env.PROJECT_ID&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:${{env.REGION}}:${{env.DATABASE_INSTANCE}}'"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's take a closer look at some of the settings we've added here:&lt;/p&gt;

&lt;h3&gt;
  
  
  env_vars
&lt;/h3&gt;

&lt;p&gt;Environment variables set on the Cloud Run application, in the format env=value,env2=value2. The variables assigned are:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DATABASE_HOST&lt;/td&gt;
&lt;td&gt;Database Host, in this case the private ip specified in the env section&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DATABASE_USERNAME&lt;/td&gt;
&lt;td&gt;Database Username loaded in the secrets of our repo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DATABASE_PASSWORD&lt;/td&gt;
&lt;td&gt;Database Password loaded in the secrets of our repo&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
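&lt;p&gt;In other words, the flag value is just comma-joined KEY=value pairs; built by hand in a shell it would look like this (the IP matches the env section above, the username is illustrative):&lt;/p&gt;

```shell
# Compose the comma-separated env_vars value from individual settings
DATABASE_HOST="10.85.0.3"
DATABASE_USERNAME="postgres"
ENV_VARS="DATABASE_HOST=${DATABASE_HOST},DATABASE_USERNAME=${DATABASE_USERNAME}"
echo "$ENV_VARS"
# DATABASE_HOST=10.85.0.3,DATABASE_USERNAME=postgres
```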

&lt;h3&gt;
  
  
  flags
&lt;/h3&gt;

&lt;p&gt;Cloud Run-specific settings used to configure the API:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setting&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;--allow-unauthenticated&lt;/td&gt;
&lt;td&gt;Allow requests from unauthenticated users&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;--vpc-connector vpc-conn&lt;/td&gt;
&lt;td&gt;Specify a Serverless VPC connector; &lt;strong&gt;vpc-conn&lt;/strong&gt; is the connector we created in the Terraform file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;--add-cloudsql-instances 'env.PROJECT_ID:env.REGION:env.DATABASE_INSTANCE'&lt;/td&gt;
&lt;td&gt;Links the cloud sql instance to the api to be able to use it&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Test Job
&lt;/h2&gt;

&lt;p&gt;Last but not least, we will run a Postman collection to test our deployed service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Generate Variable File&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;echo {\"BASE_URL\": \"{{ needs.deploy.outputs.url }}\"} &amp;gt; variables.json&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run e2e Test&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;matt-ball/newman-action@master&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;collection&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;poi-e2e.postman_collection.json&lt;/span&gt;
          &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;variables.json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
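&lt;p&gt;The same collection can be run locally with newman before relying on the pipeline; a sketch, where the service URL is a placeholder for the one Cloud Run assigns to your deployment:&lt;/p&gt;

```shell
# Point the collection at the deployed service (placeholder URL)
echo '{"BASE_URL": "https://poi-api-xxxxx-ue.a.run.app"}' > variables.json

# Run the e2e collection against it
newman run poi-e2e.postman_collection.json --environment variables.json
```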



&lt;p&gt;Before pushing your changes, we must configure some secrets under Settings &amp;gt; Secrets in our repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jf2svztdmhivnvklpjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jf2svztdmhivnvklpjt.png" title="Secrets required to run the pipeline" alt="Alt Text" width="529" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The secrets we've added are the following:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Secret&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DATABASE_USERNAME&lt;/td&gt;
&lt;td&gt;Cloud SQL database username&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DATABASE_PASSWORD&lt;/td&gt;
&lt;td&gt;Cloud SQL database password (we acquired this in a previous step)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP_SA_KEY&lt;/td&gt;
&lt;td&gt;Base64-encoded service account key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCR_JSON_KEY&lt;/td&gt;
&lt;td&gt;JSON-format service account key&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;After a push to the master branch, a job is triggered that deploys our service to Cloud Run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp7dx160xnpgyl1gb93h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp7dx160xnpgyl1gb93h.png" title="Pipeline successful run" alt="Alt Text" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Post Installation Steps
&lt;/h1&gt;

&lt;p&gt;If you want to use the geocoding endpoint and reference the points of interest, you can load the SQL file into GCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a bucket to load initial data&lt;/span&gt;
gsutil mb gs://h3-indexes

&lt;span class="c"&gt;# Upload files&lt;/span&gt;
gsutil &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/data/&lt;span class="k"&gt;*&lt;/span&gt;.sql gs://h3-indexes

&lt;span class="c"&gt;# Load data(Remember to add ACL permissions to the h3-indexes directory)&lt;/span&gt;
gcloud sql instances import mvillarreal-pg-sql gs://h3-indexes/PE-Lima.sql &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--database&lt;/span&gt; poi_manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  What have we done so far?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set up a Google Cloud project&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provision basic services with Terraform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dockerize a Go REST API using multi-stage builds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use GitHub Actions to enable a CI/CD pipeline&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What's Left to do?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Add a custom domain for your Cloud Run API&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disable public access for your API&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add some unit tests to the codebase&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusions
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cloud Run can be a good option if you are starting a serverless-based architecture, or an intermediate step if you are considering Kubernetes in the long term (both solutions are container-based).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Actions is a robust CI/CD tool with a strong ecosystem of community actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With Go you can reduce the footprint of your end product thanks to static compilation :)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.google.com/sql/docs/mysql/connect-run" rel="noopener noreferrer"&gt;Connecting from Cloud Run to Cloud SQL&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/google-github-actions/deploy-cloudrun" rel="noopener noreferrer"&gt;google-github-actions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://devopsdirective.com/posts/2021/04/tiny-container-image/" rel="noopener noreferrer"&gt;Tiny Container Images&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.google.com/vpc/docs/configure-serverless-vpc-access" rel="noopener noreferrer"&gt;ConfiguringServerless VPC Access&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>googlecloud</category>
      <category>go</category>
      <category>docker</category>
    </item>
    <item>
      <title>E2E Sandbox with Test-containers</title>
      <dc:creator>Marco Villarreal</dc:creator>
      <pubDate>Thu, 10 Jun 2021 21:57:29 +0000</pubDate>
      <link>https://dev.to/mvillarrealb/e2e-sandbox-with-test-containers-1b01</link>
      <guid>https://dev.to/mvillarrealb/e2e-sandbox-with-test-containers-1b01</guid>
      <description>&lt;p&gt;End to End tests can be cumbersome the mayority of the times, we need to wire up a lot of services (cache servers, database engines, message brokers etc) resulting in a big ball of stuff to do in order to run a simple test.&lt;/p&gt;

&lt;p&gt;These tests are the most expensive ones in the &lt;a href="https://martinfowler.com/articles/practical-test-pyramid.html#:~:text=The%20%22Test%20Pyramid%22%20is%20a,put%20it%20into%20practice%20properly." rel="noopener noreferrer"&gt;test pyramid&lt;/a&gt;, but are the ones closer to a real production scenario. &lt;/p&gt;

&lt;p&gt;While unit tests ensures our business logic is ok, e2e tests ensures that our platform will perform accordingly with no surprises.&lt;/p&gt;

&lt;p&gt;In this article we are going to put hands up with testcontainers to create an e2e sandbox and run some &lt;strong&gt;postman assertions&lt;/strong&gt; through &lt;strong&gt;newman&lt;/strong&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is TestContainers?
&lt;/h1&gt;

&lt;p&gt;Testcontainers is a Java library for running containers in JUnit tests; it provides a lightweight container environment for infrastructure services so you can test real-world scenarios (database connections, cache servers, etc.).&lt;/p&gt;

&lt;p&gt;While the intended use of Testcontainers is to run containers in the context of a JUnit test, we will use it to create a sandbox environment outside the JUnit environment.&lt;/p&gt;

&lt;h1&gt;
  
  
  Tools Needed
&lt;/h1&gt;

&lt;p&gt;The following tools are needed to run this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java (I am using openjdk 11.0.11 2021-04-20)&lt;/li&gt;
&lt;li&gt;Docker (I am using Docker version 20.10.7, build f0df350)&lt;/li&gt;
&lt;li&gt;Postman collection (attached in the repo)&lt;/li&gt;
&lt;li&gt;Newman (I am using 5.2.3) + newman-html-reporter&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mvillarrealb/liquibase-demo/tree/master/book-demo" rel="noopener noreferrer"&gt;This repo :D&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Our Service
&lt;/h1&gt;

&lt;p&gt;Our service is a continuation of &lt;a href="https://dev.to/mvillarrealb/database-migrations-for-micronaut-spring-with-liquibase-539a"&gt;my previous post on database migrations&lt;/a&gt;. TL;DR: we are working on a really simple book registry API.&lt;/p&gt;

&lt;p&gt;However, as you can see, several infrastructure services are needed to provide a fully featured service:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F090ws4kthj89p1dli0zl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F090ws4kthj89p1dli0zl.jpg" title="Book Service Architecture" alt="Alt Text" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The services we need to provide in the sandbox are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Google Pub/sub&lt;/strong&gt;: To publish Events regarding the Book registry&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Google Cloud Storage&lt;/strong&gt;: To upload book covers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PostgreSQL&lt;/strong&gt;: To store books and author data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h1&gt;
  
  
  Creating The Spring Profile
&lt;/h1&gt;

&lt;p&gt;To create the sandbox environment we will take advantage of &lt;strong&gt;Spring profiles&lt;/strong&gt; and &lt;strong&gt;conditional beans&lt;/strong&gt;; to achieve that, the following steps are required:&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure testcontainers dependencies
&lt;/h2&gt;

&lt;p&gt;We will add the Testcontainers GCP module and, of course, Testcontainers for PostgreSQL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;    &lt;span class="n"&gt;implementation&lt;/span&gt; &lt;span class="s2"&gt;"org.testcontainers:gcloud:1.15.3"&lt;/span&gt;
    &lt;span class="n"&gt;runtimeOnly&lt;/span&gt; &lt;span class="s1"&gt;'org.testcontainers:postgresql'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create Profile application-test.yaml
&lt;/h2&gt;

&lt;p&gt;Apart from our application-default.yaml, we will add an additional YAML file for our test profile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+--resources
|   +--db
|   +--application-default.yaml
|   +--application-test.yaml #Sandbox configuration file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The content of application-test.yaml looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;profiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt; &lt;span class="c1"&gt;# Define the applied profile&lt;/span&gt;
&lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;root&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WARN&lt;/span&gt;
    &lt;span class="na"&gt;org.springframework.web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WARN&lt;/span&gt;
    &lt;span class="na"&gt;org.mvillabe.books&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INFO&lt;/span&gt;
    &lt;span class="na"&gt;org.hibernate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WARN&lt;/span&gt;
&lt;span class="na"&gt;spring&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;liquibase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;change-log&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;classpath:/db/changelog.yaml"&lt;/span&gt;
  &lt;span class="na"&gt;datasource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;jdbc:tc:postgresql:11.7-alpine:///book_demo"&lt;/span&gt; &lt;span class="c1"&gt;#Testcontainers jdbc URI format&lt;/span&gt;
    &lt;span class="na"&gt;driverClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;org.testcontainers.jdbc.ContainerDatabaseDriver"&lt;/span&gt; &lt;span class="c1"&gt;#Tescontainers jdbc driver to hook automatically&lt;/span&gt;
  &lt;span class="na"&gt;cloud&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;gcp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;project-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-containers"&lt;/span&gt;
      &lt;span class="na"&gt;credentials&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Dummy encoded key&lt;/span&gt;
        &lt;span class="na"&gt;encoded-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${BASE64_ENCODED_DUMMY_SVC_ACCOUNT}"&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
        &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="c1"&gt;# Disable default bean wiring for storage&lt;/span&gt;
      &lt;span class="na"&gt;pubsub&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;emulator-host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${EMULATOR_HOST:127.0.0.1:8085}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring Sandbox beans
&lt;/h2&gt;

&lt;p&gt;To configure the sandbox beans we will use the following class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Profile&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"test"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;//This configuration bean will only be available on test profile&lt;/span&gt;
&lt;span class="nd"&gt;@Configuration&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BeanConfiguration&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="cm"&gt;/*Your bean definition*/&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each service configuration must be added to the &lt;strong&gt;BeanConfiguration&lt;/strong&gt; class:&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Postgresql
&lt;/h3&gt;

&lt;p&gt;PostgreSQL is automatically wired through the Testcontainers JDBC driver (org.testcontainers.jdbc.ContainerDatabaseDriver), so no extra bean is required. Sweet!&lt;/p&gt;
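&lt;p&gt;Under the hood, any JDBC URL that starts with &lt;strong&gt;jdbc:tc:&lt;/strong&gt; tells this driver to spin up a throwaway database container on first connection. The URL can also carry extra options; for example (the init script path here is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# image tag and database name are embedded in the URL
jdbc:tc:postgresql:11.7-alpine:///book_demo

# optionally run a classpath init script when the container starts
jdbc:tc:postgresql:11.7-alpine:///book_demo?TC_INITSCRIPT=db/init.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;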

&lt;h3&gt;
  
  
  Configuring Cloud Storage
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Disable the default Spring Cloud wiring with the property &lt;strong&gt;spring.cloud.gcp.storage.enabled = false&lt;/strong&gt; &amp;lt;- we did this in our application-test.yaml&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define a &lt;strong&gt;StorageEmulator&lt;/strong&gt; container:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;StorageEmulator&lt;/span&gt; &lt;span class="nf"&gt;storageEmulator&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;StorageEmulator&lt;/span&gt; &lt;span class="n"&gt;storageEmulator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StorageEmulator&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="n"&gt;storageEmulator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;storageEmulator&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Manually register &lt;strong&gt;Storage&lt;/strong&gt; Bean:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="cm"&gt;/**
    As you can see, the Storage bean depends on the previously created StorageEmulator bean: the emulator
    must be started first so it can provide the endpoint that the Cloud Storage client will point at
    */&lt;/span&gt;
    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="nd"&gt;@Primary&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;Storage&lt;/span&gt; &lt;span class="nf"&gt;storage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;StorageEmulator&lt;/span&gt; &lt;span class="n"&gt;emulator&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;GcpProjectIdProvider&lt;/span&gt; &lt;span class="n"&gt;gcpProjectIdProvider&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;StorageOptions&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;newBuilder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setProjectId&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gcpProjectIdProvider&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getProjectId&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setHost&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;emulator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getEmulatorEndpoint&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getService&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configuring Pub/Sub
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Define a &lt;strong&gt;PubSubEmulatorContainer&lt;/strong&gt; container:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@Bean&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"pubSubEmulatorContainer"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;PubSubEmulatorContainer&lt;/span&gt; &lt;span class="nf"&gt;pubSubEmulatorContainer&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;PubSubEmulatorContainer&lt;/span&gt; &lt;span class="n"&gt;container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PubSubEmulatorContainer&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="nc"&gt;DockerImageName&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;parse&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"gcr.io/google.com/cloudsdktool/cloud-sdk:316.0.0-emulators"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Override the &lt;strong&gt;GcpPubSubProperties&lt;/strong&gt; bean (a Spring Cloud &lt;strong&gt;AutoConfiguration&lt;/strong&gt; bean) to reload the &lt;strong&gt;EMULATOR_HOST&lt;/strong&gt; environment variable with the emulator's host:port
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="cm"&gt;/*
    * As we did with Storage, Pub/Sub must be wired up with the emulator to provide the container host:port;
    * in this case we are overriding the auto-configuration bean that is initialized before PubSubTemplate
    */&lt;/span&gt;
    &lt;span class="nd"&gt;@Primary&lt;/span&gt;&lt;span class="cm"&gt;/*Override default bean*/&lt;/span&gt;
    &lt;span class="nd"&gt;@ConfigurationProperties&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"spring.cloud.gcp.pubsub"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="cm"&gt;/*Load pubsub properties*/&lt;/span&gt;
    &lt;span class="nd"&gt;@Autowired&lt;/span&gt;
    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;GcpPubSubProperties&lt;/span&gt; &lt;span class="nf"&gt;configurationProperties&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;PubSubEmulatorContainer&lt;/span&gt; &lt;span class="n"&gt;pubSubEmulatorContainer&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setProperty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"EMULATOR_HOST"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pubSubEmulatorContainer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getEmulatorEndpoint&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;GcpPubSubProperties&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With all these settings in place we are ready to run our application via the command line or IntelliJ IDEA (just remember to add the environment variable SPRING_PROFILES_ACTIVE=test).&lt;/p&gt;

&lt;h1&gt;
  
  
  Running the application
&lt;/h1&gt;

&lt;p&gt;Run your application with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SPRING_PROFILES_ACTIVE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ./gradlew bootRun
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything started OK, you will see some basic logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3lyjv815hgtk9tw882f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3lyjv815hgtk9tw882f.png" title="Springboot application running up" alt="Alt Text" width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you check your docker containers you will notice some activity:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tj5u99f25302cvknzon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tj5u99f25302cvknzon.png" title="Containers created by the testcontainers runtime" alt="Alt Text" width="800" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each individual container represents one of your app's services; Testcontainers will also create an additional housekeeping container (its Ryuk resource reaper) that cleans everything up when your application exits.&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing with Postman/Newman
&lt;/h1&gt;

&lt;p&gt;The whole purpose of all this configuration is to run our quality assurance team's Postman collection with Newman, so let's do so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;newman run books-e2e.postman_collection.json &lt;span class="nt"&gt;-r&lt;/span&gt; html

&lt;span class="c"&gt;#if you have a custom html reporter&lt;/span&gt;
newman run books-e2e.postman_collection.json &lt;span class="nt"&gt;-r&lt;/span&gt; html &lt;span class="nt"&gt;--reporter-html-template&lt;/span&gt; htmlreqres.hbs 

&lt;span class="c"&gt;# To export test results in junit format(Good for CI/CD tools like Azure Devops)&lt;/span&gt;
newman run books-e2e.postman_collection.json &lt;span class="nt"&gt;-r&lt;/span&gt; junit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's it, we successfully ran an E2E test on a sandbox environment.&lt;/p&gt;

&lt;h1&gt;
  
  
  Caveats and Conclusions
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The idea of a sandbox environment is to run things in as isolated and reproducible a way as possible, to provide better feedback on how they behave on the intended infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A sandbox is also nice for developers who do not want to keep a lot of containers running all the time just to test a microservice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are going to use a sandbox environment for your microservices architecture, consider isolating each dependency's bean configuration boilerplate (of course adapted to your stack :D).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why not use JUnit instead?&lt;/strong&gt; As I said before, Testcontainers was designed to run smoothly in JUnit environments; that is its natural habitat. However, when working on a team with Quality Assurance professionals, they are more familiar with tools like Postman, Newman, Gatling, etc., and are not interested in writing Java code to assert basic E2E behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wiring up a successful sandbox can be difficult and a little bit tricky; in my case I found overriding some beans really challenging (you must know what you are touching and why, and of course ensure your production code doesn't get polluted by these beans/configurations).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a closing note, I encourage you to play with Testcontainers outside JUnit with your own stack and discover what works for you.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>microservices</category>
      <category>java</category>
      <category>docker</category>
    </item>
    <item>
      <title>Database Migrations for Micronaut/Spring With Liquibase</title>
      <dc:creator>Marco Villarreal</dc:creator>
      <pubDate>Sun, 30 May 2021 22:44:55 +0000</pubDate>
      <link>https://dev.to/mvillarrealb/database-migrations-for-micronaut-spring-with-liquibase-539a</link>
      <guid>https://dev.to/mvillarrealb/database-migrations-for-micronaut-spring-with-liquibase-539a</guid>
      <description>&lt;p&gt;Wheter you are starting on a new project or mantaining an existing codebase, mantaining database changes as controlled and reproducible as possible is a &lt;strong&gt;must&lt;/strong&gt;, there are a varieity of tools for achieving this goal but today we going to focus on &lt;a href="https://www.liquibase.org/" rel="noopener noreferrer"&gt;liquibase&lt;/a&gt;, an opensource(and paid) database migration/versioning tool.&lt;/p&gt;

&lt;h1&gt;
  
  
  Database Migration 101
&lt;/h1&gt;

&lt;p&gt;A database migration (or schema migration) is a software engineering technique based on structured, incremental version control of a database schema. It can be compared to a git repository, where you perform incremental commits adding new functionality to a codebase.&lt;/p&gt;

&lt;p&gt;With that in mind, we ensure that every version (or commit) of our database can be traced and reproduced in different environments (local development, UAT, testing sandbox).&lt;/p&gt;
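&lt;p&gt;In Liquibase's case this traceability comes from a bookkeeping table (DATABASECHANGELOG) that it creates in the target schema, so you can inspect which versions have been applied directly with SQL (column list abbreviated):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Which changeSets have been applied, and in which order
SELECT id, author, dateexecuted
FROM databasechangelog
ORDER BY orderexecuted;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;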

&lt;h1&gt;
  
  
  Liquibase In a Nutshell
&lt;/h1&gt;

&lt;p&gt;In Liquibase, database migrations are structured in a &lt;strong&gt;changelog&lt;/strong&gt;, a file that tracks every version of your database. Usually a changelog contains relevant information about every version, such as: version number, comments, author and, of course, the changes themselves.&lt;/p&gt;

&lt;p&gt;Liquibase supports a variety of formats for your changelog (SQL, XML, properties and YAML files). In this case, since we are using Spring and Micronaut, we are going to write our changelog as YAML files.&lt;/p&gt;

&lt;p&gt;The following example is a YAML-based changelog referencing SQL files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;databaseChangeLog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;changeSet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v_1_0_0&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Marco&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Villarreal"&lt;/span&gt;
      &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;A&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;comment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;version"&lt;/span&gt;
      &lt;span class="na"&gt;sqlFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;encoding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;utf8&lt;/span&gt;
        &lt;span class="na"&gt;relativeToChangelogFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;stripComments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v_1_0_0/main-changelog.sql"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On closer inspection we can identify the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;comment&lt;/strong&gt;: Description for your database version&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;author&lt;/strong&gt;: Author of the current version&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;id&lt;/strong&gt;: The id of the version; you are totally free to use whatever format you like&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sqlFile&lt;/strong&gt;: An object with the sql file configuration

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;encoding&lt;/strong&gt;: Encoding of the sql file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;relativeToChangelogFile&lt;/strong&gt;: Determines if the sql file path is relative to the changelog path&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;stripComments&lt;/strong&gt;: Remove any comments from the sql file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;path&lt;/strong&gt;: The path for the sql file itself&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;With this in mind we can use the following directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+--db
|   +--changelog.yaml
|   +--v_1_0_0
|   |   +--main-changelog.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And, of course, we need an initial database version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;--main-changelog.sql&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;author_id&lt;/span&gt; &lt;span class="n"&gt;BIGSERIAL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;author_name&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;created_at&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;WITHOUT&lt;/span&gt; &lt;span class="nb"&gt;TIME&lt;/span&gt; &lt;span class="k"&gt;ZONE&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="n"&gt;updated_at&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;WITHOUT&lt;/span&gt; &lt;span class="nb"&gt;TIME&lt;/span&gt; &lt;span class="k"&gt;ZONE&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;author_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;book&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;book_id&lt;/span&gt; &lt;span class="n"&gt;BIGSERIAL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;author_id&lt;/span&gt; &lt;span class="nb"&gt;BIGINT&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;book_isbn&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;book_name&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;created_at&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;WITHOUT&lt;/span&gt; &lt;span class="nb"&gt;TIME&lt;/span&gt; &lt;span class="k"&gt;ZONE&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="n"&gt;updated_at&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;WITHOUT&lt;/span&gt; &lt;span class="nb"&gt;TIME&lt;/span&gt; &lt;span class="k"&gt;ZONE&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;NOW&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;book_id&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="k"&gt;FOREIGN&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;author_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;REFERENCES&lt;/span&gt; &lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;author_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="k"&gt;CASCADE&lt;/span&gt;
  &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;DELETE&lt;/span&gt; &lt;span class="k"&gt;RESTRICT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;CONSTRAINT&lt;/span&gt; &lt;span class="n"&gt;unq_book_isbn&lt;/span&gt; &lt;span class="k"&gt;UNIQUE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;book_isbn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: We are not using a CREATE DATABASE statement; having an empty database already created is a prerequisite for executing a changelog.&lt;/p&gt;
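&lt;p&gt;For a local run, that empty database therefore has to exist before Liquibase connects; a one-off statement against the server does the trick (the database name here is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Run once against the PostgreSQL server, outside the changelog
CREATE DATABASE book_demo;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;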

&lt;h1&gt;
  
  
  Creating a New Database Version
&lt;/h1&gt;

&lt;p&gt;To create a new database version the following steps are required:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a new directory for your version
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+--db
|   +--changelog.yaml
|   +--v_1_0_0
|   |   +--main-changelog.sql
|   +--v_1_0_1 # new Directory
|   |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Create a sql file for your version
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+--db
|   +--changelog.yaml
|   +--v_1_0_0
|   |   +--main-changelog.sql
|   +--v_1_0_1
|   |   +--adding-comments.sql # new version file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- adding-comments.sql&lt;/span&gt;
&lt;span class="k"&gt;COMMENT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;author_name&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="s1"&gt;'Author name'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;COMMENT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;author_id&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="s1"&gt;'Author numeric identifier'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;COMMENT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="n"&gt;book&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;book_id&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="s1"&gt;'Book numeric identifier'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;COMMENT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="n"&gt;book&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;book_isbn&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="s1"&gt;'Book International Standard Book Number'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;COMMENT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="n"&gt;book&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;book_name&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="s1"&gt;'Book name'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;COMMENT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="n"&gt;book&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;author_id&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="s1"&gt;'Author numeric identifier(author reference)'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Add a reference to your new version in the changelog
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#changelog.yaml&lt;/span&gt;
&lt;span class="na"&gt;databaseChangeLog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;changeSet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v_1_0_0&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Marco&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Villarreal"&lt;/span&gt;
      &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Initial&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;dummy&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;library&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;database&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;schema"&lt;/span&gt;
      &lt;span class="na"&gt;sqlFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;encoding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;utf8&lt;/span&gt;
        &lt;span class="na"&gt;relativeToChangelogFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;stripComments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v_1_0_0/main-changelog.sql"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;changeSet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v_1_0_1&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Marco&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Villarreal"&lt;/span&gt;
      &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt; &lt;span class="c1"&gt;# We can use yaml multi-line syntax for more descriptive changelog comments&lt;/span&gt;
        &lt;span class="s"&gt;* Adding comments for author's table&lt;/span&gt;
        &lt;span class="s"&gt;* Adding comments for book's table&lt;/span&gt;
      &lt;span class="na"&gt;sqlFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;encoding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;utf8&lt;/span&gt;
        &lt;span class="na"&gt;relativeToChangelogFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;stripComments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v_1_0_1/adding-comments.sql"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
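&lt;p&gt;With both changeSets in place, the directory layout matches the paths referenced in the changelog:&lt;br&gt;
&lt;/p&gt;

```text
+--db
|   +--changelog.yaml
|   +--v_1_0_0
|   |   +--main-changelog.sql
|   +--v_1_0_1
|   |   +--adding-comments.sql
```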



&lt;h1&gt;
  
  
  Configuring automatic migrations with SpringBoot &amp;amp; Micronaut
&lt;/h1&gt;

&lt;p&gt;Now that we have our database changelog organized and ready to go, we can wire it into our Java projects so migrations run automatically at startup:&lt;/p&gt;

&lt;h2&gt;
  
  
  SpringBoot configuration
&lt;/h2&gt;

&lt;p&gt;We need to add liquibase-core, Spring Data JPA (for the datasource connection), and the PostgreSQL driver to our project dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="n"&gt;implementation&lt;/span&gt; &lt;span class="s1"&gt;'org.liquibase:liquibase-core'&lt;/span&gt;
&lt;span class="n"&gt;implementation&lt;/span&gt; &lt;span class="s1"&gt;'org.springframework.boot:spring-boot-starter-data-jpa'&lt;/span&gt;
&lt;span class="n"&gt;runtimeOnly&lt;/span&gt; &lt;span class="s1"&gt;'org.postgresql:postgresql'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And enable the Liquibase beans in our project's configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spring&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;liquibase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;change-log&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;classpath:/db/changelog.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
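&lt;p&gt;Liquibase runs against the application's datasource, which also has to be configured. A minimal sketch, assuming a local PostgreSQL database named &lt;em&gt;library&lt;/em&gt; (the URL and credentials here are placeholders):&lt;br&gt;
&lt;/p&gt;

```yaml
spring:
  datasource:
    url: jdbc:postgresql://127.0.0.1/library   # placeholder database
    username: postgres
    password: changeme
  jpa:
    hibernate:
      ddl-auto: validate   # Liquibase owns the schema; Hibernate only validates it
```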



&lt;p&gt;To download my setup, check the &lt;a href="https://start.spring.io/#!type=gradle-project&amp;amp;language=java&amp;amp;platformVersion=2.5.0.RELEASE&amp;amp;packaging=jar&amp;amp;jvmVersion=11&amp;amp;groupId=org.mvillabe.books&amp;amp;artifactId=book-demo&amp;amp;name=book-demo&amp;amp;description=Book%20Demo%20for%20spring&amp;amp;packageName=org.mvillabe.books&amp;amp;dependencies=liquibase,postgresql,testcontainers,webflux,data-jpa" rel="noopener noreferrer"&gt;spring initializr project&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Micronaut configuration
&lt;/h2&gt;

&lt;p&gt;We need to add micronaut-liquibase and Micronaut Data JPA (for the datasource connection):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="n"&gt;implementation&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"io.micronaut.liquibase:micronaut-liquibase"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;implementation&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"io.micronaut.data:micronaut-data-hibernate-jpa"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And enable Liquibase in the application configuration:&lt;br&gt;
&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;liquibase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;datasources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;change-log&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;classpath:db/changelog.yaml'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To download my setup, check the &lt;a href="https://micronaut.io/launch?javaVersion=JDK_11&amp;amp;lang=JAVA&amp;amp;build=GRADLE&amp;amp;test=JUNIT&amp;amp;name=book-demo-micronaut&amp;amp;package=org.mvillabe.books&amp;amp;type=DEFAULT&amp;amp;features=liquibase&amp;amp;features=postgres&amp;amp;features=data-jpa&amp;amp;features=testcontainers&amp;amp;version=2.5.4&amp;amp;activity=preview&amp;amp;showing=README.md" rel="noopener noreferrer"&gt;micronaut launch project&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating Database Migrations for an Existing Database
&lt;/h1&gt;

&lt;p&gt;If you are on a project whose existing database has no versioning at all, fear not: we can generate a changelog with Liquibase. In this case we need to have &lt;a href="https://www.liquibase.org/download" rel="noopener noreferrer"&gt;liquibase installed&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once installed, we execute the following steps:&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the recommended folder structure
&lt;/h2&gt;

&lt;p&gt;In this case we create the db folder, an empty changelog.yaml file, and a v_1_0_0 directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+--db
|   +--changelog.yaml
|   +--v_1_0_0
|   |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
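&lt;p&gt;The structure above can be created from the command line; a quick sketch (the names follow the article's example):&lt;br&gt;
&lt;/p&gt;

```shell
# Create the recommended layout: db folder, empty changelog, version directory
mkdir -p db/v_1_0_0
touch db/changelog.yaml
```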



&lt;h2&gt;
  
  
  Get the changes from your existing database
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;liquibase &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;org.postgresql.Driver &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--classpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;~/classpath/postgresql-42.2.20.jar &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"jdbc:postgresql://127.0.0.1/existing_database"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--changeLogFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;db/v_1_0_0/main-changelog.postgresql.sql &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;casa1234 &lt;span class="se"&gt;\&lt;/span&gt;
generateChangeLog
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command generates a SQL-based changelog. Since we organize our changelog with YAML files, we can reference the generated SQL file from a YAML changeSet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a changelog
&lt;/h2&gt;

&lt;p&gt;Now we create a changeSet v_1_0_0 referencing the generated &lt;em&gt;v_1_0_0/main-changelog.postgresql.sql&lt;/em&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#changelog.yaml&lt;/span&gt;
&lt;span class="na"&gt;databaseChangeLog&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;changeSet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v_1_0_0&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Marco&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Villarreal"&lt;/span&gt;
      &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Inherited&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;changelog&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;existing&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;database"&lt;/span&gt;
      &lt;span class="na"&gt;sqlFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;encoding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;utf8&lt;/span&gt;
        &lt;span class="na"&gt;relativeToChangelogFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;stripComments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v_1_0_0/main-changelog.postgresql.sql"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Sync the changelog to your database
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;liquibase &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;org.postgresql.Driver &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--classpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;~/classpath/postgresql-42.2.20.jar &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"jdbc:postgresql://127.0.0.1/existing_database"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--changeLogFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;db/v_1_0_0/main-changelog.postgresql.sql &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;casa1234 &lt;span class="se"&gt;\&lt;/span&gt;
changelogSync
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The changelogSync command marks every changeSet as already executed without running it, so the existing database is now tracked by the changelog.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusions and Caveats
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Liquibase creates two additional tables, databasechangelog and databasechangeloglock:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;databasechangelog: keeps track of each executed changeSet.&lt;/li&gt;
&lt;li&gt;databasechangeloglock: locks concurrent executions to avoid conflicts at runtime.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;When using Liquibase, each executed changeSet is &lt;strong&gt;immutable&lt;/strong&gt;: Liquibase stores a checksum of the file, so modifying it after execution changes the checksum and corrupts the changelog (we need to avoid this).&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Having a versioned database is as good as having a versioned codebase: it enables clean tracking of your schema's evolution.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;If you feel uncomfortable adding Liquibase to your project's runtime, you can use the Gradle or Maven plugin, or even the Liquibase CLI itself.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Using Liquibase with raw SQL scripts can be considered a form of "vendor lock-in", because the scripts use vendor-specific syntax. If this is a problem for you, consider enabling JPA automatic schema changes (which, of course, has its own caveats).&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
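&lt;p&gt;To verify what Liquibase has recorded, you can query the tracking table directly; a small sketch against the standard columns:&lt;br&gt;
&lt;/p&gt;

```sql
-- List applied changeSets with their checksums
SELECT id, author, dateexecuted, md5sum
FROM databasechangelog
ORDER BY dateexecuted;
```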
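&lt;p&gt;If you go the build-plugin route, the same changelog can be driven from Gradle; a minimal sketch assuming the liquibase-gradle plugin (the version, URL, and credentials below are illustrative):&lt;br&gt;
&lt;/p&gt;

```groovy
// build.gradle -- sketch using the liquibase-gradle plugin
plugins {
    id 'org.liquibase.gradle' version '2.2.0' // illustrative version
}

dependencies {
    liquibaseRuntime 'org.liquibase:liquibase-core:4.3.5'
    liquibaseRuntime 'org.postgresql:postgresql'
}

liquibase {
    activities {
        main {
            changelogFile 'src/main/resources/db/changelog.yaml'
            url 'jdbc:postgresql://127.0.0.1/library' // placeholder database
            username 'postgres'
            password 'changeme'
        }
    }
}
```

&lt;p&gt;With this in place, &lt;em&gt;./gradlew update&lt;/em&gt; applies any pending changeSets without touching the application runtime.&lt;/p&gt;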

&lt;p&gt;You can find the projects used in this article in the &lt;a href="https://github.com/mvillarrealb/liquibase-demo" rel="noopener noreferrer"&gt;github repository&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>micronaut</category>
      <category>springboot</category>
      <category>database</category>
      <category>postgres</category>
    </item>
  </channel>
</rss>
