<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yash Thakkar</title>
    <description>The latest articles on DEV Community by Yash Thakkar (@thakkaryash94).</description>
    <link>https://dev.to/thakkaryash94</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F118836%2F67f77754-a6ba-438a-8698-60bc71ac4d8e.jpg</url>
      <title>DEV Community: Yash Thakkar</title>
      <link>https://dev.to/thakkaryash94</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thakkaryash94"/>
    <language>en</language>
    <item>
      <title>Setup CI/CD for Serverless monorepos application on AWS using GitHub Actions</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Sun, 29 Aug 2021 20:39:54 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/setup-ci-cd-for-serverless-monorepos-application-on-aws-using-github-actions-33hi</link>
      <guid>https://dev.to/thakkaryash94/setup-ci-cd-for-serverless-monorepos-application-on-aws-using-github-actions-33hi</guid>
      <description>&lt;p&gt;There are many ways to write server applications in many languages. There are many frameworks to help us building the server like Golang with mux, Java with Spring Boot, NodeJS with express etc. When it comes to hosting the server application and scaling the system, it requires lots of efforts, planning. Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers. So using serverless architecture model, we don't need to worry about provisioning-managing-scaling  the servers, updating the security patches, someone hacking into our server and much more. These all will be taken care by the cloud provider. So we can say that, we should try to use serverless architecture for API wherever possible.&lt;/p&gt;

&lt;p&gt;In this post, we will be talking about running JavaScript functions as an API application.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Lambda
&lt;/h3&gt;

&lt;p&gt;AWS Lambda is a serverless compute service, which we will use to deploy our JS functions. We just upload our code as a ZIP file, and Lambda automatically allocates compute power and runs our code in response to incoming requests or events, at any scale of traffic.&lt;/p&gt;

&lt;p&gt;There are many frameworks available for writing NodeJS serverless applications, like &lt;a href="https://arc.codes" rel="noopener noreferrer"&gt;Architect&lt;/a&gt;, &lt;a href="https://apex.sh/docs/up" rel="noopener noreferrer"&gt;Up&lt;/a&gt;, &lt;a href="https://middy.js.org" rel="noopener noreferrer"&gt;Middy&lt;/a&gt; and many more. We will be using the &lt;a href="https://www.serverless.com/" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt; to write our backend API application, because it supports multiple programming languages and has out-of-the-box support for the other AWS services we will use, such as S3 and API Gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless Framework
&lt;/h3&gt;

&lt;p&gt;The Serverless Framework helps you develop and deploy your AWS Lambda functions, along with the AWS infrastructure resources they require. It's a CLI that offers structure, automation and best practices out-of-the-box, allowing you to focus on building sophisticated, event-driven, serverless architectures, comprised of Functions and Events.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NodeJS: &amp;gt;= 12&lt;/li&gt;
&lt;li&gt;Serverless CLI: Install command  &lt;code&gt;npm install -g serverless&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will write a simple NodeJS application with the file/folder structure below, or you can run the &lt;code&gt;serverless&lt;/code&gt; command and set up a new project from a template. You can clone the demo code from &lt;a href="https://github.com/thakkaryash94/aws-serverless-ci-cd-demo" rel="noopener noreferrer"&gt;https://github.com/thakkaryash94/aws-serverless-ci-cd-demo&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── handler.js
├── package-lock.json
├── package.json
└── serverless.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;handler.js&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use strict&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello from new service A!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will update our &lt;code&gt;serverless.yml&lt;/code&gt; file as below. We will use AWS S3 to store the function zip and AWS API Gateway to expose our function at an API URL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;serverless.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;service: servicea

frameworkVersion: &lt;span class="s2"&gt;"2"&lt;/span&gt;

provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  region: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;env&lt;/span&gt;:AWS_REGION&lt;span class="k"&gt;}&lt;/span&gt;
  apiGateway:
    restApiId: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;env&lt;/span&gt;:AWS_REST_API_ID&lt;span class="k"&gt;}&lt;/span&gt;
    restApiRootResourceId: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;env&lt;/span&gt;:AWS_REST_API_ROOT_ID&lt;span class="k"&gt;}&lt;/span&gt;
  &lt;span class="c"&gt;# delete below section if you don't want to keep the Lambda function zip in a bucket&lt;/span&gt;
  deploymentBucket:
    blockPublicAccess: &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="c"&gt;# Prevents public access via ACLs or bucket policies. Default is false&lt;/span&gt;
    skipPolicySetup: &lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="c"&gt;# Prevents creation of default bucket policy when framework creates the deployment bucket. Default is false&lt;/span&gt;
    name: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;env&lt;/span&gt;:AWS_BUCKET_NAME&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="c"&gt;# Deployment bucket name. Default is generated by the framework&lt;/span&gt;
    maxPreviousDeploymentArtifacts: 10 &lt;span class="c"&gt;# On every deployment the framework prunes the bucket to remove artifacts older than this limit. The default is 5&lt;/span&gt;

functions:
  main:
    handler: handler.main &lt;span class="c"&gt;# Function name&lt;/span&gt;
    memorySize: 128
    events:
      - http:
          path: servicea &lt;span class="c"&gt;# URL path to access the function&lt;/span&gt;
          method: get &lt;span class="c"&gt;# Method name for API gateway&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you can run the below command to execute the function locally. It will warn you about missing environment variables like AWS_REGION and AWS_BUCKET_NAME, but we can ignore those warnings, or you can comment the variables out for local development.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;serverless invoke &lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will return the response as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;statusCode&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;body&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;{&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Hello from new service A!&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;input&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"\"\n&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So our serverless application is ready and working correctly locally. A real project will need many of these applications, with each one serving an individual API request.&lt;/p&gt;

&lt;p&gt;We keep our code on VCS providers like GitHub, GitLab, BitBucket etc. The problem is that maintaining many repositories, even for a single project, becomes very difficult. To fix that, we can store all these applications in one repository, which is called a monorepo. That's the difference between a monorepo and a multirepo.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Monorepo
&lt;/h3&gt;

&lt;p&gt;To convert our serverless application into a monorepo, we just need to create a folder, e.g. &lt;code&gt;aws-serverless-ci-cd-demo&lt;/code&gt;, and move the service folders inside it. Now, we can have as many functions as we need for our project, and they will all live inside a single repository.&lt;/p&gt;

&lt;p&gt;Final overall structure will look like below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws-serverless-ci-cd-demo
├── README.md
├── servicea
│   ├── handler.js
│   ├── package-lock.json
│   ├── package.json
│   └── serverless.yml
└── serviceb
    ├── handler.js
    ├── package-lock.json
    ├── package.json
    └── serverless.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will be using AWS Lambda, AWS S3 and API Gateway to deploy and access our services. So let's discuss the role of AWS S3 and API Gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS S3
&lt;/h3&gt;

&lt;p&gt;Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.&lt;/p&gt;

&lt;p&gt;We will store our Lambda function code as a zip in the bucket. The advantage is that we can keep track of each Lambda function's code over time. This step is entirely optional; you can skip it by commenting out or deleting the &lt;code&gt;deploymentBucket&lt;/code&gt; section in the &lt;strong&gt;serverless.yml&lt;/strong&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS API Gateway
&lt;/h3&gt;

&lt;p&gt;Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services.&lt;/p&gt;

&lt;p&gt;We will use API Gateway to expose our Lambda function at a URL so that we can access it. The Serverless Framework will read the &lt;code&gt;path&lt;/code&gt; and &lt;code&gt;method&lt;/code&gt; from the &lt;strong&gt;http&lt;/strong&gt; section and set up a route based on them. In the demo, it will create a &lt;code&gt;/servicea&lt;/code&gt; route with the &lt;code&gt;GET&lt;/code&gt; method and map that route to our Lambda function.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD Deployment
&lt;/h3&gt;

&lt;p&gt;The "CI" in CI/CD refers to continuous integration, which is an automation process to built, tested the code when developers push the code. In our application, creating a zip file of the functions.&lt;/p&gt;

&lt;p&gt;The "CD" in CI/CD refers to continuous delivery and/or continuous deployment, which means deliver/deploy the code to the development, staging, uat, qa, production environment. In our application, it means deploying the zip to Lambda function and configuring API Gateway.&lt;/p&gt;

&lt;p&gt;Fortunately, the Serverless Framework has built-in support for this. All we need to do is provide AWS credentials as environment variables, like AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. We will need a few more variables because we are using other services like S3 and API Gateway.&lt;/p&gt;

&lt;p&gt;We will be using GitHub Actions to build, package and deploy our functions. So let's see how we can implement CI/CD with GitHub Actions.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Actions
&lt;/h3&gt;

&lt;p&gt;GitHub Actions help you automate tasks within your software development life cycle. GitHub Actions are event-driven, meaning that you can run a series of commands after a specified event has occurred. For example, every time someone creates a pull request for a repository, you can automatically run a command that executes a software testing script.&lt;/p&gt;

&lt;p&gt;We can configure GitHub Actions to run on multiple triggers, like &lt;code&gt;push&lt;/code&gt;, &lt;code&gt;pull_request&lt;/code&gt; etc., and to filter by branch as well.&lt;/p&gt;
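As a sketch, a trigger limited to a single branch would look like the fragment below (the branch name &lt;code&gt;main&lt;/code&gt; is an assumption; adjust it for your repository):

```yaml
# Hypothetical trigger: run on pushes to main and on PRs targeting main.
name: Auto Serverless Deployment
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
```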

&lt;p&gt;There are two ways to deploy the serverless functions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Auto Deployment&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Trigger setup
&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  name: Auto Serverless Deployment
  on: [push, pull_request]
&lt;/code&gt;&lt;/pre&gt;



&lt;ol&gt;
&lt;li&gt;The first job is to detect the changes in the repo's files and folders and return the list. Here we take the git diff between the current and last commit, apply a filter that returns only added or updated files, then go through each file, collect the unique top-level folder names, and return them as output.
&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nb"&gt;jobs&lt;/span&gt;:
    changes:
      name: Changes
      runs-on: ubuntu-latest
      outputs:
        folders: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ steps.filter.outputs.folders &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
      steps:
        - uses: actions/checkout@v2
        - name: Check changed files
          &lt;span class="nb"&gt;id&lt;/span&gt;: diff
          run: |
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$GITHUB_BASE_REF&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
              &lt;span class="c"&gt;# Pull Request&lt;/span&gt;
              git fetch origin &lt;span class="nv"&gt;$GITHUB_BASE_REF&lt;/span&gt; &lt;span class="nt"&gt;--depth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
              &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DIFF&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; git diff &lt;span class="nt"&gt;--name-only&lt;/span&gt; origin/&lt;span class="nv"&gt;$GITHUB_BASE_REF&lt;/span&gt; &lt;span class="nv"&gt;$GITHUB_SHA&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;
              &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Diff between origin/&lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_BASE_REF&lt;/span&gt;&lt;span class="s2"&gt; and &lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_SHA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="k"&gt;else&lt;/span&gt;
              &lt;span class="c"&gt;# Push&lt;/span&gt;
              git fetch origin &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event.before &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--depth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
              &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DIFF&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; git diff &lt;span class="nt"&gt;--diff-filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;d &lt;span class="nt"&gt;--name-only&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event.before &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nv"&gt;$GITHUB_SHA&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;
              &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Diff between &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event.before &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;} and &lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_SHA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="k"&gt;fi
            &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="c"&gt;# Escape newlines (replace \n with %0A)&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"::set-output name=diff::&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;':a;N;$!ba;s/\n/%0A/g'&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        - name: Set matrix &lt;span class="k"&gt;for &lt;/span&gt;build
          &lt;span class="nb"&gt;id&lt;/span&gt;: filter
          run: |
            &lt;span class="nv"&gt;DIFF&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ steps.diff.outputs.diff &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
              &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"::set-output name=folders::[]"&lt;/span&gt;
            &lt;span class="k"&gt;else
              &lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"["&lt;/span&gt;
              &lt;span class="c"&gt;# Loop by lines&lt;/span&gt;
              &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read &lt;/span&gt;path&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
                &lt;span class="c"&gt;# Set $directory to substring before /&lt;/span&gt;
                &lt;span class="nv"&gt;directory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$path&lt;/span&gt; | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="s1"&gt;'/'&lt;/span&gt; &lt;span class="nt"&gt;-f1&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

              &lt;span class="c"&gt;# ignore .github folder&lt;/span&gt;
              &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$directory&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s2"&gt;".github"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
                &lt;span class="c"&gt;# Add build to the matrix only if it is not already included&lt;/span&gt;
                &lt;span class="nv"&gt;JSONline&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$directory&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,"&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSONline&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
                  &lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON$JSONline&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
                &lt;span class="k"&gt;fi
              fi
              done&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

              &lt;span class="c"&gt;# Remove last "," and add closing brackets&lt;/span&gt;
              &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nv"&gt;$JSON&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;, &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
                &lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;%?&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
              &lt;span class="k"&gt;fi
              &lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON&lt;/span&gt;&lt;span class="s2"&gt;]"&lt;/span&gt;
              &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$JSON&lt;/span&gt;

              &lt;span class="c"&gt;# Set output&lt;/span&gt;
              &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"::set-output name=folders::&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;After adding a serverless service &lt;code&gt;servicea&lt;/code&gt;, the build action will print something like the output below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check changed files&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;  Diff between 3da227687e19da14062916c6f71cef0c7e3f9033 and 96a8e3a39ab79ccff3a294ea485c4c3854d496c6
  servicea/.gitignore
  servicea/handler.js
  servicea/package-lock.json
  servicea/package.json
  servicea/serverless.yml
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;strong&gt;Set matrix for build&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"servicea"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
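The folder-collection step above boils down to: keep each changed path's first segment, drop &lt;code&gt;.github&lt;/code&gt;, dedupe, and emit a JSON array. The same idea can be sketched more compactly with &lt;code&gt;sort -u&lt;/code&gt; in place of the manual duplicate check (the &lt;code&gt;DIFF&lt;/code&gt; value here is sample data, not real workflow output):

```shell
# Sample diff output: changed file paths, one per line.
DIFF='servicea/.gitignore
servicea/handler.js
serviceb/serverless.yml
.github/workflows/auto.yml'

# Keep the first path segment (-s drops top-level files with no '/'),
# exclude .github, and deduplicate.
FOLDERS=$(echo "$DIFF" | cut -d'/' -f1 -s | grep -v '^\.github$' | sort -u)

# Assemble the JSON array for the matrix.
JSON="["
for d in $FOLDERS; do
  JSON="$JSON\"$d\","
done
JSON="${JSON%,}]"
echo "$JSON"   # ["servicea","serviceb"]
```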



&lt;ol&gt;
&lt;li&gt;Next, we will create a job for every folder name using the matrix strategy, as below.
&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;    deploy:
      needs: changes
      name: Deploy
      &lt;span class="k"&gt;if&lt;/span&gt;: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ needs.changes.outputs.folders != &lt;/span&gt;&lt;span class="s1"&gt;'[]'&lt;/span&gt;&lt;span class="p"&gt; &amp;amp;&amp;amp; needs.changes.outputs.folders != &lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
      strategy:
        matrix:
          &lt;span class="c"&gt;# Parse JSON array containing names of all filters matching any of changed files&lt;/span&gt;
          &lt;span class="c"&gt;# e.g. ['servicea', 'serviceb'] if both package folders contains changes&lt;/span&gt;
          folder: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ fromJSON(needs.changes.outputs.folders) &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;ol&gt;
&lt;li&gt;Now, it's time to build and deploy the function. Define all the environment variables used in the serverless.yml file in the env section, as below. Here, we go through every changed folder and run &lt;code&gt;npx serverless deploy&lt;/code&gt;. This command will create a zip, upload it to S3, create/update the Lambda function and finally configure it with API Gateway.
&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v2
        - name: Configure AWS Credentials
          uses: aws-actions/configure-aws-credentials@v1
          with:
            aws-access-key-id: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_ACCESS_KEY_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            aws-secret-access-key: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_SECRET_ACCESS_KEY &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            aws-region: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REGION &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
        - name: deploy
          run: npx serverless deploy
          working-directory: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ matrix.folder &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          &lt;span class="nb"&gt;env&lt;/span&gt;:
            AWS_ACCESS_KEY_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_ACCESS_KEY_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            AWS_SECRET_ACCESS_KEY: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_SECRET_ACCESS_KEY &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            AWS_REST_API_ROOT_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REST_API_ROOT_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            AWS_REST_API_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REST_API_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            AWS_BUCKET_NAME: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_BUCKET_NAME &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Before we push the code and the action starts to build and deploy it, we need to add the secret environment variables below. You should never use the root account; always create a new user with permissions restricted to the use case. In this process, our user will need access to Lambda, write access to S3, and access to API Gateway.&lt;/p&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
  - AWS_REGION
  - AWS_REST_API_ROOT_ID
  - AWS_REST_API_ID
  - AWS_BUCKET_NAME: bucket name where we want our zip files to be stored, if you are ignoring `deploymentBucket` from `serverless.yml` file, you can ignore this variable as well.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
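For the restricted user, an IAM policy along the lines of the sketch below could serve as a starting point. This is an assumption, not an exact minimal policy: the Serverless Framework deploys through CloudFormation, so it also needs CloudFormation permissions, plus the IAM and CloudWatch Logs permissions used to create the function's execution role. Tighten the wildcard actions and &lt;code&gt;Resource&lt;/code&gt; ARNs for production.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "lambda:*",
        "apigateway:*",
        "s3:*",
        "iam:GetRole",
        "iam:CreateRole",
        "iam:PutRolePolicy",
        "iam:PassRole",
        "logs:*"
      ],
      "Resource": "*"
    }
  ]
}
```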
&lt;h3&gt;
  
  
  auto.yml
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;name: Auto Serverless Deployment
on: &lt;span class="o"&gt;[&lt;/span&gt;push, pull_request]

&lt;span class="nb"&gt;jobs&lt;/span&gt;:
  changes:
    name: Changes
    runs-on: ubuntu-latest
    outputs:
      folders: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ steps.filter.outputs.folders &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
    steps:
      - uses: actions/checkout@v2
      - name: Check changed files
        &lt;span class="nb"&gt;id&lt;/span&gt;: diff
        run: |
          &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$GITHUB_BASE_REF&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
            &lt;span class="c"&gt;# Pull Request&lt;/span&gt;
            git fetch origin &lt;span class="nv"&gt;$GITHUB_BASE_REF&lt;/span&gt; &lt;span class="nt"&gt;--depth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
            &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DIFF&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; git diff &lt;span class="nt"&gt;--name-only&lt;/span&gt; origin/&lt;span class="nv"&gt;$GITHUB_BASE_REF&lt;/span&gt; &lt;span class="nv"&gt;$GITHUB_SHA&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Diff between origin/&lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_BASE_REF&lt;/span&gt;&lt;span class="s2"&gt; and &lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_SHA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
          &lt;span class="k"&gt;else&lt;/span&gt;
            &lt;span class="c"&gt;# Push&lt;/span&gt;
            git fetch origin &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event.before &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--depth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
            &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DIFF&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; git diff &lt;span class="nt"&gt;--diff-filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;d &lt;span class="nt"&gt;--name-only&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event.before &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nv"&gt;$GITHUB_SHA&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Diff between &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event.before &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;} and &lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_SHA&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
          &lt;span class="k"&gt;fi
          &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
          &lt;span class="c"&gt;# Escape newlines (replace \n with %0A)&lt;/span&gt;
          &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"::set-output name=diff::&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;':a;N;$!ba;s/\n/%0A/g'&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      - name: Set matrix &lt;span class="k"&gt;for &lt;/span&gt;build
        &lt;span class="nb"&gt;id&lt;/span&gt;: filter
        run: |
          &lt;span class="nv"&gt;DIFF&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ steps.diff.outputs.diff &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt;

          &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"::set-output name=folders::[]"&lt;/span&gt;
          &lt;span class="k"&gt;else
            &lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"["&lt;/span&gt;
            &lt;span class="c"&gt;# Loop by lines&lt;/span&gt;
            &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read &lt;/span&gt;path&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
              &lt;span class="c"&gt;# Set $directory to substring before /&lt;/span&gt;
              &lt;span class="nv"&gt;directory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$path&lt;/span&gt; | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="s1"&gt;'/'&lt;/span&gt; &lt;span class="nt"&gt;-f1&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

            &lt;span class="c"&gt;# ignore .github folder&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$directory&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s2"&gt;".github"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
              &lt;span class="c"&gt;# Add build to the matrix only if it is not already included&lt;/span&gt;
              &lt;span class="nv"&gt;JSONline&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$directory&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,"&lt;/span&gt;
              &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSONline&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
                &lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON$JSONline&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
              &lt;span class="k"&gt;fi
            fi
            done&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

            &lt;span class="c"&gt;# Remove last "," and add closing brackets&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nv"&gt;$JSON&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;, &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
              &lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;%?&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="k"&gt;fi
            &lt;/span&gt;&lt;span class="nv"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON&lt;/span&gt;&lt;span class="s2"&gt;]"&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$JSON&lt;/span&gt;

            &lt;span class="c"&gt;# Set output&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"::set-output name=folders::&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
          &lt;span class="k"&gt;fi
  &lt;/span&gt;deploy:
    needs: changes
    name: Deploy
    &lt;span class="k"&gt;if&lt;/span&gt;: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ needs.changes.outputs.folders != &lt;/span&gt;&lt;span class="s1"&gt;'[]'&lt;/span&gt;&lt;span class="p"&gt; &amp;amp;&amp;amp; needs.changes.outputs.folders != &lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
    strategy:
      matrix:
        &lt;span class="c"&gt;# Parse JSON array containing names of all filters matching any of changed files&lt;/span&gt;
        &lt;span class="c"&gt;# e.g. ['servicea', 'serviceb'] if both package folders contains changes&lt;/span&gt;
        folder: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ fromJSON(needs.changes.outputs.folders) &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_ACCESS_KEY_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          aws-secret-access-key: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_SECRET_ACCESS_KEY &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          aws-region: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REGION &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
      - name: deploy
        run: npx serverless deploy
        working-directory: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ matrix.folder &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;env&lt;/span&gt;:
          AWS_ACCESS_KEY_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_ACCESS_KEY_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          AWS_SECRET_ACCESS_KEY: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_SECRET_ACCESS_KEY &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          AWS_REST_API_ROOT_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REST_API_ROOT_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          AWS_REST_API_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REST_API_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          AWS_BUCKET_NAME: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_BUCKET_NAME &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
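&lt;p&gt;To see what the &lt;code&gt;changes&lt;/code&gt; job is doing, its two tricky steps (escaping newlines for &lt;code&gt;set-output&lt;/code&gt; and building the folder matrix) can be reproduced locally. This is a minimal sketch with an illustrative diff; the folder names are made up.&lt;/p&gt;

```shell
# Illustrative list of changed files, as `git diff --name-only` would print it:
DIFF="servicea/handler.js
servicea/package.json
serviceb/handler.js"

# Step 1: replace newlines with %0A, as the workflow does before set-output:
ESCAPED="$(echo "$DIFF" | sed ':a;N;$!ba;s/\n/%0A/g')"
echo "$ESCAPED"

# Step 2: map each path to its top-level folder and build a JSON array of
# unique folder names (the deploy matrix):
JSON="["
for path in $DIFF; do
  directory="$(echo "$path" | cut -d'/' -f1 -s)"   # -s drops root-level files
  JSONline="\"$directory\","
  case "$JSON" in
    *"$JSONline"*) ;;                # folder already in the array, skip
    *) JSON="$JSON$JSONline" ;;
  esac
done
JSON="${JSON%,}]"                    # trim the trailing comma, close the array
echo "$JSON"                         # ["servicea","serviceb"]
```

&lt;p&gt;The workflow uses a &lt;code&gt;while read&lt;/code&gt; loop; a &lt;code&gt;for&lt;/code&gt; loop over the whitespace-split list behaves the same here because the paths contain no spaces.&lt;/p&gt;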



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97rujnbmrfqui68t6lu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97rujnbmrfqui68t6lu7.png" alt="Logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0iyjblxizcnii77ytuc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0iyjblxizcnii77ytuc.png" alt="Pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Manual&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sometimes we want to deploy a function manually, or the script above may have skipped it. In that situation we deploy it by hand, because getting it deployed matters more than debugging the pipeline right away.&lt;/p&gt;

&lt;p&gt;Here, we skip the step that identifies the changed files using git diff and returns the folders. Instead, we go directly into the function's folder and run the deployment command, e.g. &lt;code&gt;npx serverless deploy&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;For manual deployment, we will take the function name as a workflow input and deploy that specific function.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  name: Manual Serverless Deployment
  on:
    push:
      branches:
        - main
    pull_request:
      branches:
        - main
    workflow_dispatch:
      inputs:
        &lt;span class="k"&gt;function&lt;/span&gt;:
          description: &lt;span class="s2"&gt;"Function name"&lt;/span&gt;
          required: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After this, we will use it in our job as below.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nb"&gt;jobs&lt;/span&gt;:
    deploy:
      &lt;span class="k"&gt;if&lt;/span&gt;: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event_name == &lt;/span&gt;&lt;span class="s1"&gt;'workflow_dispatch'&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
      name: deploy
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@master
        - name: deploy
          run: npx serverless deploy
          working-directory: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event.inputs.function &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          &lt;span class="nb"&gt;env&lt;/span&gt;:
            AWS_ACCESS_KEY_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_ACCESS_KEY_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            AWS_SECRET_ACCESS_KEY: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_SECRET_ACCESS_KEY &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            AWS_REST_API_ROOT_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REST_API_ROOT_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            AWS_REST_API_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REST_API_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
            AWS_BUCKET_NAME: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_BUCKET_NAME &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  manual.yml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;name: Manual Serverless Deployment
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch:
    inputs:
      &lt;span class="k"&gt;function&lt;/span&gt;:
        description: &lt;span class="s2"&gt;"Function name"&lt;/span&gt;
        required: &lt;span class="nb"&gt;true
jobs&lt;/span&gt;:
  deploy:
    &lt;span class="k"&gt;if&lt;/span&gt;: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event_name == &lt;/span&gt;&lt;span class="s1"&gt;'workflow_dispatch'&lt;/span&gt;&lt;span class="p"&gt; &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
    name: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: deploy
        run: npx serverless deploy
        working-directory: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.event.inputs.function &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;env&lt;/span&gt;:
          AWS_ACCESS_KEY_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_ACCESS_KEY_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          AWS_SECRET_ACCESS_KEY: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_SECRET_ACCESS_KEY &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          AWS_REST_API_ROOT_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REST_API_ROOT_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          AWS_REST_API_ID: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_REST_API_ID &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
          AWS_BUCKET_NAME: &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ secrets.AWS_BUCKET_NAME &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this post, we saw how to set up both automatic and manual CI/CD deployments for a serverless monorepo application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Links:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://serverless.com/framework/" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.serverless.com/blog/cicd-for-monorepos" rel="noopener noreferrer"&gt;CI/CD for monorepos&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>github</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Postgres backup using Docker</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Sun, 17 Jan 2021 21:07:28 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/postgres-backup-using-docker-2pjh</link>
      <guid>https://dev.to/thakkaryash94/postgres-backup-using-docker-2pjh</guid>
      <description>&lt;p&gt;Postgres is one of the most popular open-source Relational database. You can read more about it &lt;a href="https://www.postgresql.org/"&gt;here&lt;/a&gt;. The purpose of this blog to explain how you can take a backup of your Postgres database running with and without docker using the docker image and why I have created this image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;Running Postgres inside Docker is very easy. Docker Hub has an official &lt;a href="https://hub.docker.com/_/postgres"&gt;postgres image&lt;/a&gt; that we can run, and with a single command we can start using a Postgres database. The trouble starts when we want to run backups on a cron schedule. There are many ways to do this; we could follow the &lt;a href="https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux"&gt;official documentation&lt;/a&gt; to automate backups on Linux. But the advantage of Docker is flexibility and freedom from platform dependencies, while the official document only shows how to run a cron job on Linux, adding exactly the OS-level dependency we wanted to avoid. When I searched for a backup image, I could not find any Docker image that supports Postgres 13 with S3 backup support; the existing images are not up to date.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;To tackle this problem, I created my own Docker image. To start the backup, follow the instructions below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Required Environment Variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All the environment variables from &lt;a href="https://www.postgresql.org/docs/current/libpq-envars.html"&gt;https://www.postgresql.org/docs/current/libpq-envars.html&lt;/a&gt; are supported because we are using native postgres-client binary for backup.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PGHOST: behaves the same as the &lt;a href="https://www.postgresql.org/docs/9.3/libpq-connect.html#LIBPQ-CONNECT-HOST"&gt;host&lt;/a&gt; connection parameter. eg. postgresql&lt;/li&gt;
&lt;li&gt;PGHOSTADDR: behaves the same as the &lt;a href="https://www.postgresql.org/docs/9.3/libpq-connect.html#LIBPQ-CONNECT-HOSTADDR"&gt;hostaddr&lt;/a&gt; connection parameter. This can be set instead of or in addition to &lt;code&gt;PGHOST&lt;/code&gt; to avoid DNS lookup overhead.&lt;/li&gt;
&lt;li&gt;PGPORT: 5432&lt;/li&gt;
&lt;li&gt;PGDATABASE: database&lt;/li&gt;
&lt;li&gt;PGUSER: postgres&lt;/li&gt;
&lt;li&gt;PGPASSWORD: password&lt;/li&gt;
&lt;li&gt;S3_HOST: &lt;a href="https://storage.googleapis.com"&gt;https://storage.googleapis.com&lt;/a&gt; || s3.eu-west-1.amazonaws.com || nyc3.digitaloceanspaces.com&lt;/li&gt;
&lt;li&gt;S3_BUCKET: BUCKET&lt;/li&gt;
&lt;li&gt;S3_ACCESS_KEY: ACCESS_KEY&lt;/li&gt;
&lt;li&gt;S3_SECRET_KEY: SECRET_KEY&lt;/li&gt;
&lt;li&gt;CRON_SCHEDULE: "* * * * *". Read more at &lt;a href="https://crontab.guru/"&gt;https://crontab.guru/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker Run Command&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--name&lt;/span&gt; postgres-backup &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/backups:/backups &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PGHOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgresql
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PGPORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5432
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PGDATABASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;db_name
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PGUSER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PGPASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;password
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;S3_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ACCESS_KEY
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;S3_SECRET_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;SECRET_KEY
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;BUCKET
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;S3_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://storage.googleapis.com &lt;span class="o"&gt;||&lt;/span&gt; s3.eu-west-1.amazonaws.com &lt;span class="o"&gt;||&lt;/span&gt; nyc3.digitaloceanspaces.com
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;CRON_SCHEDULE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"@daily"&lt;/span&gt;
      docker.pkg.github.com/thakkaryash94/docker-postgres-backup/docker-postgres-backup:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
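&lt;p&gt;If you run your database with Docker Compose, the same container can be declared as a service. This is a sketch mirroring the &lt;code&gt;docker run&lt;/code&gt; flags above; all values are placeholders to replace with your own.&lt;/p&gt;

```yaml
version: "3.8"
services:
  postgres-backup:
    image: docker.pkg.github.com/thakkaryash94/docker-postgres-backup/docker-postgres-backup:latest
    volumes:
      - ./backups:/backups        # dumps land here on the host
    environment:
      PGHOST: postgresql          # service name of the database container
      PGPORT: "5432"
      PGDATABASE: db_name
      PGUSER: postgres
      PGPASSWORD: password
      S3_ACCESS_KEY: ACCESS_KEY
      S3_SECRET_KEY: SECRET_KEY
      S3_BUCKET: BUCKET
      S3_HOST: https://storage.googleapis.com
      CRON_SCHEDULE: "@daily"
```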



&lt;p&gt;That's it, now the container should be up and running.&lt;/p&gt;

&lt;p&gt;Links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/thakkaryash94/docker-postgres-backup"&gt;Github Repo&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.postgresql.org/docs/current/libpq-envars.html"&gt;Environment variables&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>postgres</category>
      <category>docker</category>
      <category>s3</category>
      <category>backup</category>
    </item>
    <item>
      <title>Build NextJS Application Using GitHub Workflow and Docker</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Mon, 09 Nov 2020 07:09:24 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/build-nextjs-application-using-github-workflow-and-docker-3foj</link>
      <guid>https://dev.to/thakkaryash94/build-nextjs-application-using-github-workflow-and-docker-3foj</guid>
      <description>&lt;p&gt;NextJS is a JavaScript framework created by vercel. It lets you build serverless API, server-side rendering and static web applications using React. Vercel provides the out of box CI/CD integration with GitHub, GitLab, and BitHub. But sometimes, we want to host our NextJS application on other platforms than vercel, like AWS, GCP, DigitalOcean or Azure. In this blog, we will see how we can build our NextJS application using GitHub Workflow and Docker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup NextJS Application
&lt;/h3&gt;

&lt;p&gt;NextJS recommends using &lt;code&gt;create-next-app&lt;/code&gt;, which sets up everything automatically for you. To create a project, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-next-app
&lt;span class="c"&gt;# or&lt;/span&gt;
yarn create next-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the installation is completed, follow the instructions to start the development server. Try editing &lt;code&gt;pages/index.js&lt;/code&gt; and see the result on your browser.&lt;/p&gt;

&lt;p&gt;For more information on how to use &lt;code&gt;create-next-app&lt;/code&gt;, you can review the &lt;a href="https://nextjs.org/docs/api-reference/create-next-app" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Dockerfile
&lt;/h3&gt;

&lt;p&gt;We will package our NextJS application as a Docker image. The reason for using Docker is that we won't need to install any additional packages like nodejs or pm2 when we want to run our NextJS server: Docker bundles everything up and gives us an image we can run anywhere. Below is a sample Dockerfile for our NextJS application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM node:lts-alpine

ENV NODE_ENV production
ENV NPM_CONFIG_LOGLEVEL warn

RUN &lt;span class="nb"&gt;mkdir&lt;/span&gt; /home/node/app/ &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; node:node /home/node/app

WORKDIR /home/node/app

COPY package.json package.json
COPY package-lock.json package-lock.json

USER node

RUN npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--production&lt;/span&gt;

COPY &lt;span class="nt"&gt;--chown&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node:node .next .next
COPY &lt;span class="nt"&gt;--chown&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node:node public public

EXPOSE 3000

CMD npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's see what is happening in above Dockerfile step-by-step.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We are using node:lts-alpine as the base image.&lt;/li&gt;
&lt;li&gt;Setting environment variable as &lt;code&gt;production&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Setting up an &lt;code&gt;app&lt;/code&gt; folder with &lt;code&gt;node&lt;/code&gt; user as owner.&lt;/li&gt;
&lt;li&gt;Copying package.json and package-lock.json into the image.&lt;/li&gt;
&lt;li&gt;Running &lt;code&gt;npm install --production&lt;/code&gt; to install only production dependencies.&lt;/li&gt;
&lt;li&gt;Copying the &lt;code&gt;.next&lt;/code&gt; and &lt;code&gt;public&lt;/code&gt; folders into the container. This is a very interesting step. Why are we copying the folders and not building the application with the &lt;code&gt;next build&lt;/code&gt; command? We will discuss this in detail below.&lt;/li&gt;
&lt;li&gt;Exposing port 3000, so that our application can be accessible out of the container.&lt;/li&gt;
&lt;li&gt;Finally, running &lt;code&gt;npm start&lt;/code&gt; command to start our NextJS application server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As we can see, there is nothing unusual in the Dockerfile; it is easy to understand and straightforward. The interesting part is that we copy the &lt;code&gt;.next&lt;/code&gt; and &lt;code&gt;public&lt;/code&gt; folders into the container instead of building inside it.&lt;/p&gt;

&lt;p&gt;Here is the detailed explanation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In a NextJS application, we may need to use NEXT_PUBLIC environment variables, which are required during the build process (e.g. for a firebase web client).&lt;/li&gt;
&lt;li&gt;If we use a firebase web client, then we need to provide a few required variables like firebase api_key, app_id, auth_domain.&lt;/li&gt;
&lt;li&gt;We write these variables in a &lt;code&gt;.env&lt;/code&gt; or &lt;code&gt;.env.local&lt;/code&gt; file when developing the application locally. But we DO NOT, SHOULD NOT and MUST NOT push this file to VCS systems like git.&lt;/li&gt;
&lt;li&gt;So when we build the application locally, it picks these variables up from &lt;code&gt;.env&lt;/code&gt; and the build completes without any error. But when we build the application in Docker using a &lt;code&gt;RUN next build&lt;/code&gt; command, the build fails because these variables are not provided in the docker image.&lt;/li&gt;
&lt;li&gt;If we want to build our NextJS application inside the docker build process, we need to use &lt;code&gt;--build-arg&lt;/code&gt; in the docker build command to pass the build-time variables. There are 2 ways to do this.

&lt;ol&gt;
&lt;li&gt;We use CI secret variables and pass them into the docker build command.&lt;/li&gt;
&lt;li&gt;We create a &lt;code&gt;.env&lt;/code&gt; file, encode it using base64, pass it as a CI secret variable, decode it with base64 inside the Dockerfile and then build the docker image.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;This will become very difficult to pass and maintain if our public variables list grows in the future.&lt;/li&gt;
&lt;li&gt;So to not complicate the build process, we will build our application outside the docker image using ci job and then copy the &lt;code&gt;.next&lt;/code&gt;, &lt;code&gt;public&lt;/code&gt; folders into a docker image.&lt;/li&gt;
&lt;li&gt;To pass environment variables in ci, there are 2 ways.

&lt;ol&gt;
&lt;li&gt;Pass the environment variables as secrets&lt;/li&gt;
&lt;li&gt;Pass a base64 encoding of the &lt;code&gt;.env&lt;/code&gt; file, decode it inside the CI process, write the file at the root of the project folder (same as local development) and build the application.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
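&lt;p&gt;The base64 approach from option 2 above can be sketched end-to-end in a shell. The secret name &lt;code&gt;ENV_FILE&lt;/code&gt; and the key in the file are illustrative; in CI, the encoded string would come from a repository secret.&lt;/p&gt;

```shell
# Illustrative .env with one build-time public variable:
printf 'NEXT_PUBLIC_API_KEY=demo-key\n' > .env

# Locally: encode the file and store the output as a CI secret (e.g. ENV_FILE):
ENV_FILE="$(base64 .env)"
rm .env                              # the file itself never leaves the machine

# In the CI job: decode the secret back into .env before running `next build`:
echo "$ENV_FILE" | base64 -d > .env
cat .env                             # NEXT_PUBLIC_API_KEY=demo-key
```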

&lt;h3&gt;
  
  
  GitHub Workflow
&lt;/h3&gt;

&lt;p&gt;A workflow is a configurable automated process made up of one or more jobs. We will configure the workflow with YAML file. You can read more &lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As discussed above, we will use GitHub workflow jobs to build our NextJS application. Below is the workflow file we will be using. Save it at &lt;code&gt;PROJECT_ROOT_FOLDER/.github/workflows/main.yml&lt;/code&gt; so that GitHub can read the YAML file and set up the actions accordingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: To be able to see the actions in the UI, you need to have the same file available in &lt;code&gt;master&lt;/code&gt; or &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;h4&gt;
  
  
  Workflow file:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build &amp;amp; Publish&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;**"&lt;/span&gt;             &lt;span class="c1"&gt;# all branches&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;!dependabot/**"&lt;/span&gt;      &lt;span class="c1"&gt;# exclude dependbot branches&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;      &lt;span class="c1"&gt;# Manually run the workflow&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;next-build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event_name == 'workflow_dispatch' }}&lt;/span&gt;       &lt;span class="c1"&gt;# Run only if triggered manually&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:lts&lt;/span&gt;          &lt;span class="c1"&gt;# Use node LTS container version, same as Dockerfile base image&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;       &lt;span class="c1"&gt;# Checkout the code&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;            &lt;span class="c1"&gt;#install dependencies&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run build&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;NEXT_PUBLIC_FIREBASE_API_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{secrets.NEXT_PUBLIC_FIREBASE_API_KEY}}&lt;/span&gt;
          &lt;span class="na"&gt;NEXT_PUBLIC_FIREBASE_APP_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{secrets.NEXT_PUBLIC_FIREBASE_APP_ID}}&lt;/span&gt;
          &lt;span class="na"&gt;NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{secrets.NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN}}&lt;/span&gt;
          &lt;span class="na"&gt;NEXT_PUBLIC_FIREBASE_PROJECT_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{secrets.NEXT_PUBLIC_FIREBASE_PROJECT_ID}}&lt;/span&gt;
          &lt;span class="na"&gt;NEXT_PUBLIC_SENTRY_DSN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{secrets.NEXT_PUBLIC_SENTRY_DSN}}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload Next build&lt;/span&gt;          &lt;span class="c1"&gt;# Upload the artifact&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;.next&lt;/span&gt;
            &lt;span class="s"&gt;public&lt;/span&gt;
          &lt;span class="na"&gt;retention-days&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;7&lt;/span&gt;         &lt;span class="c1"&gt;# artifact retention duration, can be upto 30 days&lt;/span&gt;
  &lt;span class="na"&gt;docker-push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;next-build&lt;/span&gt;        &lt;span class="c1"&gt;# Job depends on next-build(above) job&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Download next build&lt;/span&gt;       &lt;span class="c1"&gt;# Download the above uploaded artifact&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/download-artifact@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to GitHub Container Registry&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.repository_owner }}&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.CR_PAT }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and Push Docker Images&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;export CURRENT_BRANCH=${GITHUB_REF#refs/heads/}&lt;/span&gt;
          &lt;span class="s"&gt;export TAG=$([[ $CURRENT_BRANCH == "main" ]] &amp;amp;&amp;amp; echo "latest" || echo $CURRENT_BRANCH)&lt;/span&gt;
          &lt;span class="s"&gt;export GITHUB_REF_IMAGE=ghcr.io/$GITHUB_REPOSITORY:$GITHUB_SHA&lt;/span&gt;
          &lt;span class="s"&gt;export GITHUB_BRANCH_IMAGE=ghcr.io/$GITHUB_REPOSITORY:$TAG&lt;/span&gt;
          &lt;span class="s"&gt;docker build -t $GCR_IMAGE -t $GITHUB_REF_IMAGE -t $GITHUB_BRANCH_IMAGE .&lt;/span&gt;
          &lt;span class="s"&gt;echo "Pushing Image to GitHub Container Registry"&lt;/span&gt;
          &lt;span class="s"&gt;docker push $GITHUB_REF_IMAGE&lt;/span&gt;
          &lt;span class="s"&gt;docker push $GITHUB_BRANCH_IMAGE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's discuss what is happening in the YAML file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We need to specify the event on which we want to trigger our workflow. In our case, we want it on the push event. It can be multiple events as well, like &lt;code&gt;[push, pull_request]&lt;/code&gt;. You can read more &lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;We can define the branches we want this workflow to watch. &lt;strong&gt;!&lt;/strong&gt; means we want to exclude those branches.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;workflow_dispatch&lt;/code&gt; lets us run the build process manually. Without it, the workflow could only be triggered automatically by pushes to the repository. You can read more &lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows#manual-events" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;We have divided our build process into 2 jobs.

&lt;ol&gt;
&lt;li&gt;next-build:

&lt;ul&gt;
&lt;li&gt;In this job, we are using &lt;code&gt;node:lts&lt;/code&gt; as the base image; it has to be the same as the Dockerfile base image.&lt;/li&gt;
&lt;li&gt;We are keeping this job manual, as we don't want it to run every time we push code. So we add the &lt;code&gt;if: ${{ github.event_name == 'workflow_dispatch' }}&lt;/code&gt; condition to the job.&lt;/li&gt;
&lt;li&gt;In the &lt;code&gt;env&lt;/code&gt; section, we are exporting environment variables from secrets, so we need to add these variables to the GitHub repository secrets. Read more &lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/encrypted-secrets" rel="noopener noreferrer"&gt;here&lt;/a&gt; on how to do it.&lt;/li&gt;
&lt;li&gt;In the next steps, the action will check out the code, run &lt;code&gt;npm ci&lt;/code&gt; to install dependencies, and run &lt;code&gt;npm run build&lt;/code&gt; to build the NextJS application using the exported environment variables.&lt;/li&gt;
&lt;li&gt;Finally, after a successful build, the CI job will use the &lt;code&gt;actions/upload-artifact@v2&lt;/code&gt; action to upload the build output as an artifact on GitHub with 7 days of retention, so that the &lt;strong&gt;docker-push&lt;/strong&gt; job can download the same folders and use them to build the image. The artifact includes the &lt;code&gt;.next&lt;/code&gt; and &lt;code&gt;public&lt;/code&gt; folders: the &lt;code&gt;.next&lt;/code&gt; folder is generated by the build process, and the &lt;code&gt;public&lt;/code&gt; folder holds assets like SVGs and images, so we want to keep it as well.&lt;/li&gt;
&lt;li&gt;You can see the uploaded artifact in the action details.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;docker-push: To build our docker image

&lt;ul&gt;
&lt;li&gt;This job declares &lt;code&gt;needs: next-build&lt;/code&gt;, which means it will only run after a successful &lt;code&gt;next-build&lt;/code&gt; job. If we don't write this, both jobs will run in parallel and this job will fail, because the &lt;code&gt;build&lt;/code&gt; artifact it tries to download only exists after &lt;code&gt;next-build&lt;/code&gt; has uploaded it. So &lt;code&gt;needs&lt;/code&gt; makes the jobs run sequentially instead of in parallel.&lt;/li&gt;
&lt;li&gt;CI job will checkout the code, download the build artifact folder using &lt;code&gt;actions/download-artifact@v2&lt;/code&gt; and extract it as well.&lt;/li&gt;
&lt;li&gt;We want our Docker image hosted on GitHub Packages. For that, we will use the &lt;code&gt;docker/login-action@v1&lt;/code&gt; action to log in to the GitHub Container Registry with a username and password. We need to add &lt;code&gt;CR_PAT&lt;/code&gt; to the repository secrets, same as the NEXT_PUBLIC vars. We can add other registries here as well, like GCR, AWS ECR, etc.&lt;/li&gt;
&lt;li&gt;Next, the CI job will read the &lt;code&gt;CURRENT_BRANCH&lt;/code&gt; and tag our Docker build accordingly. Here, we are creating 2 tags: one with the branch name, like &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;qa&lt;/code&gt;, &lt;code&gt;uat&lt;/code&gt;, or &lt;code&gt;latest&lt;/code&gt; for &lt;code&gt;main&lt;/code&gt;, and another with the commit SHA.&lt;/li&gt;
&lt;li&gt;After that, the job will build our Docker image and, on a successful build, push it to GitHub Packages. We could push it to other registries here as well, like GCR, AWS ECR, etc.&lt;/li&gt;
&lt;li&gt;Finally, this job will exit and our workflow will complete successfully.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;/ul&gt;
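&lt;p&gt;The branch-to-tag logic in the &lt;code&gt;docker-push&lt;/code&gt; job can be tried locally. A minimal sketch, with stand-in values for the variables that GitHub Actions sets automatically in a real run:&lt;/p&gt;

```shell
# Stand-in values; GitHub Actions provides these automatically in a real run
GITHUB_REF="refs/heads/main"
GITHUB_REPOSITORY="thakkaryash94/example"
GITHUB_SHA="abc1234"

CURRENT_BRANCH=${GITHUB_REF#refs/heads/}                  # strip "refs/heads/" -> "main"
TAG=$([ "$CURRENT_BRANCH" = "main" ] && echo "latest" || echo "$CURRENT_BRANCH")
GITHUB_REF_IMAGE=ghcr.io/$GITHUB_REPOSITORY:$GITHUB_SHA   # immutable, per-commit tag
GITHUB_BRANCH_IMAGE=ghcr.io/$GITHUB_REPOSITORY:$TAG       # moving, per-branch tag

echo "$GITHUB_REF_IMAGE"     # ghcr.io/thakkaryash94/example:abc1234
echo "$GITHUB_BRANCH_IMAGE"  # ghcr.io/thakkaryash94/example:latest
```

&lt;p&gt;A push to a &lt;code&gt;qa&lt;/code&gt; branch would produce a &lt;code&gt;qa&lt;/code&gt; tag instead of &lt;code&gt;latest&lt;/code&gt;.&lt;/p&gt;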

&lt;p&gt;To run the job, navigate to the repository's Actions tab; you will see the &lt;code&gt;Build &amp;amp; Publish&lt;/code&gt; workflow in the left sidebar. Click on that link and you will see the screen below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fctu82o8ig9xuk3xleptd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fctu82o8ig9xuk3xleptd.png" alt="GitHub Workflow Build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, we are able to build and package our NextJS application. The completed run is shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7ocg6k43egzxnypeb4jf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7ocg6k43egzxnypeb4jf.png" alt="GitHub Workflow Build Complete"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading. Have a great day!&lt;/p&gt;

&lt;p&gt;Help Links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://nextjs.org/docs/api-reference/create-next-app" rel="noopener noreferrer"&gt;https://nextjs.org/docs/api-reference/create-next-app&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions" rel="noopener noreferrer"&gt;https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows" rel="noopener noreferrer"&gt;https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/encrypted-secrets" rel="noopener noreferrer"&gt;https://docs.github.com/en/free-pro-team@latest/actions/reference/encrypted-secrets&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>github</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Host Static website using AWS CDK for Terraform and CloudFront: Part 2</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Fri, 07 Aug 2020 13:21:44 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/host-static-website-using-aws-cdk-for-terraform-and-cloudfront-5bno</link>
      <guid>https://dev.to/thakkaryash94/host-static-website-using-aws-cdk-for-terraform-and-cloudfront-5bno</guid>
      <description>&lt;p&gt;In part 1, we saw how we can host our website using S3. In this part, we will see how we can configure AWS CloudFront to serve our S3 bucket objects as a website. If you have not checked out Part 1, please read it first.&lt;/p&gt;

&lt;p&gt;Let's set up a CloudFront distribution using AWS CDK for Terraform. First, we need to create a CloudFront Origin Access Identity (OAI), which we will use in both the CloudFront configuration and the S3 bucket policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CloudfrontOriginAccessIdentity&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./.gen/providers/aws&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="cm"&gt;/*
 * Create am Origin Access Identity
 * Doc link: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/
 * Tutorial link: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/
 */&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cloudfrontOriginAccessIdentity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;CloudfrontOriginAccessIdentity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws_cloudfront_origin_access_identity&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;s3-cloudfront-cdk-example&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now, we need to set a few required parameters for the CloudFront configuration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dependsOn: The bucket is required to set up CloudFront.&lt;/li&gt;
&lt;li&gt;defaultRootObject: &lt;code&gt;index.html&lt;/code&gt; is our default file that we need to serve.&lt;/li&gt;
&lt;li&gt;customErrorResponse: We can setup custom rules/response for errors like 400, 404, 500, 501 etc.&lt;/li&gt;
&lt;li&gt;origin: 

&lt;ul&gt;
&lt;li&gt;originId: unique id (should be same as targetOriginId)&lt;/li&gt;
&lt;li&gt;domainName: S3 bucket as domain (eg. thakkaryash94-cdk-dev.s3.amazonaws.com)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;defaultCacheBehavior:

&lt;ul&gt;
&lt;li&gt;targetOriginId: unique id (should be same as originId)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;restrictions: We want our website to be accessible from everywhere, so we set it to &lt;code&gt;none&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;viewerCertificate: We can use CloudFront default certificate and can also add custom ACM certificate, IAM certificate etc.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CloudfrontDistribution&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./.gen/providers/aws&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;originId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`S3-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cloudFrontDistribution&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;CloudfrontDistribution&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`aws_cloudfront_&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;dependsOn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;defaultRootObject&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;index.html&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;customErrorResponse&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;errorCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;responseCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;responsePagePath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/index.html&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;origin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;originId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;originId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;domainName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucketDomainName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;s3OriginConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="na"&gt;originAccessIdentity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cloudfrontOriginAccessIdentity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cloudfrontAccessIdentityPath&lt;/span&gt;
    &lt;span class="p"&gt;}]&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;defaultCacheBehavior&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;allowedMethods&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;GET&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;HEAD&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;cachedMethods&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;GET&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;HEAD&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;forwardedValues&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="na"&gt;cookies&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;forward&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;none&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
      &lt;span class="na"&gt;queryString&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="na"&gt;targetOriginId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;originId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;viewerProtocolPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;allow-all&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;restrictions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;geoRestriction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="na"&gt;restrictionType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;none&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;}]&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;viewerCertificate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;cloudfrontDefaultCertificate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Previously, our bucket was public, meaning anyone could access bucket objects using the bucket website URL. Now that we have configured CloudFront to serve our website, it's time to block that access. With this, our website will be accessible only via the CloudFront URL; no one will be able to access bucket objects using the S3 website URL.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;acl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;private&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;            &lt;span class="c1"&gt;// Set bucket ACL as private&lt;/span&gt;
&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;website&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;               &lt;span class="c1"&gt;// Disable website hosting feature&lt;/span&gt;
&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cloudfrontOriginAccessIdentity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/*"
      ]
    }
  ]
}`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Last, we will print the CloudFront URL, which will serve our S3 objects as a website.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Output the cloudfront url to access the website&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;TerraformOutput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cloudfront_website_endpoint&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;CloudFront URL&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`https://&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cloudFrontDistribution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;domainName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now, we follow the same deployment process as in Part 1. After a successful deployment, the CloudFront public URL will be printed on the console and we will be able to access our website with the default HTTPS certificate.&lt;/p&gt;

&lt;p&gt;So this is how we can set up CloudFront with AWS S3 using AWS CDK for Terraform.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/thakkaryash94"&gt;
        thakkaryash94
      &lt;/a&gt; / &lt;a href="https://github.com/thakkaryash94/terraform-cdk-react-example"&gt;
        terraform-cdk-react-example
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Host react website using terraform CDK on AWS S3
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h4&gt;
Host Static website using AWS CDK for Terraform&lt;/h4&gt;
&lt;p&gt;This repo contains the code for &lt;a href="https://dev.to/thakkaryash94/host-static-website-using-aws-cdk-for-terraform-part-1-57ki" rel="nofollow"&gt;DEV.to blog&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;

  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/thakkaryash94/terraform-cdk-react-example"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;h4&gt;
  
  
  Links:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudfront_distribution"&gt;Terraform AWS Prorvider CloudFront Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thakkaryash94/terraform-cdk-react-example"&gt;Terraform CDK React Example GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>Host Static website using AWS CDK for Terraform: Part 1</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Sun, 02 Aug 2020 07:49:14 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/host-static-website-using-aws-cdk-for-terraform-part-1-57ki</link>
      <guid>https://dev.to/thakkaryash94/host-static-website-using-aws-cdk-for-terraform-part-1-57ki</guid>
      <description>&lt;p&gt;There are many ways to write and host a website: writing plain HTML files, using frameworks like Angular, React, Vue, Gatsby, and many more, and hosting it for free on services like Netlify, S3, Firebase, Azure, or Zeit. In this blog, we will see how we can use the AWS CDK and Terraform to host our website on S3 without leaving the terminal.&lt;/p&gt;

&lt;h2&gt;
  AWS Cloud Development Kit (AWS CDK)
&lt;/h2&gt;

&lt;p&gt;The AWS CDK was released and open-sourced around May 2018. The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.&lt;/p&gt;

&lt;h2&gt;
  Terraform
&lt;/h2&gt;

&lt;p&gt;Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.&lt;/p&gt;

&lt;h2&gt;
  CDK for Terraform
&lt;/h2&gt;

&lt;p&gt;HashiCorp published the CDK for Terraform with Python and TypeScript support. CDK for Terraform generates Terraform configuration to enable provisioning with Terraform. The adapter works with any existing provider and modules hosted in the Terraform Registry. The core Terraform workflow remains the same, with the ability to plan changes before applying them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The CDK for Terraform project includes two packages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“cdktf-cli” - A CLI that allows users to run commands to initialize, import, and synthesize CDK for Terraform applications.&lt;/li&gt;
&lt;li&gt;“cdktf” - A library for defining Terraform resources using programming constructs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  Host Static Website
&lt;/h2&gt;

&lt;h3&gt;
  Getting Started
&lt;/h3&gt;

&lt;p&gt;We will be using Node.js with TypeScript to set up the deployment.&lt;/p&gt;

&lt;p&gt;Install the CDK for Terraform CLI globally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; cdktf-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  Initialize a New Project
&lt;/h3&gt;

&lt;p&gt;Create a directory named &lt;code&gt;deployment&lt;/code&gt; under the project folder and initialize a set of TypeScript templates using &lt;code&gt;cdktf init&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; project/deployment
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;project/deployment
&lt;span class="nv"&gt;$ &lt;/span&gt;cdktf init &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;typescript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Enter the project details, including whether to use Terraform Cloud for storing the project state. You can use the &lt;code&gt;--local&lt;/code&gt; option to continue without using Terraform Cloud for state management.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;We will now setup the project. Please enter the details &lt;span class="k"&gt;for &lt;/span&gt;your project.
If you want to &lt;span class="nb"&gt;exit&lt;/span&gt;, press ^C.

Project Name: &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="s1"&gt;'deployment'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
Project Description: &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="s1"&gt;'A simple getting started project for cdktf.'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We will be using a React project for the website code in this example; you can use any framework and programming language you want. Now, let's run the commands below to create the React project.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; create-react-app
&lt;span class="nv"&gt;$ &lt;/span&gt;create-react-app web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After setting up the React project, run the commands below to build the website. They create a &lt;code&gt;build&lt;/code&gt; folder under the &lt;code&gt;web&lt;/code&gt; folder, containing all the files (HTML, CSS, JS, images, etc.) required for the website.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;web
&lt;span class="nv"&gt;$ &lt;/span&gt;npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now, if we run the &lt;code&gt;tree&lt;/code&gt; command in the terminal or open the project folder in a code editor, we will see the folder structure below.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;tree
├── deployment
│   ├── cdktf.json
│   ├── cdktf.out
│   ├── &lt;span class="nb"&gt;help&lt;/span&gt;
│   ├── main.d.ts
│   ├── main.js
│   ├── main.ts
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
│   ├── terraform.tfstate
│   └── tsconfig.json
└── web
    ├── README.md
    ├── build
    ├── node_modules
    ├── package.json
    ├── public
    ├── src
    └── yarn.lock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Open the &lt;code&gt;main.ts&lt;/code&gt; file in the &lt;code&gt;deployment&lt;/code&gt; folder. We will write our infrastructure definition in this file, and cdktf will read it to generate the Terraform configuration.&lt;/p&gt;

&lt;p&gt;Let's import the AWS provider classes from the &lt;strong&gt;.gen&lt;/strong&gt; folder. Next, we need to specify which region we will use for the deployment.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// import required classes from generated folder&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AwsProvider&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;S3Bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;S3BucketObject&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./.gen/providers/aws&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// assign AWS region&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AwsProvider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-west-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After setting up the region, we will create an S3 bucket with the following settings.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use &lt;strong&gt;public-read&lt;/strong&gt; for Access Control.&lt;/li&gt;
&lt;li&gt;Enable the static website hosting feature. This is the same as &lt;strong&gt;Use this bucket to host a website&lt;/strong&gt; in the AWS Console.&lt;/li&gt;
&lt;li&gt;Update the policy to make the objects in the bucket publicly readable.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Define AWS S3 bucket name&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;YOUR-WEBSITE-BUCKET-NAME&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Create bucket with public access&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;S3Bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;acl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;public-read&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;website&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;indexDocument&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;index.html&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;errorDocument&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;index.html&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Terraform&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;true&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Environment&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dev&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": [
          "arn:aws:s3:::&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/*"
        ]
      }
    ]
  }`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After setting up the bucket configuration, it's time to write the code that uploads the files to S3.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read all the files from the build folder.&lt;/li&gt;
&lt;li&gt;Create an S3 bucket object for each file.

&lt;ol&gt;
&lt;li&gt;dependsOn: makes each object wait until the bucket has been created. This is a very important parameter.&lt;/li&gt;
&lt;li&gt;key: defines the file and folder structure on S3.&lt;/li&gt;
&lt;li&gt;source: the local path of the actual file we want to upload to S3.&lt;/li&gt;
&lt;li&gt;etag: this one is important and a little tricky. Terraform only creates or deletes files; if a file with the same name is already uploaded to S3 and only its content has changed, Terraform won't replace it. With this parameter, existing files are replaced as well, because their etags have changed.&lt;/li&gt;
&lt;li&gt;contentType: also very important, because it tells the browser to open the file as HTML, an image, JS, or CSS. If we don't define it, then when we open our S3 bucket's public URL in a browser, the browser will download the &lt;code&gt;index.html&lt;/code&gt; file instead of rendering it, because without a content type it has no idea what to do with the file.
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// import necessary packages&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;path&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;glob&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;glob&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;mime&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mime-types&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Get all the files from build folder, skip directories&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../web/build/**/*&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;absolute&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;nodir&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Create bucket object for each file&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;S3BucketObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`aws_s3_bucket_object_&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basename&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;dependsOn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;            &lt;span class="c1"&gt;// Wait untill the bucket is not created&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`../web/build/`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;       &lt;span class="c1"&gt;// Using relative path for folder structure on S3&lt;/span&gt;
    &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;          &lt;span class="c1"&gt;// Using absolute path to upload&lt;/span&gt;
    &lt;span class="na"&gt;etag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;mime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;extname&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;       &lt;span class="c1"&gt;// Set the content-type for each object&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
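A side note on the etag value above: using `Date.now()` forces every file to be re-uploaded on each run, because the etag changes every time. One possible alternative (a sketch, not part of the original post; `fileEtag` is a hypothetical helper) is to hash each file's content, so only files whose content actually changed get replaced:

```typescript
// Sketch: content-based etag for each uploaded object (hypothetical helper).
// For simple (non-multipart) uploads, S3's etag is the MD5 of the object
// body, so an MD5 of the local file changes exactly when the content does.
import * as crypto from 'crypto';
import * as fs from 'fs';

export function fileEtag(filePath: string): string {
  const data = fs.readFileSync(filePath);           // read the file bytes
  return crypto.createHash('md5').update(data).digest('hex');
}
```

With this, the loop above could pass `etag: fileEtag(file)` instead of `` `${Date.now()}` ``, leaving unchanged files untouched between deployments.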


&lt;p&gt;The last step is to output the bucket's public URL. After successful execution, we will get the public URL in our terminal; we just need to copy and paste it into a browser to view our website.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// import required class to print the output&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TerraformOutput&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cdktf&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Output the bucket url to access the website&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TerraformOutput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;website_endpoint&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`http://&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;websiteEndpoint&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
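For context on what that output looks like: for most regions, S3 website endpoints follow the pattern `<bucket>.s3-website-<region>.amazonaws.com` (some regions use a dot before the region instead), which is the hostname `bucket.websiteEndpoint` resolves to. A small illustrative helper, purely to show the URL shape (hypothetical, not part of cdktf):

```typescript
// Illustrative only: the general shape of an S3 static website endpoint
// for dash-style regions such as us-west-1.
function s3WebsiteUrl(bucket: string, region: string): string {
  return `http://${bucket}.s3-website-${region}.amazonaws.com`;
}

console.log(s3WebsiteUrl('my-react-site', 'us-west-1'));
// → http://my-react-site.s3-website-us-west-1.amazonaws.com
```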


&lt;p&gt;Now our script is ready, so it's time to synthesize the TypeScript code to a Terraform configuration.&lt;/p&gt;
&lt;h4&gt;
  Synthesize TypeScript to Terraform Configuration
&lt;/h4&gt;

&lt;p&gt;Let's synthesize TypeScript to Terraform configuration by running &lt;code&gt;cdktf synth&lt;/code&gt;. The command generates &lt;a href="https://www.terraform.io/docs/configuration/syntax-json.html" rel="noopener noreferrer"&gt;Terraform JSON configuration&lt;/a&gt; files in the &lt;code&gt;cdktf.out&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;deployment
&lt;span class="nv"&gt;$ &lt;/span&gt;cdktf synth
Generated Terraform code &lt;span class="k"&gt;in &lt;/span&gt;the output directory: cdktf.out

&lt;span class="nv"&gt;$ &lt;/span&gt;tree cdktf.out
cdktf.out
└── cdk.tf.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Inspect the generated Terraform JSON by examining &lt;code&gt;cdktf.out/cdk.tf.json&lt;/code&gt;. It includes the Terraform configuration for the S3 bucket and the S3 bucket objects, which looks like the snippet below.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"aws_s3_bucket"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"typescriptaws_awss3bucket_D835B1D8"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"acl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"public-read"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"bucket"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"thakkaryash94-cdk-dev"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"policy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;        &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Version&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;2012-10-17&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;        &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Statement&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: [&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;          {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;            &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Sid&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;PublicReadGetObject&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;            &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Effect&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Allow&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;            &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Principal&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span 
class="s2"&gt;*&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;            &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Action&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: [&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;              &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;s3:GetObject&lt;/span&gt;&lt;span class="se"&gt;\"\n&lt;/span&gt;&lt;span class="s2"&gt;            ],&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;            &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Resource&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: [&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;              &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:s3:::thakkaryash94-cdk-dev/*&lt;/span&gt;&lt;span class="se"&gt;\"\n&lt;/span&gt;&lt;span class="s2"&gt;            ]&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;          }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;        ]&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;      }"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Terraform"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Environment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dev"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"website"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"index_document"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"index.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"error_document"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"index.html"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"//"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"typescript-aws/aws_s3_bucket"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"uniqueId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"typescriptaws_awss3bucket_D835B1D8"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"stackTrace"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"new TerraformElement (/Users/yash/github_workspace/typescript-aws/deployment/node_modules/cdktf/lib/terraform-element.js:10:19)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"new TerraformResource (/Users/yash/github_workspace/typescript-aws/deployment/node_modules/cdktf/lib/terraform-resource.js:9:9)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"new S3Bucket (/Users/yash/github_workspace/typescript-aws/deployment/.gen/providers/aws/s3-bucket.js:13:9)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"new MyStack (/Users/yash/github_workspace/typescript-aws/deployment/main.js:18:24)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Object.&amp;lt;anonymous&amp;gt; (/Users/yash/github_workspace/typescript-aws/deployment/main.js:66:1)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Module._compile (internal/modules/cjs/loader.js:1185:30)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Object.Module._extensions..js (internal/modules/cjs/loader.js:1205:10)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Module.load (internal/modules/cjs/loader.js:1034:32)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Function.Module._load (internal/modules/cjs/loader.js:923:14)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"internal/main/run_main_module.js:17:47"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"aws_s3_bucket_object"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"typescriptaws_awss3bucketobjectindexhtml_E7345193"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"bucket"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"thakkaryash94-cdk-dev"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"content_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"text/html; charset=utf-8"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"etag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1596349930423"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"index.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/Users/yash/github_workspace/typescript-aws/web/build/index.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"depends_on"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="s2"&gt;"aws_s3_bucket.typescriptaws_awss3bucket_D835B1D8"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"//"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"typescript-aws/aws_s3_bucket_object_index.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"uniqueId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"typescriptaws_awss3bucketobjectindexhtml_E7345193"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"stackTrace"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"new TerraformElement (/Users/yash/github_workspace/typescript-aws/deployment/node_modules/cdktf/lib/terraform-element.js:10:19)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"new TerraformResource (/Users/yash/github_workspace/typescript-aws/deployment/node_modules/cdktf/lib/terraform-resource.js:9:9)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"new S3BucketObject (/Users/yash/github_workspace/typescript-aws/deployment/.gen/providers/aws/s3-bucket-object.js:13:9)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"new MyStack (/Users/yash/github_workspace/typescript-aws/deployment/main.js:50:13)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Object.&amp;lt;anonymous&amp;gt; (/Users/yash/github_workspace/typescript-aws/deployment/main.js:66:1)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Module._compile (internal/modules/cjs/loader.js:1185:30)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Object.Module._extensions..js (internal/modules/cjs/loader.js:1205:10)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Module.load (internal/modules/cjs/loader.js:1034:32)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Function.Module._load (internal/modules/cjs/loader.js:923:14)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"internal/main/run_main_module.js:17:47"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We can also print the Terraform JSON configuration in the terminal using the &lt;code&gt;cdktf synth --json&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;After synthesis, we can use the Terraform workflow of initializing, planning, and applying changes within the &lt;code&gt;cdktf.out&lt;/code&gt; working directory or use the CDK for Terraform CLI to run &lt;code&gt;cdktf deploy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The Terraform workflow is as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;cdktf.out
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform init
Terraform has been successfully initialized!
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform plan
&lt;span class="c"&gt;# omitted for clarity&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply
aws_s3_bucket.typescriptaws_awss3bucket_D835B1D8: Creating...
aws_s3_bucket.typescriptaws_awss3bucket_D835B1D8: Creation &lt;span class="nb"&gt;complete &lt;/span&gt;after 25s &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;thakkaryash94-cdk-dev]
aws_s3_bucket_object.typescriptaws_awss3bucketobjectindexhtml_E7345193: Creating...
&lt;span class="c"&gt;# omitted for clarity&lt;/span&gt;
Apply &lt;span class="nb"&gt;complete&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; Resources: 20 added, 0 changed, 0 destroyed.

Outputs:

typescriptaws_websiteendpoint_D74DC454 &lt;span class="o"&gt;=&lt;/span&gt; http://thakkaryash94-cdk-dev.s3-website-us-west-1.amazonaws.com
&lt;span class="c"&gt;# destroy resources&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform destroy
Plan: 0 to add, 0 to change, 20 to destroy.
Destroy &lt;span class="nb"&gt;complete&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; Resources: 20 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is how we can deploy our ReactJS website to AWS S3 using AWS CDK for Terraform. Here is a sample repo for reference.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/thakkaryash94" rel="noopener noreferrer"&gt;
        thakkaryash94
      &lt;/a&gt; / &lt;a href="https://github.com/thakkaryash94/terraform-cdk-react-example" rel="noopener noreferrer"&gt;
        terraform-cdk-react-example
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Host react website using terraform CDK on AWS S3
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h4 class="heading-element"&gt;Host Static website using AWS CDK for Terraform&lt;/h4&gt;

&lt;/div&gt;

&lt;p&gt;This repo contains the code for &lt;a href="https://dev.to/thakkaryash94/host-static-website-using-aws-cdk-for-terraform-part-1-57ki" rel="nofollow"&gt;DEV.to blog&lt;/a&gt;&lt;/p&gt;

&lt;/div&gt;
&lt;br&gt;
&lt;br&gt;
  &lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/thakkaryash94/terraform-cdk-react-example" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;Links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.hashicorp.com/blog/cdk-for-terraform-enabling-python-and-typescript-support/" rel="noopener noreferrer"&gt;CDK for Terraform: Enabling Python &amp;amp; TypeScript Support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/thakkaryash94/terraform-cdk-react-example" rel="noopener noreferrer"&gt;Terraform CDK React Example GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>s3</category>
      <category>devops</category>
    </item>
    <item>
      <title>Many ways to build a container image</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Mon, 18 May 2020 17:59:46 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/how-many-ways-to-build-a-container-image-4g3p</link>
      <guid>https://dev.to/thakkaryash94/how-many-ways-to-build-a-container-image-4g3p</guid>
      <description>&lt;p&gt;Whenever we think about building or running a container, the first tool that comes to mind is Docker, and that's perfectly fine. Docker has made our lives much easier. But there are other ways to build and run images as well. So the question that comes to mind is: how can Kubernetes, OpenStack, ECS, and DC/OS understand images built by so many different tools? It turns out they don't need to, because there is a specification for the container image format. That's why the OCI (Open Container Initiative) was formed.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.opencontainers.org/"&gt;Open Container Initiative (OCI)&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Open Container Initiative (OCI) is a lightweight, open governance structure (project), formed under the auspices of the Linux Foundation, for the express purpose of creating open industry standards around container formats and runtime. The OCI was launched on June 22nd, 2015 by Docker, CoreOS, and other leaders in the container industry. Read more &lt;a href="https://www.opencontainers.org/about"&gt;here&lt;/a&gt;.&lt;br&gt;
We will build a Node.js Express server image using each of the tools below. Let's get started with the list.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://github.com/containers/buildah"&gt;Buildah&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Buildah is a command-line tool for building Open Container Initiative-compatible (that means Docker and Kubernetes-compatible, too) images quickly and easily. Buildah is easy to incorporate into scripts and build pipelines, and best of all, it doesn't require a running container daemon to build its image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;buildah build-using-dockerfile &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc link: &lt;a href="https://github.com/containers/buildah/blob/master/docs/buildah-bud.md"&gt;https://github.com/containers/buildah/blob/master/docs/buildah-bud.md&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://github.com/moby/buildkit"&gt;BuildKit&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive, and repeatable manner. BuildKit is composed of the &lt;code&gt;buildkitd&lt;/code&gt; daemon and the &lt;code&gt;buildctl&lt;/code&gt; client. While the &lt;code&gt;buildctl&lt;/code&gt; client is available for Linux, macOS, and Windows, the &lt;code&gt;buildkitd&lt;/code&gt; daemon is only available for Linux currently.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;buildctl build &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--frontend&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dockerfile.v0 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--local&lt;/span&gt; &lt;span class="nv"&gt;context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--local&lt;/span&gt; &lt;span class="nv"&gt;dockerfile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc Link: &lt;a href="https://github.com/moby/buildkit#building-a-dockerfile-with-buildctl"&gt;https://github.com/moby/buildkit#building-a-dockerfile-with-buildctl&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://buildpacks.io"&gt;Buildpacks&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Buildpacks were first conceived by Heroku in 2011. Since then, they have been adopted by Cloud Foundry and other PaaS such as GitLab, Knative, Deis, Dokku, and Drie. Buildpacks embrace modern container standards, such as the OCI image format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;express-sample&amp;gt; pack suggest-builders
Suggested builders:
  Cloud Foundry:     cloudfoundry/cnb:bionic         Ubuntu bionic base image with buildpacks &lt;span class="k"&gt;for &lt;/span&gt;Java, NodeJS and Golang
  Cloud Foundry:     cloudfoundry/cnb:cflinuxfs3     cflinuxfs3 base image with buildpacks &lt;span class="k"&gt;for &lt;/span&gt;Java, .NET, NodeJS, Golang, PHP, HTTPD and NGINX
  Cloud Foundry:     cloudfoundry/cnb:tiny           Tiny base image &lt;span class="o"&gt;(&lt;/span&gt;bionic build image, distroless run image&lt;span class="o"&gt;)&lt;/span&gt; with buildpacks &lt;span class="k"&gt;for &lt;/span&gt;Golang
  Heroku:            heroku/buildpacks:18            heroku-18 base image with buildpacks &lt;span class="k"&gt;for &lt;/span&gt;Ruby, Java, Node.js, Python, Golang, &amp;amp; PHP

Tip: Learn more about a specific builder with:
  pack inspect-builder &lt;span class="o"&gt;[&lt;/span&gt;builder image]

pack build express-pack-app &lt;span class="nt"&gt;--path&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--builder&lt;/span&gt; heroku/buildpacks:18
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc Link: &lt;a href="https://buildpacks.io/docs/app-developer-guide/build-an-app/"&gt;https://buildpacks.io/docs/app-developer-guide/build-an-app/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://github.com/genuinetools/img"&gt;img&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;img&lt;/code&gt; is a standalone, daemon-less, unprivileged, Dockerfile- and OCI-compatible container image builder.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;img&lt;/code&gt; is more cache-efficient than Docker and can also execute multiple build stages concurrently, as it internally uses &lt;a href="https://github.com/moby/buildkit"&gt;BuildKit&lt;/a&gt;'s DAG solver.&lt;/p&gt;

&lt;p&gt;The commands/UX are the same as &lt;code&gt;docker {build,tag,push,pull,login,logout,save}&lt;/code&gt; so all you have to do is replace &lt;code&gt;docker&lt;/code&gt; with &lt;code&gt;img&lt;/code&gt; in your scripts, command line, and/or life.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;img build &lt;span class="nt"&gt;-t&lt;/span&gt; r.j3ss.co/img &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc link: &lt;a href="https://github.com/genuinetools/img#build-an-image"&gt;https://github.com/genuinetools/img#build-an-image&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://github.com/GoogleContainerTools/kaniko"&gt;Kaniko&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster. Kaniko doesn't depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. This enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/.config/gcloud:/root/.config/gcloud &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; /path/to/context:/workspace &lt;span class="se"&gt;\&lt;/span&gt;
    gcr.io/kaniko-project/executor:latest &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--dockerfile&lt;/span&gt; /workspace/Dockerfile &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--destination&lt;/span&gt; &lt;span class="s2"&gt;"gcr.io/&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$IMAGE_NAME&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--context&lt;/span&gt; &lt;span class="nb"&gt;dir&lt;/span&gt;:///workspace/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc link: &lt;a href="https://github.com/GoogleContainerTools/kaniko"&gt;https://github.com/GoogleContainerTools/kaniko&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://podman.io/"&gt;Podman&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Podman specializes in all of the commands and functions that help you to maintain and modify OCI images, such as pulling and tagging. It also allows you to create, run, and maintain those containers created from those images. For building container images via Dockerfiles, Podman uses Buildah's golang API and can be installed independently from Buildah.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;podman build &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc link: &lt;a href="https://docs.podman.io/en/latest/markdown/podman-build.1.html#examples"&gt;https://docs.podman.io/en/latest/markdown/podman-build.1.html#examples&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://pouchcontainer.io"&gt;PouchContainer&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;PouchContainer is a highly reliable container engine open sourced by Alibaba. It is an excellent software layer that fills the gap between business applications and the underlying infrastructure. Strong isolation and rich container features are its hallmarks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pouch build &lt;span class="o"&gt;[&lt;/span&gt;OPTION] PATH
Option:
      &lt;span class="nt"&gt;--addr&lt;/span&gt; string             buildkitd address &lt;span class="o"&gt;(&lt;/span&gt;default &lt;span class="s2"&gt;"unix:///run/buildkit/buildkitd.sock"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
      &lt;span class="nt"&gt;--build-arg&lt;/span&gt; stringArray   Set build-time variables
  &lt;span class="nt"&gt;-h&lt;/span&gt;, &lt;span class="nt"&gt;--help&lt;/span&gt;                    &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;build
  &lt;span class="nt"&gt;-t&lt;/span&gt;, &lt;span class="nt"&gt;--tag&lt;/span&gt; stringArray         Name and optionally a tag &lt;span class="k"&gt;in &lt;/span&gt;the &lt;span class="s1"&gt;'name:tag'&lt;/span&gt; format
      &lt;span class="nt"&gt;--target&lt;/span&gt; string           Set the target build stage to build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc link: &lt;a href="https://pouchcontainer.io/#/pouch/docs/commandline/pouch_build.md"&gt;https://pouchcontainer.io/#/pouch/docs/commandline/pouch_build.md&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://ocibuilder.github.io/docs"&gt;OCIBuilder&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;ocibuilder&lt;/strong&gt; offers a command line tool called the &lt;strong&gt;ocictl&lt;/strong&gt; to build, push and pull &lt;a href="https://www.opencontainers.org/"&gt;OCI&lt;/a&gt; compliant images through declarative specifications, allowing you to pick between &lt;a href="https://github.com/containers/buildah"&gt;Buildah&lt;/a&gt; or &lt;a href="https://docs.docker.com/"&gt;Docker&lt;/a&gt; as the container build tool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ocictl init
ocictl build        &lt;span class="c"&gt;# using docker&lt;/span&gt;
ocictl build &lt;span class="nt"&gt;--builder&lt;/span&gt; buildah      &lt;span class="c"&gt;# using buildah&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Doc link: &lt;a href="https://ocibuilder.github.io/docs/quickstart/"&gt;https://ocibuilder.github.io/docs/quickstart/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, there are many ways to build an image and run it practically anywhere we want. Comment down below if you know of any other ways to build one.&lt;/p&gt;

&lt;h4&gt;
  
  
  Help Links:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.docker.com"&gt;https://www.docker.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ocibuilder.github.io/docs/quickstart"&gt;https://ocibuilder.github.io/docs/quickstart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/opencontainers/image-spec"&gt;https://github.com/opencontainers/image-spec&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>container</category>
      <category>build</category>
      <category>image</category>
    </item>
    <item>
      <title>Docker centralized logging using Fluent Bit, Grafana and Loki</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Wed, 13 May 2020 14:26:45 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/docker-container-logs-using-fluentd-and-grafana-loki-a15</link>
      <guid>https://dev.to/thakkaryash94/docker-container-logs-using-fluentd-and-grafana-loki-a15</guid>
      <description>&lt;p&gt;When running microservices as containers, monitoring becomes very complex and difficult. That's where Prometheus and Grafana come to the rescue. Prometheus collects the metrics data, and Grafana helps us turn those metrics into beautiful visuals. Grafana allows you to query, visualize, and alert on metrics, no matter where they are stored. We can visualize metrics like CPU usage, memory usage, container count, and much more. But there are a few things we can't visualize this way, such as container logs, which need to be shown as text in a tabular format. For that, we can set up an EFK (Elasticsearch + Fluentd + Kibana) stack: Fluentd collects logs from the Docker containers and forwards them to Elasticsearch, and then we can search the logs using Kibana.&lt;/p&gt;

&lt;p&gt;The Grafana team has released Loki, which is inspired by Prometheus, to solve this issue. Now we don't need to manage multiple stacks to monitor a running system: Grafana and Prometheus for monitoring, plus EFK for checking logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Grafana Loki
&lt;/h3&gt;

&lt;p&gt;Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream. It uses labels from the log data to query.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fluent Bit
&lt;/h3&gt;

&lt;p&gt;Fluent Bit is an open-source and multi-platform Log Processor and Forwarder which allows you to collect data/logs from different sources, unify and send them to multiple destinations. It's fully compatible with Docker and Kubernetes environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Fluentd logging driver
&lt;/h3&gt;

&lt;p&gt;The Fluentd logging driver sends container logs to the Fluentd collector as structured log data. Then, users can use any of the various output plugins of Fluentd to write these logs to various destinations.&lt;/p&gt;

&lt;p&gt;We are going to use Fluent Bit to collect the Docker container logs, forward them to Loki, and then visualize the logs in Grafana in a tabular view.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;We need to set up Grafana, Loki, and fluent/fluent-bit to collect the Docker container logs using the fluentd logging driver. Clone the sample project from &lt;a href="https://github.com/thakkaryash94/docker-grafana-loki-fluent-bit-sample" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It contains the files below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker-compose-grafana.yml&lt;/li&gt;
&lt;li&gt;docker-compose-fluent-bit.yml&lt;/li&gt;
&lt;li&gt;fluent-bit.conf&lt;/li&gt;
&lt;li&gt;docker-compose-app.yml&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We could combine all the yml files into one, but I like to keep them separated by service group, much like Kubernetes yml files. Let's see what we have in those files.&lt;/p&gt;

&lt;p&gt;Before running the Docker services, we need to create an external network &lt;code&gt;loki&lt;/code&gt;; because our services are defined in different files, they will communicate over this network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker network create loki
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;docker-compose-grafana.yml&lt;/strong&gt;&lt;br&gt;
This file contains the Grafana, Loki, and renderer services. Run &lt;code&gt;docker-compose -f docker-compose-grafana.yml up -d&lt;/code&gt;. This will start three containers: grafana, renderer, and loki. We will use the Grafana dashboard for visualization and Loki to collect data from the fluent-bit service. Now go to &lt;a href="http://localhost:3000/" rel="noopener noreferrer"&gt;http://localhost:3000/&lt;/a&gt; and you will be able to access the Grafana dashboard.&lt;/p&gt;
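
&lt;p&gt;For reference, a minimal sketch of what &lt;code&gt;docker-compose-grafana.yml&lt;/code&gt; can look like (the image tags and port mappings here are assumptions based on the description above; see the sample repo for the exact file):&lt;/p&gt;

```yaml
# Illustrative sketch of docker-compose-grafana.yml.
# All services join the external "loki" network created earlier.
version: "3.7"

services:
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    networks:
      - loki

  loki:
    image: grafana/loki:latest
    container_name: loki
    ports:
      - "3100:3100"
    networks:
      - loki

  renderer:
    image: grafana/grafana-image-renderer:latest
    container_name: renderer
    networks:
      - loki

networks:
  loki:
    external: true
```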

&lt;p&gt;&lt;strong&gt;docker-compose-fluent-bit.yml&lt;/strong&gt;&lt;br&gt;
We will use the &lt;code&gt;grafana/fluent-bit-plugin-loki:latest&lt;/code&gt; image instead of the stock fluent-bit image to collect Docker container logs because it contains the Loki plugin, which sends container logs to the Loki service. For that, we need to pass the &lt;code&gt;LOKI_URL&lt;/code&gt; environment variable to the container and also mount &lt;code&gt;fluent-bit.conf&lt;/code&gt; for custom configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;fluent-bit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana/fluent-bit-plugin-loki:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;LOKI_URL=http://loki:3100/loki/api/v1/push&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;fluent-bit.conf&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This file contains the Fluent Bit configuration. For input, we listen on 0.0.0.0:24224 and forward whatever we receive to the output plugin. We set a few Loki options such as LabelKeys, LineFormat, LogLevel, and Url. The key setting is LabelKeys: we set it to &lt;code&gt;container_name&lt;/code&gt; so that, when we run our services, we pass &lt;code&gt;container_name&lt;/code&gt; in the docker-compose file and can then use that name to search for and differentiate container logs. We can add as many LabelKeys as we want, separated by commas.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[INPUT]
    Name        forward
    Listen      0.0.0.0
    Port        24224
[Output]
    Name loki
    Match *
    Url ${LOKI_URL}
    RemoveKeys source
    Labels {job="fluent-bit"}
    LabelKeys container_name
    BatchWait 1
    BatchSize 1001024
    LineFormat json
    LogLevel info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's run it with &lt;code&gt;docker-compose -f docker-compose-fluent-bit.yml up -d&lt;/code&gt;. This will start the fluent-bit container, which will collect the Docker container logs (everything printed to stdout) and forward them to the Loki service using the Loki plugin.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;docker-compose-app.yml&lt;/strong&gt;&lt;br&gt;
It contains the actual application/server service. You can skip this file, but we have to add the configuration below to our server to forward container logs to the fluent-bit container. The important parts of the configuration are &lt;code&gt;container_name&lt;/code&gt; and &lt;code&gt;logging&lt;/code&gt;. &lt;code&gt;container_name&lt;/code&gt; is what we will use to filter the container logs in the Grafana dashboard. In &lt;code&gt;fluentd-address&lt;/code&gt;, set your Fluent Bit host's IP address; if you are running locally, it will be your machine's IP address.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;express-app&lt;/span&gt;
    &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluentd&lt;/span&gt;
      &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;fluentd-address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FLUENT_BIT_ADDRESS:24224&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, everything is up and running. Let's generate some logs: if you are running the docker-compose-app.yml file, go to &lt;a href="http://localhost:4000" rel="noopener noreferrer"&gt;http://localhost:4000&lt;/a&gt; and refresh a few times, then go to &lt;a href="http://localhost:4000/test" rel="noopener noreferrer"&gt;http://localhost:4000/test&lt;/a&gt;; this will generate some logs.&lt;/p&gt;
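
&lt;p&gt;If you prefer to script the refreshes instead of clicking around, a small loop works too (this is just a sketch; it assumes the app from docker-compose-app.yml is listening on localhost:4000 and that curl is installed):&lt;/p&gt;

```shell
# Generate some sample traffic so log lines show up in Loki.
# Assumes the demo app is listening on localhost:4000.
urls="http://localhost:4000/ http://localhost:4000/test"
for i in 1 2 3 4 5; do
  for url in $urls; do
    # -s silences progress output; -o /dev/null discards the body.
    # "|| true" keeps the loop going even if a request fails.
    curl -s -o /dev/null "$url" || true
  done
done
echo "done generating traffic"
```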

&lt;p&gt;So, from the Docker container, logs are sent to the fluent-bit container, which forwards them to the Loki container using the Loki plugin. Next, we need to add Loki as a Grafana data source so that Grafana can fetch the logs from Loki and we can see them on the dashboard.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup Grafana Dashboard
&lt;/h4&gt;

&lt;p&gt;To see the logs on the Grafana dashboard, you can follow the &lt;a href="https://youtu.be/qE6hEHNH9dE?t=73" rel="noopener noreferrer"&gt;YouTube video&lt;/a&gt; or the steps below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the browser and go to &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;, use default values &lt;code&gt;admin&lt;/code&gt; and &lt;code&gt;admin&lt;/code&gt; for username and password.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, go to &lt;a href="http://localhost:3000/datasources" rel="noopener noreferrer"&gt;http://localhost:3000/datasources&lt;/a&gt; and select &lt;code&gt;Loki&lt;/code&gt; from &lt;code&gt;Logging and document databases&lt;/code&gt; section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter &lt;code&gt;http://loki:3100&lt;/code&gt; in URL under the &lt;code&gt;HTTP&lt;/code&gt; section. We can do this because we are running Loki and Grafana in the same &lt;code&gt;loki&lt;/code&gt; network; otherwise, you have to enter the host IP address and port here. Then click the &lt;code&gt;Save and Test&lt;/code&gt; button at the bottom of the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fthakkaryash94%2Fdocker-grafana-loki-fluent-bit-sample%2Fmaster%2Fdocs%2Fimg%2Fdatasource.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fthakkaryash94%2Fdocker-grafana-loki-fluent-bit-sample%2Fmaster%2Fdocs%2Fimg%2Fdatasource.png" alt="Grafana Data Source"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, go to the 3rd tab &lt;code&gt;Explore&lt;/code&gt; in the left sidebar or &lt;a href="http://localhost:3000/explore" rel="noopener noreferrer"&gt;http://localhost:3000/explore&lt;/a&gt; and click on the &lt;code&gt;Log Labels&lt;/code&gt; dropdown. Here you will see the &lt;code&gt;container_name&lt;/code&gt; and &lt;code&gt;job&lt;/code&gt; labels; these are the same labels that we specified in the &lt;code&gt;fluent-bit.conf&lt;/code&gt; file with the &lt;code&gt;LabelKeys&lt;/code&gt; key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on &lt;code&gt;container_name&lt;/code&gt;; you should now see our app service's container name in the next step. If not, type &lt;code&gt;{container_name="express-app"}&lt;/code&gt; in the Loki query search. Click on that and that's it: you should now be able to see the container logs, i.e. the logs that we generated after starting up our app service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fthakkaryash94%2Fdocker-grafana-loki-fluent-bit-sample%2Fmaster%2Fdocs%2Fimg%2Fexplore.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fthakkaryash94%2Fdocker-grafana-loki-fluent-bit-sample%2Fmaster%2Fdocs%2Fimg%2Fexplore.png" alt="Grafana Explore"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
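&lt;p&gt;Instead of adding the data source by hand in step 3, Grafana can also provision it from a file at startup; a minimal sketch, assuming the file is mounted into the container's provisioning folder:&lt;/p&gt;

```yaml
# mounted at /etc/grafana/provisioning/datasources/loki.yml (path assumed)
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100   # reachable because Grafana and Loki share the loki network
```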

&lt;p&gt;Now, we can tweak the view, add it to our Grafana dashboard, and that's it.&lt;/p&gt;

&lt;p&gt;So, like this, we have set up the fluent-bit/Grafana/Loki stack to collect and view the container logs on the Grafana dashboard.&lt;/p&gt;

&lt;p&gt;Below are the links mentioned in the blog that will help you set up the stack.&lt;/p&gt;

&lt;p&gt;Links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/thakkaryash94/docker-grafana-loki-fluent-bit-sample" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://youtu.be/qE6hEHNH9dE?t=73" rel="noopener noreferrer"&gt;Getting started with Grafana Loki - under 4 minutes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/grafana/loki/tree/master/cmd/fluent-bit" rel="noopener noreferrer"&gt;Loki GitHub Repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/grafana/loki/tree/master/cmd/fluent-bit" rel="noopener noreferrer"&gt;Loki Fluent Bit GitHub Repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>grafana</category>
      <category>docker</category>
      <category>fluentd</category>
      <category>logs</category>
    </item>
    <item>
      <title>CockroachDB auto-backup with Docker</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Mon, 04 May 2020 22:09:24 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/cockroachdb-auto-backup-with-docker-1m5c</link>
      <guid>https://dev.to/thakkaryash94/cockroachdb-auto-backup-with-docker-1m5c</guid>
      <description>&lt;p&gt;CockroachDB is one of the most popular cloud-native databases. CockroachDB is an ACID compliant, relational database that’s wire compatible with PostgreSQL. CockroachDB delivers full ACID transactions at scale even in a distributed environment and guarantees serializable isolation in a cloud-neutral distributed database. We can deploy it using Docker and Kubernetes without any issue. CockroachDB delivers on the key cloud-native primitives of horizontal scale, no single points of failure, survivability, automatable operations, and no platform-specific encumbrances.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to run CockroachDB?
&lt;/h3&gt;

&lt;p&gt;We can run CockroachDB using a single executable binary or using Docker and Kubernetes. We will see how to run CockroachDB in Docker and how to take a backup as well. You can follow this blog or the &lt;a href="https://www.cockroachlabs.com/docs/stable/start-a-local-cluster-in-docker-mac.html"&gt;Docs&lt;/a&gt; and then follow the backup instructions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Start CockroachDB using Docker
&lt;/h4&gt;

&lt;p&gt;We can run a single- or multi-node cluster with Docker, but for development, we will use a single-node cluster only.&lt;/p&gt;

&lt;p&gt;The command below will pull the &lt;code&gt;latest&lt;/code&gt; CockroachDB image from Docker Hub, publish ports 26257 and 8080, and bind the volume to cockroach_data/roach1. Run the command and go to &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt;; you will be able to see the CockroachDB dashboard.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;roach1 &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--hostname&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;roach1 &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-p&lt;/span&gt; 26257:26257 &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080  &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/cockroach_data/roach1:/cockroach/cockroach-data"&lt;/span&gt;  &lt;span class="se"&gt;\&lt;/span&gt;
      cockroachdb/cockroach:latest start &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--insecure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, our CockroachDB database server is up and running. Let's create some data using the workload init command (you can also load a dump of an existing database). Run the commands below to do that. Here are the &lt;a href="https://www.cockroachlabs.com/docs/stable/learn-cockroachdb-sql.html"&gt;docs&lt;/a&gt; explaining in detail what these commands do.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; roach1 bash

./cockroach workload init movr &lt;span class="s1"&gt;'postgresql://root@localhost:26257?sslmode=disable'&lt;/span&gt;
./cockroach sql &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;--host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost:26257
USE movr&lt;span class="p"&gt;;&lt;/span&gt;
SHOW tables&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, this will create a database &lt;code&gt;movr&lt;/code&gt; with some tables and records. You should see the output below in your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;          table_name
+----------------------------+
  promo_codes
  rides
  user_promo_codes
  &lt;span class="nb"&gt;users
  &lt;/span&gt;vehicle_location_histories
  vehicles
&lt;span class="o"&gt;(&lt;/span&gt;6 rows&lt;span class="o"&gt;)&lt;/span&gt;

Time: 9.579561ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means we have successfully generated demo data, and it's time to take a backup of it. Run &lt;code&gt;exit&lt;/code&gt; twice to get out of the container: the first exits the cockroach sql shell, and the second exits the container.&lt;/p&gt;

&lt;h3&gt;
  
  
  CockroachDB Database backup
&lt;/h3&gt;

&lt;p&gt;Our database is ready, so now it's time to take a backup. There are two ways to do that.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. BACKUP
&lt;/h4&gt;

&lt;p&gt;CockroachDB's BACKUP statement allows you to create full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp. Backups can be with or without revision history.&lt;/p&gt;

&lt;p&gt;This approach has many advantages. We can choose between a full backup and an incremental backup, automate backups with &lt;a href="https://www.cockroachlabs.com/docs/stable/backup.html#viewing-and-controlling-backups-jobs"&gt;JOBS&lt;/a&gt;, upload to Amazon, Azure, Google Cloud, NFS, or any S3-compatible service, back up a single table or view, and much more. You can read more about it &lt;a href="https://www.cockroachlabs.com/docs/stable/backup.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
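&lt;p&gt;For reference, a sketch of what an enterprise &lt;code&gt;BACKUP&lt;/code&gt; to an S3 bucket might look like; the bucket name and credentials are placeholders, and the exact URL parameters depend on your provider:&lt;/p&gt;

```sql
-- Run inside the cockroach sql shell (enterprise feature; placeholders below).
BACKUP DATABASE movr
TO 's3://BUCKET_NAME/movr-backups?AWS_ACCESS_KEY_ID=KEY&amp;AWS_SECRET_ACCESS_KEY=SECRET';
```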

&lt;p&gt;This happens inside the CockroachDB environment, so CockroachDB has full control over it.&lt;/p&gt;

&lt;p&gt;The only disadvantage is that this is available only for &lt;a href="https://www.cockroachlabs.com/product/cockroachdb/"&gt;enterprise&lt;/a&gt; users. This means that if we are running CockroachDB locally or on a small server, where we may not want enterprise support, we can't use this feature. We can use CockroachCloud if we are running it on a small scale and planning to scale it in the future. CockroachCloud provides a fully hosted and managed, self-service platform with enterprise features and basic support. You can read more &lt;a href="https://www.cockroachlabs.com/pricing/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So how can we do it without the BACKUP feature, which is available only to &lt;code&gt;CockroachCloud&lt;/code&gt; and &lt;code&gt;CockroachDB Enterprise&lt;/code&gt; users?&lt;/p&gt;

&lt;h4&gt;
  
  
  2. cockroach dump
&lt;/h4&gt;

&lt;p&gt;CockroachDB provides the &lt;code&gt;dump&lt;/code&gt; command, which is similar to pg_dump. The cockroach dump command outputs the SQL statements required to recreate tables, views, and sequences. It can be used to back up or export each database in a cluster, and the output should be suitable for importing into other relational databases with minimal adjustments. You can read more &lt;a href="https://www.cockroachlabs.com/docs/stable/cockroach-dump.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That's great, so now all we need to do is run a cron job that executes &lt;code&gt;cockroach dump&lt;/code&gt; whenever we want, and that's it.&lt;/p&gt;
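&lt;p&gt;Such a cron setup could look like the sketch below; the paths and connection flags are assumptions for an insecure local node:&lt;/p&gt;

```shell
#!/bin/sh
# backup.sh (sketch) - dump the movr database to a dated SQL file
cockroach dump movr --insecure --host=localhost:26257 > "/backups/movr-$(date +%F).sql"

# crontab entry: run the script every day at midnight
# 0 0 * * * /path/to/backup.sh
```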

&lt;p&gt;But there are many things to think about, like how and where we are going to store our backup, how to take an on-demand backup, and so on.&lt;/p&gt;

&lt;p&gt;For exactly that, I have created an open-source Docker image with the features above. Let's go through them one by one.&lt;/p&gt;

&lt;h5&gt;
  
  
  Features
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Customize cron with CRON_SCHEDULE env. &lt;a href="https://godoc.org/github.com/robfig/cron"&gt;https://godoc.org/github.com/robfig/cron&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Manual backup at any time&lt;/li&gt;
&lt;li&gt;Optional backup upload to AWS S3/Spaces: if you provide ACCESS_KEY_ID, it assumes you want the backup uploaded to S3, Spaces, or anything compatible with the S3 API.&lt;/li&gt;
&lt;li&gt;Support for all cockroach image environment variables; you can override COCKROACH_USER, COCKROACH_INSECURE, etc. &lt;a href="https://www.cockroachlabs.com/docs/v19.2/cockroach-dump.html#client-connection"&gt;docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;It exposes /data as a volume, which contains the backup zip file, so we can take the backup from here if we don't want to upload it to any S3 service.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Environment Variables
&lt;/h5&gt;

&lt;h6&gt;
  
  
  Required:
&lt;/h6&gt;

&lt;ul&gt;
&lt;li&gt;COCKROACH_DATABASE: Database name&lt;/li&gt;
&lt;li&gt;CRON_SCHEDULE: Cron value in double quotes. &lt;a href="https://godoc.org/github.com/robfig/cron"&gt;https://godoc.org/github.com/robfig/cron&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  Optional:
&lt;/h6&gt;

&lt;ul&gt;
&lt;li&gt;ACCESS_KEY_ID: Spaces access key id&lt;/li&gt;
&lt;li&gt;BUCKET_NAME: Spaces bucket name&lt;/li&gt;
&lt;li&gt;S3_URL: S3 endpoint, e.g. AWS S3 (s3.ap-south-1.amazonaws.com) or DO Spaces (nyc3.digitaloceanspaces.com)&lt;/li&gt;
&lt;li&gt;SECRET_ACCESS_KEY: Spaces secret access key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let's look at how we can run our Docker image to take a backup. Below is a sample showing how to run the Docker container, which will take a backup of our movr database every day and upload it to an S3 service such as AWS S3 or DigitalOcean Spaces.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--name&lt;/span&gt; cockroach-backup &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/data:/data &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/cockroach-certs:/cockroach-certs &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ACCESS_KEY_ID &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;BUCKET_NAME &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;S3_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;S3_URL &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;SECRET_ACCESS_KEY &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;COCKROACH_DATABASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;movr &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;COCKROACH_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;CRON_SCHEDULE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"0 0 * * *"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;COCKROACH_INSECURE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;COCKROACH_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root
      docker.pkg.github.com/thakkaryash94/docker-cockroachdb-backup/docker-cockroachdb-backup:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the above command, we run our backup container named &lt;code&gt;cockroach-backup&lt;/code&gt; with a &lt;code&gt;/data&lt;/code&gt; volume, which will contain all the backup files; with ACCESS_KEY_ID set, we can upload them wherever we want. It supports every client connection param.&lt;/p&gt;

&lt;p&gt;Now, let's say you want to take a database backup at the current moment; you can run the command below to trigger a manual backup as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:9000/backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These features are already available, and a few more are already on the list. Feel free to open an issue to request more features.&lt;/p&gt;

&lt;p&gt;This is just a Docker image, so it works with Kubernetes as well.&lt;/p&gt;

&lt;p&gt;Upcoming features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flags support&lt;/li&gt;
&lt;li&gt;Multiple database backup support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This is an open-source project under the MIT license. It is my first Golang project and I am a newbie, so I may have made mistakes. Issues and pull requests are most welcome.&lt;/p&gt;

&lt;p&gt;Links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/thakkaryash94/docker-cockroachdb-backup"&gt;Docker CockroachDB Backup GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cockroachlabs.com/docs/stable/learn-cockroachdb-sql.html"&gt;Learn CockroachDB SQL | CockroachDB Docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cockroachdb</category>
      <category>docker</category>
      <category>backup</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes auto-deployment using Okteto, Skaffold &amp; GitLab CI/CD</title>
      <dc:creator>Yash Thakkar</dc:creator>
      <pubDate>Mon, 03 Feb 2020 19:08:20 +0000</pubDate>
      <link>https://dev.to/thakkaryash94/kubernetes-auto-deployment-using-okteto-skaffold-gitlab-ci-cd-c84</link>
      <guid>https://dev.to/thakkaryash94/kubernetes-auto-deployment-using-okteto-skaffold-gitlab-ci-cd-c84</guid>
      <description>&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;CI/CD is a process that never ends. Previously, we used to auto-deploy our applications to VMs by writing scripts that ssh into the remote server and deploy there. Then containers arrived: we wrapped our code in containers and wrote scripts that build the Docker image and deploy it on servers by stopping the existing services and starting up the new images. For K8s, things are a little different. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;kubectl is the new ssh.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://twitter.com/kelseyhightower/status/1070413458045202433" rel="noopener noreferrer"&gt;https://twitter.com/kelseyhightower/status/1070413458045202433&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/kelseyhightower" rel="noopener noreferrer"&gt;@kelseyhightower&lt;/a&gt; said this. In k8s, we don't care about our instances or infrastructure. We perform all the operations using kubectl, like deploying, upgrading, accessing the application. So to auto-deploy our application from CI/CD, we need K8s cluster, so let's set it up first. For the demo, we will be using &lt;a href="https://cloud.okteto.com" rel="noopener noreferrer"&gt;Okteto Cloud&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Okteto
&lt;/h3&gt;

&lt;p&gt;Okteto provides a free K8s cluster. There are two ways to set up our k8s cluster: CLI and GUI.&lt;/p&gt;

&lt;h4&gt;
  
  
  CLI
&lt;/h4&gt;

&lt;p&gt;Go to &lt;a href="https://okteto.com/docs/getting-started/#Step-1-Install-the-Okteto-CLI" rel="noopener noreferrer"&gt;Okteto CLI&lt;/a&gt; page and follow the instructions to setup k8s cluster using CLI.&lt;/p&gt;

&lt;h4&gt;
  
  
  GUI
&lt;/h4&gt;

&lt;p&gt;Go to &lt;a href="https://cloud.okteto.com" rel="noopener noreferrer"&gt;Okteto Cloud&lt;/a&gt;, log in using your GitHub account and create namespace accordingly. In my case, my Okteto namespace will be &lt;code&gt;thakkaryash94&lt;/code&gt;. That's it. Now, we have a K8s cluster with our namespace. Now click on &lt;code&gt;Credentials&lt;/code&gt; from the left sidebar. This will download &lt;code&gt;okteto-kube.config&lt;/code&gt; file. This is actually our kube-config file, we can access our namespace by using this config file. Now, we need to set up our application, so that we can deploy it on Okteto cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  Skaffold
&lt;/h3&gt;

&lt;p&gt;"Skaffold handles the workflow for building, pushing and deploying your application, allowing you to focus on what matters most: writing code." - This statement is taken directly from the website.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lightweight: client-side only, no on-cluster component&lt;/li&gt;
&lt;li&gt;Works Everywhere: you can use profiles, local user config, environment variables&lt;/li&gt;
&lt;li&gt;Feature Rich: Kubernetes-native development, including policy-based image tagging, resource port-forwarding and logging, file syncing, and much more&lt;/li&gt;
&lt;li&gt;Optimized Development: instant feedback while developing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We have finalized the tools that we need in our CI/CD pipeline: kubectl and Skaffold. Our k8s cluster is also ready to receive the deployments. Now, we need to set up our CI/CD pipeline. We will be using the GitLab CI/CD pipeline because I find it very easy and convenient to set up.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setup:
&lt;/h4&gt;

&lt;p&gt;We need to download Skaffold on our local machine. Follow this &lt;a href="https://skaffold.dev/docs/install/" rel="noopener noreferrer"&gt;link&lt;/a&gt; and set it up as per your OS. Skaffold provides 5 pipeline stages: Build, Test, Tag, Render, and Deploy. We can use Skaffold for local development as well, with minikube. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fskaffold.dev%2Fimages%2Fworkflow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fskaffold.dev%2Fimages%2Fworkflow.png" alt="Skaffold Workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Project folder structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;skaffold-example

&lt;ul&gt;
&lt;li&gt;backend (actual application)&lt;/li&gt;
&lt;li&gt;k8s

&lt;ul&gt;
&lt;li&gt;deployment.yaml&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;.gitlab-ci.yml&lt;/li&gt;

&lt;li&gt;skaffold.yaml&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;You can use any existing Docker project for this (yes, it definitely requires a &lt;code&gt;Dockerfile&lt;/code&gt;) or any of the &lt;a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples" rel="noopener noreferrer"&gt;example projects&lt;/a&gt;. We will be deploying the nodejs example for the demo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;Keep your application folder under &lt;code&gt;backend&lt;/code&gt; folder.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;skaffold.yaml
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: skaffold/v2alpha2
kind: Config
build:
  tagPolicy:
    gitCommit: {}       # use git commit policy
  artifacts:
  - image: registry.gitlab.com/thakkaryash94/skaffold-example
    context: backend
    sync:
      manual:
      # Sync all the javascript files that are in the src folder
      # with the container src folder
      - src: 'src/**/*.js'
        dest: .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ skaffold dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Skaffold will now use the &lt;code&gt;skaffold.yaml&lt;/code&gt; file and start a local dev environment with the nodejs Docker container. Let's break down our yaml config file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;tagPolicy: There are various tag policies Skaffold provides: gitCommit, sha256, envTemplate, dateTime&lt;/li&gt;
&lt;li&gt;context: Actual folder path. In our case, it is the &lt;code&gt;backend&lt;/code&gt; folder.&lt;/li&gt;
&lt;li&gt;sync: There are two modes.

&lt;ul&gt;
&lt;li&gt;Inferred sync mode: only need to specify which files are eligible for syncing in the sync rules.&lt;/li&gt;
&lt;li&gt;Manual sync mode: A manual sync rule must specify the &lt;code&gt;src&lt;/code&gt; and &lt;code&gt;dest&lt;/code&gt; field. The &lt;code&gt;src&lt;/code&gt; field is a glob pattern to match files relative to the artifact &lt;em&gt;context&lt;/em&gt; directory, which may contain &lt;code&gt;**&lt;/code&gt; to match nested files.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;We will be using manual mode for finer control over which files are synced.&lt;/p&gt;

&lt;p&gt;When we execute &lt;code&gt;skaffold deploy&lt;/code&gt; or &lt;code&gt;skaffold run&lt;/code&gt;, by default it will look for &lt;code&gt;k8s/*.yaml&lt;/code&gt; files and apply all the configs. We can change this by adding &lt;code&gt;manifests&lt;/code&gt; in the &lt;code&gt;skaffold.yaml&lt;/code&gt; file.&lt;/p&gt;
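&lt;p&gt;For example, to point Skaffold at a different manifest folder, we could add a &lt;code&gt;deploy&lt;/code&gt; section like the sketch below (the &lt;code&gt;manifests/&lt;/code&gt; path is an assumption):&lt;/p&gt;

```yaml
deploy:
  kubectl:
    manifests:
    - manifests/*.yaml   # overrides the default k8s/*.yaml glob
```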

&lt;ul&gt;
&lt;li&gt;k8s/deployment.yaml
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: node
  annotations:
    dev.okteto.com/auto-ingress: "true"
spec:
  type: ClusterIP
  ports:
  - name: "node"
    port: 3000
  selector:
    app: node
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
spec:
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
    spec:
      containers:
      - name: skaffold-example
        image: registry.gitlab.com/thakkaryash94/skaffold-example
        ports:
        - containerPort: 3000
      imagePullSecrets:
        - name: gitlab-secret        # private registry secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our development setup is ready; it's time to make it live on our Okteto k8s cluster. To do that, we will be using the GitLab CI/CD pipeline to build and run the application container.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitLab CI
&lt;/h3&gt;

&lt;p&gt;GitLab CI/CD is a tool built into GitLab for software development through the  &lt;a href="https://docs.gitlab.com/ce/ci/introduction/index.html#introduction-to-cicd-methodologies" rel="noopener noreferrer"&gt;continuous methodologies&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Continuous Integration (CI)&lt;/li&gt;
&lt;li&gt;  Continuous Delivery (CD)&lt;/li&gt;
&lt;li&gt;  Continuous Deployment (CD)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let's configure the pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, we will encode our kube-config file into base64 and store it in GitLab under Project &amp;gt; Settings &amp;gt; CI/CD &amp;gt; Variables, in a variable named &lt;code&gt;KUBE_CONFIG&lt;/code&gt;. When our pipeline runs, it will pick up the variable data and store it in a config file. Run the command below to get the base64 value for our Okteto Kubernetes namespace.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat ~/Downloads/okteto-kube.config | base64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now, the most important &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: docker

services:
 - docker:dind

stages:
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  KUBE_CONFIG_FILE: /etc/deploy/config

deploy:
  stage: deploy
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - mkdir -p /etc/deploy                                  # Create a folder for config file
    - echo ${KUBE_CONFIG} | base64 -d &amp;gt; ${KUBE_CONFIG_FILE}      # Write kubernetes config in config file
    - apk add --update --no-cache curl git     # Install dependencies
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl    # Download kubectl binary
    - chmod +x ./kubectl
    - mv ./kubectl /usr/local/bin/kubectl
    - curl -Lo skaffold https://storage.googleapis.com/skaffold/builds/latest/skaffold-linux-amd64              # Download skaffold binary
    - chmod +x skaffold
    - ./skaffold run --kubeconfig /etc/deploy/config       # actual build, tag, push and deploy command
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we push our code to GitLab. It will automatically read the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file and start running the pipeline for us. Depending on the Dockerfile steps, it may take a few minutes.&lt;/p&gt;

&lt;p&gt;After the job finishes successfully, go to the Okteto Cloud dashboard; you should see our application deployment with &lt;code&gt;running&lt;/code&gt; status. Click on the link and you should be able to access the application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Okteto automatically creates and adds the ingress for us based on the service config in our &lt;code&gt;deployment.yaml&lt;/code&gt; file. &lt;/p&gt;

&lt;h4&gt;
  
  
  Help links:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://gitlab.com/thakkaryash94/skaffold-example" rel="noopener noreferrer"&gt;https://gitlab.com/thakkaryash94/skaffold-example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://okteto.com/docs/getting-started/index.html" rel="noopener noreferrer"&gt;https://okteto.com/docs/getting-started/index.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples/nodejs" rel="noopener noreferrer"&gt;https://github.com/GoogleContainerTools/skaffold/tree/master/examples/nodejs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>skaffold</category>
      <category>okteto</category>
      <category>gitlab</category>
    </item>
  </channel>
</rss>
