<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jinwook Baek</title>
    <description>The latest articles on DEV Community by Jinwook Baek (@kokospapa8).</description>
    <link>https://dev.to/kokospapa8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F367659%2F1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg</url>
      <title>DEV Community: Jinwook Baek</title>
      <link>https://dev.to/kokospapa8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kokospapa8"/>
    <language>en</language>
    <item>
      <title>Majestic Monolith Django</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Sun, 17 Apr 2022 07:34:50 +0000</pubDate>
      <link>https://dev.to/kokospapa8/majestic-monolith-django-3690</link>
      <guid>https://dev.to/kokospapa8/majestic-monolith-django-3690</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap1h3bvum61mdt1fgewd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap1h3bvum61mdt1fgewd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;




&lt;p&gt;&lt;a href="https://github.com/kokospapa8/majestic-monolith-django" rel="noopener noreferrer"&gt;https://github.com/kokospapa8/majestic-monolith-django&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This project is a starter Django repo aiming to achieve a majestic &lt;code&gt;modular&lt;/code&gt; monolith architecture. Its main purpose is to provide a scaffolding skeleton and sample architecture for rapid prototyping that can scale to a mid-size application. I have compiled useful techniques and libraries to help build a backend API server.&lt;/p&gt;

&lt;p&gt;Inspired by &lt;a href="https://m.signalvnoise.com/the-majestic-monolith/" rel="noopener noreferrer"&gt;The Majestic Monolith&lt;/a&gt; and &lt;a href="https://www.feldroy.com/books/two-scoops-of-django-3-x" rel="noopener noreferrer"&gt;Two Scoops of Django&lt;/a&gt;, this starter code will help you build a scalable application with a small team of developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Majestic monolith
&lt;/h2&gt;




&lt;p&gt;Microservices are everywhere, and there is no doubt they are the next big thing for a company with many developers and a need for concurrent feature releases.&lt;/p&gt;

&lt;p&gt;However, MSA needs a lot of coordination and preparation to make it work. If you are the only developer on the team, or are developing a relatively small to medium scale architecture, MSA can be overwhelming.&lt;/p&gt;

&lt;p&gt;You can reduce cognitive load by following DDD practices. With &lt;strong&gt;code isolation&lt;/strong&gt;, &lt;strong&gt;data isolation&lt;/strong&gt; and some help from cloud architecture, majestic monolith django (MMD) can prepare you for scale and bigger-team coordination.&lt;/p&gt;

&lt;h1&gt;
  
  
  Example application
&lt;/h1&gt;




&lt;p&gt;This repo provides a sample application illustrating the following use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmk4cmagfwbud9fi2o6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmk4cmagfwbud9fi2o6v.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmynjyd1p79g1sh9z9hvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmynjyd1p79g1sh9z9hvq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have broken the application down into four modules (&lt;code&gt;auth&lt;/code&gt;, &lt;code&gt;user&lt;/code&gt;, &lt;code&gt;shipping&lt;/code&gt;, &lt;code&gt;distribution&lt;/code&gt;) using techniques I have used for application prototyping.&lt;/p&gt;

&lt;h1&gt;
  
  
  Module Structure
&lt;/h1&gt;




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;Each domain consists of the following structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse52p037mz29o6a6lwr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse52p037mz29o6a6lwr4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;admin: Django admin&lt;/li&gt;
&lt;li&gt;apps: Django app config&lt;/li&gt;
&lt;li&gt;choices: Enums used in the module&lt;/li&gt;
&lt;li&gt;docs: yasg docs&lt;/li&gt;
&lt;li&gt;events: events emitted from the module&lt;/li&gt;
&lt;li&gt;exceptions: Custom exceptions&lt;/li&gt;
&lt;li&gt;models: Ordinary Django models&lt;/li&gt;
&lt;li&gt;manager: QuerySet managers&lt;/li&gt;
&lt;li&gt;serializers: DRF serializers&lt;/li&gt;
&lt;li&gt;selectors: Queries that require joins&lt;/li&gt;
&lt;li&gt;services: Business logic for the domain

&lt;ul&gt;
&lt;li&gt;domain services (domain-specific business logic)&lt;/li&gt;
&lt;li&gt;application services (UoW)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;schema: API or model schemas for testing&lt;/li&gt;

&lt;li&gt;urls: Django URL resolver&lt;/li&gt;

&lt;li&gt;utils_*: utility methods&lt;/li&gt;

&lt;li&gt;views: DRF views&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;p&gt;In order to achieve a modular structure in a single monolith, we need an event bus for inter-module communication. We also make full use of Lambda compute to unburden the API servers. Python applications typically use Celery beat for heartbeat (cron) processes, but it is much easier to use an EventBridge schedule with a Lambda calling a heartbeat API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgarae5k9auxj7y6wlqom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgarae5k9auxj7y6wlqom.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Eventbus

&lt;ul&gt;
&lt;li&gt;In order to decouple modules, MMD uses a serverless event bus (EventBridge)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;SAM (lambda) for async compute

&lt;ul&gt;
&lt;li&gt;notifications&lt;/li&gt;
&lt;li&gt;long-running tasks&lt;/li&gt;
&lt;li&gt;heartbeat, with a Lambda calling the API&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
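&lt;p&gt;As a minimal sketch (not code from the repo), a module could publish a domain event to EventBridge like this; the bus name, source, and detail-type below are illustrative:&lt;/p&gt;

```python
import json

# Hypothetical event from a shipping module; all names here are illustrative.
def build_event_entry(source, detail_type, detail, bus_name="mmd-event-bus"):
    """Build a single PutEvents entry for EventBridge."""
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(detail),  # EventBridge expects a JSON string
        "EventBusName": bus_name,
    }

entry = build_event_entry(
    "mmd.shipping", "shipping.completed", {"tracking_number": "TRK-001"}
)

# With AWS credentials configured, the entry would be published via boto3:
# import boto3
# boto3.client("events").put_events(Entries=[entry])
```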

&lt;h2&gt;
  
  
  Cookie cutter
&lt;/h2&gt;

&lt;p&gt;You can use cookiecutter to bootstrap the repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;pip&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;cookiecutter&lt;/span&gt;
&lt;span class="nx"&gt;cookiecutter&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//github.com/kokospapa8/majestic-monolith-django.git --checkout cookie-cutter&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Future todo
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Async support&lt;/li&gt;
&lt;li&gt;Dependency injection&lt;/li&gt;
&lt;li&gt;SQS for batch POST request&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  TL;DR
&lt;/h1&gt;

&lt;p&gt;Try out my sample application! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kokospapa8/majestic-monolith-django/" rel="noopener noreferrer"&gt;https://github.com/kokospapa8/majestic-monolith-django/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Migrate log analysis from AWS ElasticSearch to BigQuery</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Fri, 29 Jan 2021 14:17:26 +0000</pubDate>
      <link>https://dev.to/kokospapa8/migrate-log-analysis-from-aws-elasticsearch-to-bigquery-28p</link>
      <guid>https://dev.to/kokospapa8/migrate-log-analysis-from-aws-elasticsearch-to-bigquery-28p</guid>
      <description>&lt;p&gt;&lt;a href="https://blog.kokospapa.com/kokospapa/Migrate-log-analysis-from-AWS-ElasticSearch-to-BigQuery-5e1f145b4fba4c8095df7df65ca612c3" rel="noopener noreferrer"&gt;Original blog post&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I have been using AWS Elasticsearch for near-real-time analysis of API server logs. AWS provides a built-in Elasticsearch subscription filter for CloudWatch, so with less than an hour of effort I could spin up an Elasticsearch cluster to visualize and analyze server logs. However, AWS Elasticsearch is not cheap in a production setup, and as logs accumulate it needs maintenance (lifecycle policies, JVM pressure, etc.). The dev team was also not fully utilizing the Elastic stack. I had been meaning to decommission Elasticsearch and find an alternative for several months, and finally decided it was time to migrate for the following reason.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/kr/blog/why-license-change-AWS" rel="noopener noreferrer"&gt;Amazon: NOT OK - why we had to change Elastic licensing&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After some survey, I listed the requirements for the substitute.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Near-real-time analysis (no batch loads)&lt;/li&gt;
&lt;li&gt;Simple filtering and aggregation by time range, API status, etc.&lt;/li&gt;
&lt;li&gt;Access management&lt;/li&gt;
&lt;li&gt;Support for a visualization tool (a substitute for Kibana)&lt;/li&gt;
&lt;li&gt;No extra setup in server code (no Splunk, Fluentd, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since CloudWatch Logs is the inception point of all the logs (nginx, API logs, etc.), there were a couple of alternatives I could think of.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vanilla CloudWatch (manually searching through logs, with no visualization or query support → nothing else to implement)&lt;/li&gt;
&lt;li&gt;CloudWatch → S3(firehose) → Athena&lt;/li&gt;
&lt;li&gt;CloudWatch → S3(firehose) → Redshift Spectrum&lt;/li&gt;
&lt;li&gt;CloudWatch → Kinesis Stream → Kinesis Analysis&lt;/li&gt;
&lt;li&gt;CloudWatch → Lambda → BigQuery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was initially inclined to choose something from AWS, since I prefer fewer moving pieces and less coding. However, I am already using BigQuery as a data warehouse, and I concluded that if I was going to stash data somewhere, I might as well centralize all of it in a single place. So I chose to go with Lambda + BigQuery. For this post I will use nginx logs as the example, since their format is familiar to most people.&lt;/p&gt;

&lt;h3&gt;
  
  
  Load vs. Stream
&lt;/h3&gt;

&lt;p&gt;Before going into the actual setup, I should acknowledge a couple of things, such as the distinction between &lt;code&gt;Load&lt;/code&gt; and &lt;code&gt;Stream&lt;/code&gt;. A load job ingests a batch of data once or on a recurring schedule. In BigQuery, &lt;code&gt;Load&lt;/code&gt; is free; you are only charged for storage. &lt;code&gt;Stream&lt;/code&gt; inserts, however, are charged at &lt;code&gt;$0.010 per 200 MB&lt;/code&gt;.&lt;/p&gt;
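&lt;p&gt;To put that rate in perspective, a quick back-of-the-envelope sketch (a hypothetical helper using the $0.010 per 200 MB figure above):&lt;/p&gt;

```python
# Back-of-the-envelope streaming-insert cost at $0.010 per 200 MB.
def streaming_cost_usd(total_mb):
    return 0.010 * (total_mb / 200.0)

# e.g. roughly 30 GB of streamed log rows per month:
monthly_cost = streaming_cost_usd(30 * 1024)  # about $1.54
```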

&lt;h3&gt;
  
  
  Prerequisite
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;GCP Account with billing enabled (streaming is disabled on free tier)&lt;/li&gt;
&lt;li&gt;Nginx log on Cloudwatch&lt;/li&gt;
&lt;li&gt;SAM environment setup + Lambda role (refer to previous blog post regarding lambda env setup)
&lt;a href="https://www.notion.so/Migrate-python-async-worker-to-asynchrounous-Lambda-36a0551c835b4bd1b6559b0d9450bb56" rel="noopener noreferrer"&gt;Migrate python async worker to asynchrounous Lambda &lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Setup
&lt;/h1&gt;

&lt;p&gt;I am going to cover the following components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inspect CloudWatch logs&lt;/li&gt;
&lt;li&gt;BigQuery table&lt;/li&gt;
&lt;li&gt;Lambda code&lt;/li&gt;
&lt;li&gt;CloudWatch subscription&lt;/li&gt;
&lt;li&gt;BigQuery + Data Studio&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0z49darc2nxft70oyn76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0z49darc2nxft70oyn76.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  BigQuery Table and Schema
&lt;/h2&gt;

&lt;p&gt;First of all, I need to create a BigQuery table for the nginx logs to be stored in. In order to create the correct schema, let's take a look at how the logs are sent from CloudWatch to the Lambda subscription filter.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nginx log
&lt;/h3&gt;

&lt;p&gt;The default nginx log format looks like this. (My log sample has the remote IP appended at the end.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;log_format&lt;/span&gt; &lt;span class="n"&gt;combined&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$remote_addr - $remote_user [$time_local] &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
                    &lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="s"&gt;$request&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; $status $body_bytes_sent &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
                    &lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="s"&gt;$http_referer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$http_user_agent&lt;/span&gt;&lt;span class="sh"&gt;"'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;sample&lt;/span&gt;
&lt;span class="mf"&gt;172.11&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;1.1111&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Jan&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GET /api/healthcheck/ HTTP/1.1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CRAZY_USER_AGENT/2.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;111.111.111.111&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I need to transform each log line into a certain format before inserting it into the BigQuery table. Thankfully, CloudWatch subscription filters provide filter patterns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html" rel="noopener noreferrer"&gt;Filter and Pattern Syntax&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If I provide a filter pattern like &lt;code&gt;[internal_ip, identity, auth_user, date, request, status, bytes, referer, useragent, remote_addr]&lt;/code&gt;, the subscription filter automatically breaks the log down into JSON as &lt;code&gt;extractedFields&lt;/code&gt;. You can even select only certain logs with a filter like &lt;code&gt;[internal_ip, identity, auth_user, date, request !="**/api/healthcheck/**", status, bytes, referer, useragent, remote_addr]&lt;/code&gt;.&lt;/p&gt;
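&lt;p&gt;For intuition, here is a rough Python equivalent of that extraction (a sketch only; in practice CloudWatch does the parsing, and the regex below merely mirrors the sample log line):&lt;/p&gt;

```python
import re

# Field names mirror the subscription filter pattern above.
FIELDS = ["internal_ip", "identity", "auth_user", "date", "request",
          "status", "bytes", "referer", "useragent", "remote_addr"]

# Rough regex for the combined format with remote IP appended.
LOG_RE = re.compile(
    r'(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d+) (\d+) '
    r'"([^"]*)" "([^"]*)" "([^"]*)"'
)

sample = ('172.11.1.1111 - - [28/Jan/2021:16:24:03 +0000] '
          '"GET /api/healthcheck/ HTTP/1.1" 200 2 "-" '
          '"CRAZY_USER_AGENT/2.0" "111.111.111.111"')

extracted_fields = dict(zip(FIELDS, LOG_RE.match(sample).groups()))
```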

&lt;p&gt;Now that we know the logs will arrive in a certain format, I will create a table with the following schema.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="nb"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;remote_addr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;useragent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;referer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;numeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;auth_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;internal_ip&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="k"&gt;identity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If I need more fields later, I will add them. Make sure the table is partitioned by &lt;code&gt;timestamp&lt;/code&gt; for query performance, and set partitions to expire after a certain time to reduce cost. (BigQuery applies cold-storage pricing to data untouched for a certain period, so you might just want to keep it.)&lt;/p&gt;
&lt;h3&gt;
  
  
  Create BigQuery table
&lt;/h3&gt;

&lt;p&gt;Let's go to the GCP console and create a table with the following schema.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvqq9drl1z64a5byvxch9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvqq9drl1z64a5byvxch9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use Cloud Shell if you prefer the CLI.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bq mk \
--table \
--description description \
--time_partitioning_field timestamp \
--time_partitioning_type DAY \
--label key:value, key:value \
&amp;lt;project_id&amp;gt;:&amp;lt;dataset&amp;gt;.&amp;lt;table&amp;gt; \
id:string,timestamp:timestamp,remote_addr:string,\
useragent:string,referer:string,bytes:numeric,status:string,\
request:string,auth_user:string,date:string,internal_ip:string,\
identity:string,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once the table is created, copy the table name from the details tab: &lt;code&gt;&amp;lt;project_id&amp;gt;.&amp;lt;dataset&amp;gt;.&amp;lt;table-name&amp;gt;&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Service Account
&lt;/h3&gt;

&lt;p&gt;Before going back to AWS to create a Lambda function, we need to create a service account so that Lambda has permission to insert rows into BQ.&lt;/p&gt;

&lt;p&gt;Go to the service accounts page under the IAM menu and click &lt;code&gt;+ create service account&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.cloud.google.com/iam-admin/serviceaccounts" rel="noopener noreferrer"&gt;Google Cloud Platform&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Service account name: lambda_bq_stream&lt;/p&gt;

&lt;p&gt;Permissions: &lt;code&gt;BigQuery Data Editor&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can define a custom role restricted to only the permissions you actually need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bigquery.tables.list&lt;/li&gt;
&lt;li&gt;bigquery.tables.get&lt;/li&gt;
&lt;li&gt;bigquery.tables.updateData&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have created the service account, you can download a JSON key: click &lt;code&gt;Add key -&amp;gt; create new key&lt;/code&gt;. Download it and stash it somewhere safe; we will use this key later.&lt;/p&gt;

&lt;p&gt;Json key sample&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"service_account"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"project_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"****"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"private_key_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"****"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"private_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"****"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"client_email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"****"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"client_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"****"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"auth_uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://accounts.google.com/o/oauth2/auth"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"token_uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://oauth2.googleapis.com/token"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"auth_provider_x509_cert_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.googleapis.com/oauth2/v1/certs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"client_x509_cert_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.googleapis.com/robot/v1/metadata/x509/lambda-bigquery-stream%40hangfive-26bb4.iam.gserviceaccount.com"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I will use SAM for the Lambda deployment; please refer to my previous post for the SAM setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.notion.so/Migrate-python-async-worker-to-asynchrounous-Lambda-36a0551c835b4bd1b6559b0d9450bb56" rel="noopener noreferrer"&gt;Migrate python async worker to asynchrounous Lambda &lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Lambda Function
&lt;/h2&gt;

&lt;p&gt;OK, now let's focus on the code. This function receives a stream of CloudWatch logs and stream-inserts rows into the BQ table using the Google BigQuery Python SDK. I will break the code down into pieces to explain.&lt;/p&gt;
&lt;h3&gt;
  
  
  Library
&lt;/h3&gt;

&lt;p&gt;All you need is a single pip library.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://googleapis.dev/python/bigquery/latest/index.html" rel="noopener noreferrer"&gt;google-cloud-bigquery&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Authentication
&lt;/h3&gt;

&lt;p&gt;The client requires authentication, and there are two ways to authenticate: from a &lt;code&gt;file&lt;/code&gt; or from a JSON &lt;code&gt;dictionary&lt;/code&gt;. There are a couple of options for each.&lt;br&gt;
&lt;a href="https://googleapis.dev/python/google-auth/latest/reference/google.oauth2.service_account.html#google.oauth2.service_account.Credentials.from_service_account_file" rel="noopener noreferrer"&gt;google-auth&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Json file

&lt;ul&gt;
&lt;li&gt;upload the file to S3 and load it from Lambda&lt;/li&gt;
&lt;li&gt;package the JSON with the Lambda code → (x)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://googleapis.dev/python/google-auth/latest/reference/google.oauth2.service_account.html#google.oauth2.service_account.Credentials.from_service_account_info" rel="noopener noreferrer"&gt;google-auth&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dictionary

&lt;ul&gt;
&lt;li&gt;environment variables → I will use these for the purpose of simplicity&lt;/li&gt;
&lt;li&gt;load the key info from Secrets Manager or Parameter Store&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Environment Variable
&lt;/h3&gt;

&lt;p&gt;Although it is better to manage keys with a secrets manager, I will use env vars for the simplicity of this exercise. I will supply the following environment variables from the JSON key I downloaded in the previous section.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  "project_id":
  "private_key_id":
  "private_key":
  "client_email": 
  "client_id": 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
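&lt;p&gt;As a sketch, the client can then be built from those variables with google-auth's &lt;code&gt;from_service_account_info&lt;/code&gt; (the env var names match the block above; note the newline fix for the private key, a common gotcha when keys pass through env vars):&lt;/p&gt;

```python
import os

# Rebuild the service-account info dict from environment variables.
# Keys mirror the downloaded JSON key; token_uri is required by google-auth.
def service_account_info():
    return {
        "type": "service_account",
        "project_id": os.environ["project_id"],
        "private_key_id": os.environ["private_key_id"],
        # env vars often store the key with literal "\n"; restore real newlines
        "private_key": os.environ["private_key"].replace("\\n", "\n"),
        "client_email": os.environ["client_email"],
        "client_id": os.environ["client_id"],
        "token_uri": "https://oauth2.googleapis.com/token",
    }

# With the google-cloud-bigquery package installed:
# from google.oauth2 import service_account
# from google.cloud import bigquery
# creds = service_account.Credentials.from_service_account_info(service_account_info())
# client = bigquery.Client(credentials=creds, project=creds.project_id)
```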

&lt;h3&gt;
  
  
  Decompress and decode cloudwatch logs
&lt;/h3&gt;

&lt;p&gt;When you add a Lambda subscription filter to CloudWatch, the log events are sent as gzipped, base64-encoded data. This is an example of the format. (Don't try to decode this sample; the data shown is broken.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fburthiuaeaodco5mk3b8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fburthiuaeaodco5mk3b8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can decompress and decode the message with the following code.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;compressed_payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;b64decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;cloudwatch_payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;zlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decompress&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;compressed_payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;zlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MAX_WBITS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;json_payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cloudwatch_payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
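&lt;p&gt;Once decoded, the &lt;code&gt;logEvents&lt;/code&gt; can be mapped to rows for a streaming insert. A sketch (the table name is illustrative; &lt;code&gt;insert_rows_json&lt;/code&gt; is the google-cloud-bigquery streaming method):&lt;/p&gt;

```python
import uuid

# Sketch: map decoded CloudWatch logEvents to rows matching the BQ schema.
# json_payload is the dict produced by the decompress step above.
def to_rows(json_payload):
    rows = []
    for event in json_payload["logEvents"]:
        row = dict(event.get("extractedFields", {}))
        row["id"] = str(uuid.uuid4())
        # CloudWatch timestamps are epoch milliseconds; BQ TIMESTAMP takes seconds.
        row["timestamp"] = event["timestamp"] / 1000.0
        rows.append(row)
    return rows

# Streaming insert with the google-cloud-bigquery client (table name illustrative):
# errors = client.insert_rows_json("project.dataset.nginx_access_log", to_rows(json_payload))
```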


&lt;p&gt;The resulting message is decoded into the following example format. Notice that &lt;code&gt;logEvents&lt;/code&gt; holds the list of logs, and that each &lt;code&gt;message&lt;/code&gt; is the raw log line, broken down into &lt;code&gt;extractedFields&lt;/code&gt; according to the supplied filter &lt;code&gt;[internal_ip, identity, auth_user, date, request, status, bytes, referer, useragent, remote_addr]&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'messageType':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'DATA_MESSAGE'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'owner':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="mi"&gt;12341244&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'logGroup':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'/ecs/nginx'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'logStream':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'ecs/nginx/&lt;/span&gt;&lt;span class="mi"&gt;65366&lt;/span&gt;&lt;span class="err"&gt;ed&lt;/span&gt;&lt;span class="mi"&gt;9299&lt;/span&gt;&lt;span class="err"&gt;f&lt;/span&gt;&lt;span class="mi"&gt;4554&lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="err"&gt;cf&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;dbd&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="err"&gt;c&lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="err"&gt;ee&lt;/span&gt;&lt;span class="mi"&gt;08&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'subscriptionFilters':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;'nginx_filter_sample'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'logEvents':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;'id':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="mi"&gt;35945479414329140852051181308733124824760966176424001536&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;'timestamp':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1611851043287&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;'message':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="mf"&gt;172.11&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;1.1111&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="err"&gt;/Jan/&lt;/span&gt;&lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET /api/healthcheck/ HTTP/1.1"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CRAZY_USER_AGENT/2.0"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"111.111.111.111"&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;'extractedFields':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'date':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="mi"&gt;28&lt;/span&gt;&lt;span class="err"&gt;/Jan/&lt;/span&gt;&lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'request':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/api/healthcheck/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;HTTP/&lt;/span&gt;&lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'referer':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'-'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'remote_addr':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'-'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'bytes':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'ip':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="mf"&gt;172.11&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;1.111&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'useragent':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'ELB-HealthChecker/&lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'identity':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'-'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'auth_user':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'-'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;'status':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Skip Control Message
&lt;/h3&gt;

&lt;p&gt;If a control message is passed, we skip the insert.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;json_payload&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;messageType&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;CONTROL_MESSAGE&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Create payload
&lt;/h3&gt;

&lt;p&gt;We will create the payload for the nginx log. Notice I have divided the timestamp by 1000: CloudWatch timestamps are in milliseconds, while BQ expects seconds since the epoch for a timestamp-type column (sub-second precision is preserved as a fraction).&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;rows_to_insert&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;json_payload&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;logEvents&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
    &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;extractedFields&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;extractedFields&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
            &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt;

    &lt;span class="n"&gt;rows_to_insert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
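The millisecond-to-second conversion above can be sanity-checked locally; a quick sketch using the timestamp from the sample logEvent:

```python
from datetime import datetime, timezone

# CloudWatch log event timestamps are milliseconds since the epoch;
# dividing by 1000 yields the epoch-seconds value for the BQ timestamp column.
ts_ms = 1611851043287   # timestamp from the sample logEvent above
ts_s = ts_ms / 1000
print(datetime.fromtimestamp(ts_s, tz=timezone.utc))  # 28/Jan/2021 16:24:03 UTC
```

The printed datetime matches the `date` field extracted from the same log line, which confirms the conversion.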

&lt;h3&gt;
  
  
  Inserting rows
&lt;/h3&gt;

&lt;p&gt;Before inserting to BQ, let's take a quick look at the description of the &lt;code&gt;insert_rows_json&lt;/code&gt; method.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;insert_rows_json&lt;/code&gt; method description
Make sure to read the &lt;code&gt;row_ids&lt;/code&gt; parameter: it supplies one unique identifier per row, which BigQuery uses for best-effort deduplication.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="s2"&gt;"""Insert rows into a table without applying local type conversions.

        See
        https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll

        Args:
            table (Union[ &lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
                google.cloud.bigquery.table.Table &lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
                google.cloud.bigquery.table.TableReference, &lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
                str &lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
            ]):
                The destination table for the row data, or a reference to it.
            json_rows (Sequence[Dict]):
                Row data to be inserted. Keys must match the table schema fields
                and values must be JSON-compatible representations.
            row_ids (Optional[Sequence[Optional[str]]]):
                Unique IDs, one per row being inserted. An ID can also be
                ``None``, indicating that an explicit insert ID should **not**
                be used for that row. If the argument is omitted altogether,
                unique IDs are created automatically.
            skip_invalid_rows (Optional[bool]):
                Insert all valid rows of a request, even if invalid rows exist.
                The default value is ``False``, which causes the entire request
                to fail if any invalid rows exist.
            ignore_unknown_values (Optional[bool]):
                Accept rows that contain values that do not match the schema.
                The unknown values are ignored. Default is ``False``, which
                treats unknown values as errors.
            template_suffix (Optional[str]):
                Treat ``name`` as a template table and provide a suffix.
                BigQuery will create the table ``&amp;lt;name&amp;gt; + &amp;lt;template_suffix&amp;gt;``
                based on the schema of the template table. See
                https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables
            retry (Optional[google.api_core.retry.Retry]):
                How to retry the RPC.
            timeout (Optional[float]):
                The number of seconds to wait for the underlying HTTP transport
                before using ``retry``.

        Returns:
            Sequence[Mappings]:
                One mapping per row with insert errors: the "&lt;/span&gt;&lt;span class="err"&gt;index&lt;/span&gt;&lt;span class="s2"&gt;" key
                identifies the row, and the "&lt;/span&gt;&lt;span class="err"&gt;errors&lt;/span&gt;&lt;span class="s2"&gt;" key contains a list of
                the mappings describing one or more problems with the row.

        Raises:
            TypeError: if `json_rows` is not a `Sequence`.
        """&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The following code inserts the rows into the BQ table.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;table_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;project_id&amp;gt;.&amp;lt;dataset&amp;gt;.&amp;lt;table-name&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;errors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_rows_json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;table_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rows_to_insert&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Make an API request.
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="p"&gt;[]:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;New rows have been added.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Encountered errors while inserting rows: {}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
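Since each CloudWatch log event already carries a globally unique `id`, one option (a sketch, not part of the original snippet) is to pass those ids through the `row_ids` parameter so a retried Lambda invocation does not insert duplicate rows:

```python
# Sketch: reuse each CloudWatch log event's unique `id` as the BigQuery
# insert id, so retried deliveries are deduplicated (best effort).
rows_to_insert = [
    {"id": "35945479414329140852051181308733124824760966176424001536",
     "timestamp": 1611851043.287, "status": "200"},
]
row_ids = [row["id"] for row in rows_to_insert]  # one insert id per row
# Hypothetical call, assuming the `client` and `table_id` from above:
# errors = client.insert_rows_json(table_id, rows_to_insert, row_ids=row_ids)
```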

&lt;h3&gt;
  
  
  Full Lambda Code
&lt;/h3&gt;

&lt;p&gt;Now that we have covered all the parts, let's combine them and create the function.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
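Putting the pieces together, a minimal sketch of what the handler could look like (the payload unwrapping follows the standard CloudWatch Logs subscription format; the BigQuery call from the previous section is left commented out):

```python
import base64
import gzip
import json

def lambda_handler(event, context):
    # CloudWatch Logs delivers the payload gzip-compressed and
    # base64-encoded under event["awslogs"]["data"].
    compressed = base64.b64decode(event["awslogs"]["data"])
    json_payload = json.loads(gzip.decompress(compressed))

    # Control messages are connectivity checks; nothing to insert.
    if json_payload["messageType"] == "CONTROL_MESSAGE":
        return []

    rows_to_insert = []
    for row in json_payload["logEvents"]:
        # Convert the millisecond timestamp to epoch seconds for BQ.
        item = {"id": row["id"], "timestamp": row["timestamp"] / 1000}
        item.update(row.get("extractedFields", {}))
        rows_to_insert.append(item)

    # errors = client.insert_rows_json(table_id, rows_to_insert)
    return rows_to_insert
```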



&lt;h3&gt;
  
  
  Test the function with sample events
&lt;/h3&gt;

&lt;p&gt;Let's test the code with a sample event payload. As a reminder, you can invoke the function with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;sam&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;local&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;invoke&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;--env-vars&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;env.json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-e&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;events/payload.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
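If you need a sample `events/payload.json`, one way to generate it is sketched below (the field values are placeholders; the envelope nests the gzip-compressed, base64-encoded payload under `awslogs.data`):

```python
import base64
import gzip
import json

# Hypothetical minimal payload mirroring the decoded message shown earlier.
sample = {
    "messageType": "DATA_MESSAGE",
    "logEvents": [
        {"id": "1", "timestamp": 1611851043287,
         "extractedFields": {"status": "200",
                             "request": "GET /api/healthcheck/ HTTP/1.1"}},
    ],
}
# Wrap it the way a CloudWatch Logs subscription delivers it, then save
# the printed JSON as events/payload.json for `sam local invoke -e`.
event = {"awslogs": {"data": base64.b64encode(
    gzip.compress(json.dumps(sample).encode())).decode()}}
print(json.dumps(event))
```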



&lt;p&gt;Once you see the following success message, go to the BigQuery table and check whether the data was inserted successfully.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxhgtm6pq7jbe9qba7o63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxhgtm6pq7jbe9qba7o63.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Successfully streamed to the table!&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe2tcdlj8mqruex0ivwfu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe2tcdlj8mqruex0ivwfu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  CloudWatch Subscription filter
&lt;/h2&gt;

&lt;p&gt;It's time to attach the Lambda subscription to the log group. Refer to the following CLI command, or use the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/logs/put-subscription-filter.html" rel="noopener noreferrer"&gt;put-subscription-filter - AWS CLI 1.18.221 Command Reference&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;put&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;subscription&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;
&lt;span class="c1"&gt;--log-group-name &amp;lt;value&amp;gt;&lt;/span&gt;
&lt;span class="c1"&gt;--filter-name nginxToBQ&lt;/span&gt;
&lt;span class="c1"&gt;--filter-pattern [internal_ip, identity , auth_user, date, request, status, bytes, referer, useragent, remote_addr]&lt;/span&gt;
&lt;span class="c1"&gt;--destination-arn &amp;lt;lambdaFUNCTIONARN&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0fbvr5mi8qceuubxmnr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0fbvr5mi8qceuubxmnr9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Check Lambda Log
&lt;/h3&gt;

&lt;p&gt;Let's make sure everything is working as expected before setting it to cruise mode. Check the &lt;strong&gt;CloudWatch&lt;/strong&gt; logs of the Lambda attached to the &lt;strong&gt;CloudWatch&lt;/strong&gt; subscription filter. If you don't see any error logs, it's all set!&lt;/p&gt;

&lt;p&gt;If you are not certain that the logs will be delivered as expected, there are a couple of ways to get notified of abnormal errors.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create another Lambda that detects errors in the logs and triggers an SNS notification. Refer to this post for the detailed setup:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ko/blogs/mt/get-notified-specific-lambda-function-error-patterns-using-cloudwatch/" rel="noopener noreferrer"&gt;https://aws.amazon.com/ko/blogs/mt/get-notified-specific-lambda-function-error-patterns-using-cloudwatch/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set up AWS DevOps Guru with SNS notifications. This ML-powered DevOps support service automatically notifies you when there are errors in your Lambda function. (It's really useful!)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/devops-guru/latest/userguide/setting-up.html" rel="noopener noreferrer"&gt;Setting up Amazon DevOps Guru&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Data in BigQuery
&lt;/h1&gt;

&lt;p&gt;After a while, if everything is working as expected, you will see streams of data in your table.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxyylha94fzdgqq0qdwch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxyylha94fzdgqq0qdwch.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's time to make some visualizations with the data we have. I will make a heat map of user requests. First, create an aggregation query to get the count of each valid &lt;code&gt;remote_addr&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;remote_addr&lt;/span&gt; &lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;count&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`&amp;lt;project_id&amp;gt;.&amp;lt;dataset&amp;gt;.&amp;lt;table&amp;gt;`&lt;/span&gt; 
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;"2021-01-29"&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="n"&gt;remote_addr&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt; &lt;span class="k"&gt;like&lt;/span&gt; &lt;span class="nv"&gt;"-"&lt;/span&gt; 
&lt;span class="k"&gt;group&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then join this info with the &lt;code&gt;geolite2&lt;/code&gt; table. (Since this query scans the huge &lt;code&gt;geolite2&lt;/code&gt; table, it will incur cost.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="n"&gt;source_of_ip_addresses&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;

    &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;remote_addr&lt;/span&gt; &lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;count&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`hangfive-26bb4.cloudwatch.hangfive-nginx`&lt;/span&gt; 
    &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;"2021-01-29"&lt;/span&gt; &lt;span class="k"&gt;and&lt;/span&gt; &lt;span class="n"&gt;remote_addr&lt;/span&gt; &lt;span class="k"&gt;not&lt;/span&gt;  &lt;span class="k"&gt;like&lt;/span&gt; &lt;span class="nv"&gt;"-"&lt;/span&gt; 
    &lt;span class="k"&gt;group&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;city_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ST_GeogPoint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;AVG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;longitude&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;AVG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;latitude&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="n"&gt;point&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;city_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;latitude&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;longitude&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;geoname_id&lt;/span&gt;
  &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;NET&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SAFE_IP_FROM_STRING&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;NET&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IP_NET_MASK&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mask&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;network_bin&lt;/span&gt;
    &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;source_of_ip_addresses&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;UNNEST&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;GENERATE_ARRAY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="n"&gt;mask&lt;/span&gt;
    &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;BYTE_LENGTH&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;NET&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SAFE_IP_FROM_STRING&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="nv"&gt;`fh-bigquery.geocode.201806_geolite2_city_ipv4_locs`&lt;/span&gt;  
  &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;network_bin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mask&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;city_name&lt;/span&gt;  &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;city_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;geoname_id&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query returns the count of users in each city.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy5wkm5vi7ttcoxv8tufl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy5wkm5vi7ttcoxv8tufl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Visualize
&lt;/h3&gt;

&lt;p&gt;Since we no longer have Kibana support, I will visualize the data with &lt;code&gt;Data Studio&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzar6sa1fg1f4jumgvf17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzar6sa1fg1f4jumgvf17.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Data studio
&lt;/h3&gt;

&lt;p&gt;Visualize with Data Studio or your favorite BI tool.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5igwfhcu5kotdslv86rm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5igwfhcu5kotdslv86rm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;We have looked at a bunch of different services to get here. At the core, as long as you know how to write Lambda code, you can stream and ingest app logs anywhere, regardless of cloud platform. It's a different feeling knowing that you can span across cloud platforms. I hope this post helped you, and thank you for reading.&lt;/p&gt;
&lt;h1&gt;
  
  
  Extra
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Concurrency
&lt;/h2&gt;

&lt;p&gt;After letting the Lambda run for a while, my &lt;code&gt;devops guru&lt;/code&gt; setup actually kicked in, warning me of concurrency issues. If traffic spikes, the Lambda is bound to be throttled without a concurrency configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
    "AccountId": "1111111111",
    "Region": "us-east-1",
    "MessageType": "NEW_RECOMMENDATION",
    "InsightId": "AGp3byn_PgRryjAItOHdAlcAAAAAAAAAAV8rYO_CKM4aynEOZWZiTd2_OV_0bsqS",
    "Recommendations": [{
        "Name": "Troubleshoot errors and set up automatic retries in AWS Lambda",
        "Description": "Your Lambda function is throwing a high number of errors. To learn about common Lambda errors, their causes, and mitigation strategies, see this link.",
        "Reason": "The Errors metric in AWS::Lambda::FunctionName breached a high threshold.",
        "Link": "https://docs.aws.amazon.com/lambda/latest/dg/invocation-retries.html",
        "RelatedEvents": [],
        "RelatedAnomalies": [{
            "SourceDetails": {
                "CloudWatchMetrics": [{
                    "MetricName": "Errors",
                    "Namespace": "AWS::Lambda::FunctionName"
                }]
            },
            "Resources": [{
                "Name": "CloudwatchToBQ",
                "Type": "AWS::Lambda::FunctionName"
            }]
        }]
    }, {
        "Name": "Configure provisioned concurrency for AWS Lambda",
        "Description": "Your Lambda function is having trouble scaling. To learn how to enable provisioned concurrency, which allows your function to scale without fluctuations in latency, see this link.",
        "Reason": "The Duration metric in AWS::Lambda::FunctionName breached a high threshold.",
        "Link": "https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html#configuration-concurrency-provisioned",
        "RelatedEvents": [],
        "RelatedAnomalies": [{
            "SourceDetails": {
                "CloudWatchMetrics": [{
                    "MetricName": "Duration",
                    "Namespace": "AWS::Lambda::FunctionName"
                }]
            },
            "Resources": [{
                "Name": "CloudwatchToBQ",
                "Type": "AWS::Lambda::FunctionName"
            }]
        }]
    }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdusfmfnf8cvbv8mqr3o1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdusfmfnf8cvbv8mqr3o1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html" rel="noopener noreferrer"&gt;Managing concurrency for a Lambda function&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a remedial action, I inspected the Lambda metrics and updated the reserved concurrency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff6o0ss8rib5thhxkpm1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff6o0ss8rib5thhxkpm1s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should inspect your own Lambda metrics and update the concurrency accordingly. If that is not enough, I recommend adding a pub/sub or queue service such as SQS in front of the function to absorb the traffic.&lt;/p&gt;
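As a rough sketch of how to size the reserved concurrency from your metrics: in-flight executions are approximately the arrival rate multiplied by the average duration, so you can estimate a reserve from those two numbers. The traffic figures and headroom factor below are hypothetical, not from my actual setup.

```python
import math

def required_concurrency(invocations_per_sec: float, avg_duration_sec: float) -> int:
    """Estimate Lambda concurrency: in-flight executions ~= arrival rate x duration."""
    return math.ceil(invocations_per_sec * avg_duration_sec)

# Hypothetical example: 50 log-forwarding invocations/sec, ~0.8s average duration.
estimate = required_concurrency(50, 0.8)   # 40 concurrent executions at steady state
reserved = math.ceil(estimate * 1.2)       # add ~20% headroom -> 48

print(estimate, reserved)
```

The resulting number can then be applied in the Lambda console's concurrency settings (or via the `put-function-concurrency` API).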

&lt;h2&gt;
  
  
  Streaming Quota
&lt;/h2&gt;

&lt;p&gt;Keep in mind that Google BigQuery also has a streaming quota (500,000 rows per second).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/bigquery/quotas#streaming_inserts" rel="noopener noreferrer"&gt;Quotas and limits | BigQuery | Google Cloud&lt;/a&gt;&lt;/p&gt;
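To stay under the streaming quota and per-request row limits, you can batch rows before each streaming insert call. A minimal sketch of such a batching helper; the 500-row batch size is an illustrative choice, and the commented `insert_rows_json` call shows where google-cloud-bigquery would come in.

```python
from typing import Iterator, List

def chunked(rows: List[dict], size: int = 500) -> Iterator[List[dict]]:
    """Yield successive batches of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# With the google-cloud-bigquery client you would then stream each batch, e.g.:
#   for batch in chunked(rows):
#       errors = client.insert_rows_json("project.dataset.table", batch)
#       assert not errors, errors

batches = list(chunked([{"n": i} for i in range(1200)]))
print(len(batches), [len(b) for b in batches])  # 3 batches: 500, 500, 200
```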

&lt;h1&gt;
  
  
  Reference
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/bigquery/streaming-data-into-bigquery" rel="noopener noreferrer"&gt;Streaming data into BigQuery | Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/bigquery/docs/samples/bigquery-table-insert-rows" rel="noopener noreferrer"&gt;Streaming insert | BigQuery | Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://googleapis.dev/python/bigquery/latest/generated/google.cloud.bigquery.client.Client.html#google.cloud.bigquery.client.Client.insert_rows" rel="noopener noreferrer"&gt;google-cloud-bigquery&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://google-auth.readthedocs.io/en/latest/reference/google.oauth2.service_account.html#google.oauth2.service_account.Credentials.from_service_account_file" rel="noopener noreferrer"&gt;google-auth&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/blog/products/data-analytics/geolocation-with-bigquery-de-identify-76-million-ip-addresses-in-20-seconds" rel="noopener noreferrer"&gt;Geolocation with BigQuery: De-identify 76 million IP addresses in 20 seconds | Google Cloud Blog&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>elasticsearch</category>
      <category>bigquery</category>
      <category>loganalysis</category>
    </item>
    <item>
      <title>Export AWS/GCP cost and billing data to BigQuery for analytics </title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Thu, 31 Dec 2020 01:33:42 +0000</pubDate>
      <link>https://dev.to/kokospapa8/export-aws-gcp-cost-and-billing-data-to-bigquery-for-analytics-1g3h</link>
      <guid>https://dev.to/kokospapa8/export-aws-gcp-cost-and-billing-data-to-bigquery-for-analytics-1g3h</guid>
      <description>&lt;p&gt;&lt;a href="https://www.notion.so/kokospapa/Export-AWS-GCP-cost-and-billing-data-to-BigQuery-for-analytics-6dc08d8d66074c7083be386d88b8b62f" rel="noopener noreferrer"&gt;Original post&lt;/a&gt; has better layout with images.&lt;/p&gt;




&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftx591arfbfk1un3nt94r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftx591arfbfk1un3nt94r.png" alt="aws"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgh0xqbs5o41bkosucmkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgh0xqbs5o41bkosucmkg.png" alt="gcp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am currently using both GCP and AWS for a project. While each cloud provider offers nice tools and visualizations for its own spending (Cloud Billing for GCP and the Cost &amp;amp; Usage Report for AWS), I wanted to consolidate both providers' usage reports and visualize them in a single BI tool. The report does not need to be realtime; I only need daily granularity. (No streaming.) &lt;/p&gt;

&lt;p&gt;So I had to build something, and these were a couple of options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;House data to AWS using Athena+Quicksight(or other BI tool)&lt;/li&gt;
&lt;li&gt;House data to GCP using Bigquery+datastudio(or other BI tool)&lt;/li&gt;
&lt;li&gt;Spend more time searching for 3rd party tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am a big fan of BigQuery, and I prefer to do &lt;strong&gt;ELT&lt;/strong&gt; rather than &lt;strong&gt;ETL&lt;/strong&gt; on BQ. Moreover, since I was already using BigQuery for the project with &lt;a href="https://holistics.io" rel="noopener noreferrer"&gt;holistics.io&lt;/a&gt;, I chose to house the usage and cost data in BigQuery.&lt;/p&gt;

&lt;h1&gt;
  
  
  Move data to BQ
&lt;/h1&gt;

&lt;h2&gt;
  
  
  GCP Cloud Billing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;This part is easy: Cloud Billing natively supports BigQuery export. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Enable billing for the GCP project.&lt;/p&gt;

&lt;p&gt;I assume that you already have a billing account set up. (You wouldn't need cost analytics unless you were spending money.) If you need to enable billing for the project, refer to the following link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/billing/docs/how-to/modify-project#confirm_billing_is_enabled_on_a_project" rel="noopener noreferrer"&gt;Modify a project's billing settings | Cloud Billing | Google Cloud&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a dataset in BigQuery within the GCP project you desire - I named it &lt;code&gt;billing_export&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/bigquery/docs/datasets" rel="noopener noreferrer"&gt;Creating datasets | BigQuery | Google Cloud&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You need the appropriate permissions to set up the export:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Billing account Admin&lt;/li&gt;
&lt;li&gt;Bigquery Admin for the project&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;resourcemanager.projects.update&lt;/code&gt; permission&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6out2dntyfh39vj4eyji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6out2dntyfh39vj4eyji.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable Cloud Billing export to BigQuery dataset

&lt;ul&gt;
&lt;li&gt;Go to &lt;code&gt;Billing&lt;/code&gt; menu in Navigation menu (&lt;a href="https://console.cloud.google.com/billing/" rel="noopener noreferrer"&gt;https://console.cloud.google.com/billing/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Go to &lt;code&gt;linked billing account&lt;/code&gt; (this should be already set, if not you need to add payment info)&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Billing export&lt;/code&gt; → Select &lt;code&gt;BigQuery export&lt;/code&gt; tab&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Edit settings&lt;/code&gt; on Daily cost detail

&lt;ul&gt;
&lt;li&gt;Select the project and dataset where you want the data to sink. (Note: the BigQuery API is required to export data to BigQuery. If the project you selected doesn't have the BigQuery API enabled, you will be prompted to enable it. Click &lt;strong&gt;Enable BigQuery API&lt;/strong&gt; and the API will be enabled for you.)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Once everything is set, your screen will look like this.&lt;/p&gt;

&lt;p&gt;When you first enable the daily cost detail export to BigQuery, it might take a &lt;strong&gt;few hours&lt;/strong&gt; before you start seeing your Google Cloud cost data. A table named &lt;code&gt;gcp_billing_export_v1_&amp;lt;some_hash&amp;gt;&lt;/code&gt; will be generated. The table is automatically partitioned by day. (You can query the partitioned data to save cost.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjgc7p3t4ld6xbc0thm5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjgc7p3t4ld6xbc0thm5c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Schema
&lt;/h3&gt;

&lt;p&gt;After a while you will notice data being exported to the table. You can click the table and then preview or query it. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;usage_start_time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cost&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;currency&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;usage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;usage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;unit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;credits&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="nv"&gt;`&amp;lt;project&amp;gt;.billing_export.gcp_billing_export_v1_&amp;lt;hash&amp;gt;`&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;_PARTITIONTIME&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="nv"&gt;"2020-12-01 00:00:00"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxkc7rdff0sv8l3o84i5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxkc7rdff0sv8l3o84i5t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you need specific information about each column, refer to this link. It will help you understand each column of the data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/billing/docs/how-to/export-data-bigquery-tables#data-schema" rel="noopener noreferrer"&gt;Understanding the Cloud Billing data tables in BigQuery | Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take special note of the &lt;code&gt;credits&lt;/code&gt; nested field if you are using the free tier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That's it for GCP! Now to the hard part.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Cost &amp;amp; Usage Report
&lt;/h2&gt;

&lt;p&gt;I didn't want to reinvent the wheel, so, as all good developers do, I googled. I found a couple of great articles on this subject, but they seemed complicated and outdated. Luckily, there is already a natively supported &lt;strong&gt;S3 transfer&lt;/strong&gt; service in BigQuery. It supports transferring &lt;code&gt;csv&lt;/code&gt; and &lt;code&gt;parquet&lt;/code&gt; files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm0j59ag733v7f278er85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm0j59ag733v7f278er85.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
The setups from those articles seemed too complicated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/bigquery-transfer/docs/s3-transfer" rel="noopener noreferrer"&gt;Amazon S3 transfers | BigQuery Data Transfer Service | Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's save the cost and usage report to S3 first. Go to AWS Cost and Usage Reports. You can create a report here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.aws.amazon.com/billing/home#/reports" rel="noopener noreferrer"&gt;aws usage and cost reports&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fs824zlnj6epjeti27txa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fs824zlnj6epjeti27txa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftmpew0662y9ab0lxw6ft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftmpew0662y9ab0lxw6ft.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Report name - &lt;code&gt;daily_report_gzip&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Include resource IDs - &lt;code&gt;check&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Data refresh settings - &lt;code&gt;uncheck&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;S3 bucket - create a bucket or select a bucket&lt;/li&gt;
&lt;li&gt;prefix - &lt;code&gt;billing/daily_report_gzip&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Time granularity - &lt;code&gt;Daily&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Report versioning - &lt;code&gt;overwrite&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Compression type - &lt;code&gt;gzip&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After creating the report, it takes a couple of hours for it to show up on S3. Once the report is created, you will see a &lt;code&gt;gz&lt;/code&gt; file exported under the following URI. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;s3://&amp;lt;bucket&amp;gt;/billing/daily_report_gzip/20201201-20210101/&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;data - &lt;code&gt;daily_report_gzip-00001.csv.gz&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;metadata - &lt;code&gt;daily_report_gzip-Manifest.json&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fscd3wszl9q830q61smss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fscd3wszl9q830q61smss.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Schema
&lt;/h3&gt;

&lt;p&gt;We need to know the schema in order to house the data in a BigQuery table. If you want to know what each field represents, consult the documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cur/latest/userguide/data-dictionary.html" rel="noopener noreferrer"&gt;Data dictionary&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you open up the &lt;code&gt;daily_report_gzip-Manifest.json&lt;/code&gt; file, you will discover how each column is structured. Take note of the variety of types - &lt;code&gt;string&lt;/code&gt;, &lt;code&gt;OptionalBigDecimal&lt;/code&gt;, &lt;code&gt;DateInterval&lt;/code&gt;, and so on.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JSON - manifest file example&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fupnvl2iq9vcpuj0riia7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fupnvl2iq9vcpuj0riia7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Converting manifest to bq table schema
&lt;/h3&gt;

&lt;p&gt;In order to transfer the data, the BQ table needs the correct schema to house the CSV data. BQ has different types, and there are too many fields to just copy and paste. You can configure the table schema with &lt;code&gt;Edit as text&lt;/code&gt; (e.g. Field1:TYPE,Field2:TYPE). Let's use &lt;code&gt;jq&lt;/code&gt; to extract the types as a BQ table schema. &lt;/p&gt;

&lt;p&gt;I also need to replace some field types with BigQuery-compatible types: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BigDecimal, OptionalBigDecimal → bignumeric&lt;/li&gt;
&lt;li&gt;Interval → string&lt;/li&gt;
&lt;li&gt;datetime → timestamp&lt;/li&gt;
&lt;li&gt;OptionalString → string&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Download the JSON file and pipe it through the following jq command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat &lt;/span&gt;daily_report_gzip-Manifest.json | jq &lt;span class="nt"&gt;-jr&lt;/span&gt; &lt;span class="s1"&gt;'.columns[] | .category+"_"+.name+":"+ .type +","'&lt;/span&gt; | &lt;span class="nb"&gt;sed &lt;/span&gt;s/.&lt;span class="nv"&gt;$/&lt;/span&gt;/ | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/OptionalString/String/g'&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/\&amp;lt;Interval\&amp;gt;/String/g'&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/OptionalBigDecimal/BigNumeric/g'&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/BigDecimal/BigNumeric/g'&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/DateTime/Timestamp/g'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;jq -jr '.columns[] | .category+"_"+.name+":"+ .type +","'&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;this extracts each item in the array and formats the field as &lt;code&gt;&amp;lt;category&amp;gt;_&amp;lt;name&amp;gt;:&amp;lt;type&amp;gt;&lt;/code&gt;. The category is prepended because some names are duplicated across categories&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-j&lt;/code&gt; joins the output without newlines; &lt;code&gt;-r&lt;/code&gt; emits raw strings&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;sed s/.$//&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;removes the trailing &lt;code&gt;,&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;sed 's/&amp;lt;original type&amp;gt;/&amp;lt;bq type&amp;gt;/g'&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;replaces each type with a BQ-compatible type&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This command produces the following text:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="n"&gt;identity_LineItemId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;identity_TimeInterval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;bill_InvoiceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;bill_BillingEntity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;bill_BillType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;bill_PayerAccountId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;bill_BillingPeriodStartDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;bill_BillingPeriodEndDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_UsageAccountId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_LineItemType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_UsageStartDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nb"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_UsageEndDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span 
class="nb"&gt;Timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_ProductCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_UsageType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_Operation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_AvailabilityZone&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_ResourceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_UsageAmount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_NormalizationFactor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_NormalizedUsageAmount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_CurrencyCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_UnblendedRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_UnblendedCost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span 
class="n"&gt;lineItem_BlendedRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_BlendedCost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_LineItemDescription&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_TaxType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;lineItem_LegalEntity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_ProductName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_accountAssistance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_alarmType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_architecturalReview&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_architectureSupport&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_availability&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_baseProductReferenceCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span 
class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_bestPractices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_cacheEngine&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_capacitystatus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_caseSeverityresponseTimes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_clientLocation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_clockSpeed&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_computeFamily&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_computeType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_cputype&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_currentGeneration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_customerServiceAndCommunities&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span 
class="n"&gt;product_databaseEngine&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_deploymentOption&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_dominantnondominant&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_durability&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_ecu&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_endpointType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_engineCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_enhancedNetworkingSupported&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_eventType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_feeDescription&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_fromLocation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span 
class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_fromLocationType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_group&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_groupDescription&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_includedServices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_insightstype&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_instanceFamily&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_instanceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_instanceTypeFamily&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_intelAvx2Available&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_intelAvxAvailable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_intelTurboAvailable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_launchSupport&lt;/span&gt;&lt;span 
class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_licenseModel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_location&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_locationType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_logsDestination&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_maxIopsBurstPerformance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_maxIopsvolume&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_maxThroughputvolume&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_maxVolumeSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_memoryGib&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_memorytype&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span 
class="n"&gt;product_messageDeliveryFrequency&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_messageDeliveryOrder&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_minVolumeSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_networkPerformance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_normalizationSizeFactor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_operatingSystem&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_operation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_operationsSupport&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_physicalProcessor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_preInstalledSw&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_pricingUnit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_proactiveGuidance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span 
class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_processorArchitecture&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_processorFeatures&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_productFamily&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_programmaticCaseManagement&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_queueType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_requestDescription&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_requestType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_resourcePriceGroup&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_routingTarget&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_routingType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span 
class="n"&gt;product_servicecode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_servicename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_sku&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_storage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_storageClass&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_storageMedia&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_storageType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_technicalSupport&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_tenancy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_thirdpartySoftwareSupport&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_tiertype&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_toLocation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span 
class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_toLocationType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_trafficDirection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_training&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_transferType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_usagetype&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_vcpu&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_volumeApiName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_volumeType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;product_whoCanOpenCases&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_LeaseContractLength&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_OfferingClass&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span 
class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_PurchaseOption&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_RateId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_currency&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_publicOnDemandCost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_publicOnDemandRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_term&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pricing_unit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_AmortizedUpfrontCostForUsage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_AmortizedUpfrontFeeForBillingPeriod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_EffectiveCost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_EndTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span 
class="n"&gt;reservation_ModificationStatus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_NormalizedUnitsPerReservation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_NumberOfReservations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_RecurringFeeForUsage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_ReservationARN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_StartTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_SubscriptionId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_TotalReservedNormalizedUnits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_TotalReservedUnits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_UnitsPerReservation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_UnusedAmortizedUpfrontFeeForBillingPeriod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span 
class="n"&gt;reservation_UnusedNormalizedUnitQuantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_UnusedQuantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_UnusedRecurringFee&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;reservation_UpfrontValue&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;savingsPlan_TotalCommitmentToDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;savingsPlan_SavingsPlanARN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;savingsPlan_SavingsPlanRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;savingsPlan_UsedCommitment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;savingsPlan_SavingsPlanEffectiveCost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;savingsPlan_AmortizedUpfrontCommitmentForBillingPeriod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;BigNumeric&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;savingsPlan_RecurringCommitmentForBillingPeriod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span 
class="n"&gt;BigNumeric&lt;/span&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
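&lt;p&gt;To sanity-check the chain end to end, here is a minimal sketch that runs the same jq/sed pipeline against a fabricated two-column manifest (the file name and columns below are made up for illustration; it assumes &lt;code&gt;jq&lt;/code&gt; is installed):&lt;/p&gt;

```shell
#!/bin/sh
# Fabricated two-column manifest, for illustration only.
cat > sample-Manifest.json <<'EOF'
{
  "columns": [
    {"category": "identity", "name": "LineItemId", "type": "String"},
    {"category": "lineItem", "name": "UsageAmount", "type": "OptionalBigDecimal"}
  ]
}
EOF

# Same chain as above: join columns as <category>_<name>:<type>,
# strip the trailing comma, then map types to BQ-compatible ones.
cat sample-Manifest.json \
  | jq -jr '.columns[] | .category+"_"+.name+":"+ .type +","' \
  | sed 's/.$//' \
  | sed 's/OptionalString/String/g' \
  | sed 's/OptionalBigDecimal/BigNumeric/g' \
  | sed 's/BigDecimal/BigNumeric/g' \
  | sed 's/DateTime/Timestamp/g'
# prints: identity_LineItemId:String,lineItem_UsageAmount:BigNumeric
```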
&lt;h3&gt;
  
  
  Create BQ Table
&lt;/h3&gt;

&lt;p&gt;Let's create a BQ table. You should already have a dataset if you exported Cloud Billing data. If not, create one now.&lt;/p&gt;

&lt;p&gt;Once the dataset is created, click &lt;code&gt;Create table&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;table from - &lt;code&gt;Empty table&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Destination - &lt;code&gt;search for a project&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Project name - the project where you created the dataset&lt;/li&gt;
&lt;li&gt;Dataset name - select the one you created&lt;/li&gt;
&lt;li&gt;Table name - &lt;code&gt;aws&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For Schema, toggle &lt;code&gt;Edit as text&lt;/code&gt; and paste the schema you generated with the &lt;code&gt;jq&lt;/code&gt; command&lt;/li&gt;
&lt;li&gt;Partition - you can skip this, or select a timestamp field that you will commonly filter on. I chose &lt;code&gt;bill_BillingPeriodStartDate&lt;/code&gt; and partitioned by &lt;code&gt;day&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
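&lt;p&gt;If you prefer the CLI, the same table can be sketched with &lt;code&gt;bq mk&lt;/code&gt; (the project and dataset names below are placeholders, and &lt;code&gt;schema.txt&lt;/code&gt; is assumed to hold the &lt;code&gt;Field:TYPE,...&lt;/code&gt; string generated by the jq command):&lt;/p&gt;

```shell
# Sketch only: my-project and billing_dataset are placeholders.
# schema.txt contains the Field:TYPE,... string generated above.
bq mk --table \
  --time_partitioning_type=DAY \
  --time_partitioning_field=bill_BillingPeriodStartDate \
  my-project:billing_dataset.aws \
  "$(cat schema.txt)"
```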

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F49rcoij3rv2xf2vax88o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F49rcoij3rv2xf2vax88o.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzkzriq52umontss5pnx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzkzriq52umontss5pnx0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Transfer Prereq
&lt;/h3&gt;

&lt;p&gt;Once the table is ready, let's create a transfer.&lt;/p&gt;

&lt;p&gt;You need the following permissions first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;bigquery.transfers.update&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bigquery.datasets.update&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These permissions are included in the predefined &lt;code&gt;bigquery.admin&lt;/code&gt; role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS credential&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create an IAM user whose credentials the transfer will use. You only need read-only access on the specific bucket you created, but if you are lazy, just attach the &lt;code&gt;AmazonS3ReadOnlyAccess&lt;/code&gt; AWS managed policy on all resources. If you don't know how to create an IAM user, please refer to the following link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.easydigitaldownloads.com/article/1455-amazon-s3-creating-an-iam-user" rel="noopener noreferrer"&gt;Amazon S3 - Creating an IAM user&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Transfer
&lt;/h3&gt;

&lt;p&gt;On the BigQuery console, click the &lt;code&gt;Transfers&lt;/code&gt; menu in the left pane. You will be redirected to the transfer configuration page. Click the &lt;code&gt;+ Create Transfer&lt;/code&gt; button. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fayuc8wzw4iy6pc9x44t2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fayuc8wzw4iy6pc9x44t2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source - &lt;code&gt;Amazon S3&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Display Name - &lt;code&gt;aws-billing-export&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Schedule options - &lt;code&gt;starts now&lt;/code&gt;, &lt;code&gt;daily&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Destination&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dataset ID - dataset you created&lt;/li&gt;
&lt;li&gt;Destination table - &lt;code&gt;aws&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Amazon S3 URI - &lt;code&gt;s3://&amp;lt;bucket&amp;gt;/billing/daily_report_gzip/*/*.gz&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Access key ID&lt;/li&gt;
&lt;li&gt;Secret access key&lt;/li&gt;
&lt;li&gt;File format - &lt;code&gt;csv&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;BigQuery supports gzip-compressed CSV natively&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Transfer option

&lt;ul&gt;
&lt;li&gt;Number of errors allowed - &lt;code&gt;0&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Ignore unknown values - &lt;code&gt;check&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Field delimiter - &lt;code&gt;,&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Header rows to skip - &lt;code&gt;1&lt;/code&gt;
&lt;/li&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Notification

&lt;ul&gt;
&lt;li&gt;You can create a new Pub/Sub topic and configure notifications here, but this can also be configured later, so leave it blank for now.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt;! &lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;
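&lt;p&gt;The form fields above map onto Data Transfer Service parameters. The sketch below mirrors the settings as a parameter dictionary; the parameter names follow the S3 transfer documentation, the bucket name is hypothetical, and credentials are read from the environment rather than hard-coded, so treat it as a reference sketch rather than a finished script.&lt;/p&gt;

```python
import os

# Transfer settings from the walkthrough, expressed as S3 transfer params.
# "my-billing-bucket" is a hypothetical placeholder for your CUR bucket.
transfer_params = {
    "data_path": "s3://my-billing-bucket/billing/daily_report_gzip/*/*.gz",
    "destination_table_name_template": "aws",
    "file_format": "CSV",            # gzip members are decompressed natively
    "max_bad_records": "0",          # number of errors allowed
    "ignore_unknown_values": "true",
    "field_delimiter": ",",
    "skip_leading_rows": "1",        # header rows to skip
    # Never commit credentials; pull them from the environment instead.
    "access_key_id": os.environ.get("AWS_ACCESS_KEY_ID", ""),
    "secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY", ""),
}

print(transfer_params["data_path"])
```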

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdjnzx1w5lkjx2t5jr2c4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdjnzx1w5lkjx2t5jr2c4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr8572jwldons436iaull.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr8572jwldons436iaull.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see a green check mark if the transfer succeeds. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F33w6slo216t2fcq79llk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F33w6slo216t2fcq79llk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you see an error with a red marker, examine the error logs in the right pane. Common errors include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wrong S3 URI&lt;/li&gt;
&lt;li&gt;Wrong AWS credentials&lt;/li&gt;
&lt;li&gt;Wrong table schema&lt;/li&gt;
&lt;li&gt;No objects&lt;/li&gt;
&lt;/ul&gt;
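&lt;p&gt;For the "Wrong S3 URI" and "No objects" cases, a quick local sanity check is to verify that your wildcard pattern actually matches the object keys CUR is writing. A small sketch, using hypothetical example keys:&lt;/p&gt;

```python
from fnmatch import fnmatch

# The key pattern portion of the S3 URI configured in the transfer.
URI_PATTERN = "billing/daily_report_gzip/*/*.gz"

# Hypothetical object keys as CUR might write them into the bucket.
sample_keys = [
    "billing/daily_report_gzip/20201101-20201201/report-1.csv.gz",
    "billing/daily_report_gzip/20201101-20201201/manifest.json",
]

# Only keys matching the pattern will be picked up by the transfer;
# here the gzip report matches while the manifest does not.
matches = [k for k in sample_keys if fnmatch(k, URI_PATTERN)]
print(matches)
```

If `matches` comes back empty for real keys listed with the AWS CLI, the URI pattern is the problem rather than the credentials.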

&lt;h1&gt;
  
  
  Cleaning Data
&lt;/h1&gt;

&lt;p&gt;You now have two tables populated with billing data! It's time to do some analytics and visualization.&lt;/p&gt;

&lt;p&gt;Let's review the fields.&lt;/p&gt;

&lt;p&gt;AWS&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cur/latest/userguide/data-dictionary.html" rel="noopener noreferrer"&gt;Data dictionary&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GCP&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/billing/docs/how-to/export-data-bigquery-tables#data-schema" rel="noopener noreferrer"&gt;Understanding the Cloud Billing data tables in BigQuery | Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use the BQ console to query and test the data for integrity, to make sure the load is intact. I won't go into detail on the SQL queries. &lt;/p&gt;

&lt;p&gt;Once the data is &lt;strong&gt;E&lt;/strong&gt;xtracted and &lt;strong&gt;L&lt;/strong&gt;oaded, I do the &lt;strong&gt;T&lt;/strong&gt;ransform on &lt;a href="http://holistics.io" rel="noopener noreferrer"&gt;Holistics.io&lt;/a&gt;, leaving the raw data intact; but you can also create a new view to minimize, clean, and consolidate the data into a single table or view using BigQuery's scheduled query feature. &lt;/p&gt;
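&lt;p&gt;A sketch of what such a scheduled consolidation query could look like, unioning both exports into one (day, provider, service, cost) shape. The project, dataset and column names here are placeholders: the AWS columns depend on the schema you generated, and the GCP columns depend on your export version, so adapt them rather than copying them verbatim.&lt;/p&gt;

```python
# Hypothetical consolidation SQL for a BigQuery scheduled query.
# Table and column names are placeholders to adapt to your own schemas.
CONSOLIDATE_SQL = """
SELECT DATE(usage_start_time) AS day, 'gcp' AS provider,
       service.description AS service, cost
FROM `my-gcp-project.billing.gcp_billing_export`
UNION ALL
SELECT DATE(lineItem_UsageStartDate) AS day, 'aws' AS provider,
       product_ProductName AS service, lineItem_UnblendedCost AS cost
FROM `my-gcp-project.billing.aws`
"""

print(CONSOLIDATE_SQL.strip())
```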

&lt;h2&gt;
  
  
  Visualize data
&lt;/h2&gt;

&lt;p&gt;After inspecting the data with a couple of queries (I am still a newbie in the SQL world), I use Data Studio to explore it. &lt;/p&gt;

&lt;p&gt;When you use BigQuery, you can connect the data to Data Studio without extra configuration. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4eo650umm0bn5cd9u7xc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4eo650umm0bn5cd9u7xc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc199v07amqgqnqwvg86r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc199v07amqgqnqwvg86r.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, you can also use other BI tools connected to BQ to visualize and analyze your data.&lt;/p&gt;

&lt;h1&gt;
  
  
  Extra
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;[ ]  Check data integrity with daily and monthly rollover&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Since BQ supports &lt;code&gt;S3 transfer&lt;/code&gt;, it was a lot easier for me to move the data around, but there was still some work to be done.&lt;/p&gt;

&lt;p&gt;Please let me know if you find a more concrete solution to consolidate billing data in a single place!&lt;/p&gt;

&lt;p&gt;Thank you for reading! Here are some pictures of Kokos!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwcndmlugjyah0p8qklav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwcndmlugjyah0p8qklav.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4kieswgao3pw7zcsltd0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4kieswgao3pw7zcsltd0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Reference
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/billing/docs/how-to/export-data-bigquery" rel="noopener noreferrer"&gt;Export Cloud Billing data to BigQuery | Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.google.com/url?q=https%3A%2F%2Fcloud.google.com%2Fbigquery%2Fdocs%2Fs3-transfer-intro%3Fhl%3Den_US" rel="noopener noreferrer"&gt;Overview of Amazon S3 transfers | BigQuery Data Transfer Service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/bigquery-transfer/docs/s3-transfer-parameters" rel="noopener noreferrer"&gt;Using runtime parameters in transfers | BigQuery Data Transfer Service&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>bigquery</category>
      <category>gcp</category>
      <category>analytics</category>
    </item>
    <item>
      <title>AWS CDK - use your favorite language to define cloud infrastructure</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Thu, 12 Nov 2020 04:35:18 +0000</pubDate>
      <link>https://dev.to/kokospapa8/aws-cdk-use-your-favorite-language-to-define-cloud-infrastructure-40na</link>
      <guid>https://dev.to/kokospapa8/aws-cdk-use-your-favorite-language-to-define-cloud-infrastructure-40na</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Infrastructure as code (IaC) is a popular concept in the cloud world. I have been using CloudFormation to automate AWS infrastructure deployment, but I always had a hard time writing the configuration files, since they are plain JSON or YAML. Although there are some best practices for writing &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html" rel="noopener noreferrer"&gt;CloudFormation&lt;/a&gt; out there, I always had to validate and revalidate the stacks, and double-check whether the infrastructure was deployed as I intended. Recently I stumbled upon &lt;strong&gt;AWS CDK&lt;/strong&gt; and fell in love with it. Here are some pluses of using CDK.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Organize your project into logical modules&lt;/li&gt;
&lt;li&gt;Use logic (if statements, for-loops, etc) when defining your infrastructure&lt;/li&gt;
&lt;li&gt;Use object-oriented techniques to create a model of your system&lt;/li&gt;
&lt;li&gt;Type-safety, code-completion, and open-source&lt;/li&gt;
&lt;li&gt;Define high level abstractions, share them, and publish them to your team, company, or community&lt;/li&gt;
&lt;li&gt;Share and reuse your infrastructure as a library&lt;/li&gt;
&lt;li&gt;Code completion within your IDE → This was a huge plus for me!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, AWS CDK provides a high-level abstraction over CloudFormation's JSON/YAML syntax. In this blog post, I will rewrite my previous ecs-sample architecture (beta stage) in CDK. Please refer to my previous blog post &lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/kokospapa8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F367659%2F1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg" alt="kokospapa8"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part2-io1" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to deploy django app to ECS Fargate Part2&lt;/h2&gt;
      &lt;h3&gt;Jinwook Baek ・ Jul 3 '20&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;
 for the infrastructure that I will be deploying with CDK.

&lt;p&gt;&lt;a href="https://github.com/kokospapa8/ecs-fargate-sample-app/blob/master/config/cloudformation/cf-ecs-sample.json" rel="noopener noreferrer"&gt;kokospapa8/ecs-fargate-sample-app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before diving into writing CDK code, let's take a quick look at CDK.&lt;/p&gt;
&lt;h1&gt;
  
  
  CDK Concepts
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo5ezf7dxqe9zd97kjw3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo5ezf7dxqe9zd97kjw3u.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are three main components in CDK: &lt;strong&gt;construct&lt;/strong&gt;, &lt;strong&gt;stack&lt;/strong&gt; and &lt;strong&gt;app&lt;/strong&gt;. Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an S3 bucket or an SNS topic, a static website, or even a complex, multi-stack application that spans multiple AWS accounts and regions. To foster reusability, constructs can include other constructs. You compose constructs into stacks, which you can deploy into an AWS environment, and apps, a collection of one or more stacks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fd2xt57iac8yb2jxpj2y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fd2xt57iac8yb2jxpj2y0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  App
&lt;/h2&gt;

&lt;p&gt;An app is written in TypeScript, JavaScript, Python, Java, or C# and uses the AWS CDK to define AWS infrastructure.&lt;/p&gt;
&lt;h2&gt;
  
  
  Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Equivalent to AWS CloudFormation stacks&lt;/li&gt;
&lt;li&gt;Contains constructs&lt;/li&gt;
&lt;li&gt;Defines one or more concrete AWS resources&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Constructs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Represented as types in programming.&lt;/li&gt;
&lt;li&gt;Three fundamental flavors

&lt;ul&gt;
&lt;li&gt;L1 (AWS CloudFormation only) - map directly to resource types defined by AWS CloudFormation

&lt;ul&gt;
&lt;li&gt;Always have names that begin with &lt;code&gt;Cfn&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;L2 - Encapsulate L1 modules, providing sensible defaults and best-practice security policies, supporting resources needed by the primary resource&lt;/li&gt;
&lt;li&gt;L3 - Patterns declare multiple resources to create entire AWS architectures for particular use cases. All the plumbing is already hooked up, and configuration is boiled down to a few important parameters&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Core module
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Imported into code as core or cdk&lt;/li&gt;
&lt;li&gt;Contains constructs used by the AWS CDK itself as well as base classes for constructs, apps, resources, and other AWS CDK objects.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Supported Languages
&lt;/h2&gt;

&lt;p&gt;AWS CDK is developed in one language (TypeScript), and language bindings are generated for the other languages through a tool called JSII. In this post, I will be using &lt;code&gt;python&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TypeScript&lt;/li&gt;
&lt;li&gt;JavaScript&lt;/li&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;Java&lt;/li&gt;
&lt;li&gt;C#&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Getting started CDK
&lt;/h1&gt;
&lt;h3&gt;
  
  
  Setup AWS Credential
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
&lt;span class="nb"&gt;cat&lt;/span&gt; ~/.aws/credentials
&lt;span class="nt"&gt;---&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;default]
&lt;span class="nv"&gt;aws_access_key_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;AKIAI44QH8XXXXXXXX
&lt;span class="nv"&gt;aws_secret_access_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;je7MtGbClwBFXXXXXXXXXXXXXXXXXXXXX

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Install on Mac OSX
&lt;/h2&gt;

&lt;p&gt;Let's use brew 🍺 to install on Mac OSX.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew update
brew doctor
brew &lt;span class="nb"&gt;install &lt;/span&gt;aws-cdk
cdk &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using other OS, please refer to following link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html" rel="noopener noreferrer"&gt;Getting started with the AWS CDK&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Python environment
&lt;/h3&gt;

&lt;p&gt;You need &lt;code&gt;python 3.6&lt;/code&gt; or later, including &lt;code&gt;pip&lt;/code&gt; and &lt;code&gt;virtualenv&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Pycharm for IDE
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fz8m94sjm3wpctmet5vi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fz8m94sjm3wpctmet5vi1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use PyCharm for Python development with the AWS Toolkit. I strongly recommend using an IDE instead of good old &lt;code&gt;vim&lt;/code&gt; or plain text editors, especially when you are developing CDK, so you don't need to go back and forth digging through the huge list of API references.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ko/pycharm/" rel="noopener noreferrer"&gt;AWS Toolkit for PyCharm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively you can use VS Code with AWS toolkit. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ko/visualstudiocode/" rel="noopener noreferrer"&gt;AWS Toolkit for Visual Studio Code&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/kokospapa8/ecs-fargate-sample-app.git
&lt;span class="nb"&gt;cd &lt;/span&gt;config/cdk
&lt;span class="nb"&gt;source&lt;/span&gt; .env/bin/activate

pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;span class="c"&gt;# set env with CDK_DEFAULT_ACCOUNT&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CDK_DEFAULT_ACCOUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;123456789
&lt;span class="c"&gt;# set env with CDK_DEFAULT_REGION&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CDK_DEFAULT_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east-1

cdk synth 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, set up a sample app instead of pulling the GitHub project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk init &lt;span class="nt"&gt;-a&lt;/span&gt; sample-app &lt;span class="nt"&gt;--language&lt;/span&gt; python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;cdk synth&lt;/code&gt; will generate the CloudFormation stack files in the &lt;code&gt;cdk.out&lt;/code&gt; folder. You can take these CloudFormation files and deploy them manually, but there is an easier way to deploy with CDK. We will get into that in a later part of the post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzyia1vn32e3pnl2bdbnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzyia1vn32e3pnl2bdbnz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  App structure
&lt;/h1&gt;

&lt;p&gt;Before deploying the stack, let's review the infrastructure and review stacks one by one.&lt;/p&gt;

&lt;p&gt;We will build the following resources. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC with public and private subnet&lt;/li&gt;
&lt;li&gt;API EC2 instance on ASG (in public subnet)&lt;/li&gt;
&lt;li&gt;Worker EC2 instance on ASG (in private subnet)&lt;/li&gt;
&lt;li&gt;ALB in front of API ASG&lt;/li&gt;
&lt;li&gt;RDS mysql (in private subnet)&lt;/li&gt;
&lt;li&gt;ElastiCache - Redis (private subnet)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F701vrqtrvwd3ahykyr58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F701vrqtrvwd3ahykyr58.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's look at the file structures.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app.py&lt;/code&gt; - application wrapper&lt;/p&gt;

&lt;p&gt;&lt;code&gt;requirements.txt&lt;/code&gt; - required CDK pip libraries&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ecs_sample_cdk&lt;/code&gt; folder consists of the following stack files. &lt;code&gt;sample_stack&lt;/code&gt; is the wrapper for the whole infrastructure; &lt;code&gt;alb_stack&lt;/code&gt;, &lt;code&gt;rds_stack&lt;/code&gt;, &lt;code&gt;redis_stack&lt;/code&gt; and &lt;code&gt;vpc_stack&lt;/code&gt; are nested stacks of &lt;code&gt;sample_stack&lt;/code&gt;.&lt;/p&gt;
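&lt;p&gt;The props-passing pattern between these stacks can be sketched without CDK itself: each stack consumes the accumulated props and contributes its own outputs for later stacks. The class and key names below are illustrative stand-ins, not the real CDK types.&lt;/p&gt;

```python
# Plain-python sketch of the nested-stack props flow used in this project.
class FakeVpcStack:
    def __init__(self, props):
        # Copy the incoming props and contribute this stack's outputs.
        self.output_props = dict(props)
        self.output_props["vpc"] = "vpc-1234"  # stands in for the ec2.Vpc object

class FakeAlbStack:
    def __init__(self, props):
        # Downstream stacks can rely on keys added by earlier stacks.
        assert "vpc" in props
        self.output_props = dict(props)
        self.output_props["sg_ec2"] = "sg-abcd"  # stands in for a SecurityGroup

props = {"namespace": "sample"}
props.update(FakeVpcStack(props).output_props)
props.update(FakeAlbStack(props).output_props)
print(sorted(props))  # prints ['namespace', 'sg_ec2', 'vpc']
```

In the real stacks, `add_dependency` additionally guarantees that a stack's outputs exist before a dependent stack reads them.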

&lt;h2&gt;
  
  
  Deep-dive into codes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  app.py
&lt;/h3&gt;

&lt;p&gt;Nothing special here. If you want to know what's going on behind the scenes, please refer to the following document. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/apps.html" rel="noopener noreferrer"&gt;Apps&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env python3&lt;/span&gt;
import os

from aws_cdk import core
from ecs_sample_cdk.sample_stack import SampleStack

&lt;span class="nb"&gt;env&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; core.Environment&lt;span class="o"&gt;(&lt;/span&gt;
    &lt;span class="nv"&gt;account&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;os.environ.get&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"CDK_DEFAULT_ACCOUNT"&lt;/span&gt;, &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
    &lt;span class="nv"&gt;region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;os.environ.get&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"CDK_DEFAULT_REGION"&lt;/span&gt;, &lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;

app &lt;span class="o"&gt;=&lt;/span&gt; core.App&lt;span class="o"&gt;()&lt;/span&gt;
SampleStack&lt;span class="o"&gt;(&lt;/span&gt;app, &lt;span class="s2"&gt;"ecs-sample"&lt;/span&gt;, &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

app.synth&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Sample Stack
&lt;/h3&gt;

&lt;p&gt;This is a class that holds the other nested stacks. I have used &lt;code&gt;props&lt;/code&gt; to pass around important resource references such as the VPC and security groups. You can also notice the &lt;code&gt;add_dependency&lt;/code&gt; method, which enforces dependencies between the nested stacks. Here we create the VPC and ALB stacks first, then the RDS and ElastiCache stacks, because we need the VPC ID, subnets and security groups in order to create those resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;class SampleStack&lt;span class="o"&gt;(&lt;/span&gt;core.Stack&lt;span class="o"&gt;)&lt;/span&gt;:

    def __init__&lt;span class="o"&gt;(&lt;/span&gt;self, scope: core.Construct, &lt;span class="nb"&gt;id&lt;/span&gt;: str, &lt;span class="nb"&gt;env&lt;/span&gt;, &lt;span class="k"&gt;**&lt;/span&gt;kwargs&lt;span class="o"&gt;)&lt;/span&gt; -&amp;gt; None:
        super&lt;span class="o"&gt;()&lt;/span&gt;.__init__&lt;span class="o"&gt;(&lt;/span&gt;scope, &lt;span class="nb"&gt;id&lt;/span&gt;, &lt;span class="k"&gt;**&lt;/span&gt;kwargs&lt;span class="o"&gt;)&lt;/span&gt;
        props &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'namespace'&lt;/span&gt;: &lt;span class="s1"&gt;'sample'&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
        vpc_stack &lt;span class="o"&gt;=&lt;/span&gt; VPCStack&lt;span class="o"&gt;(&lt;/span&gt;self, f&lt;span class="s2"&gt;"{id}-vpc"&lt;/span&gt;, &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;, &lt;span class="nv"&gt;props&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;props&lt;span class="o"&gt;)&lt;/span&gt;
        props.update&lt;span class="o"&gt;(&lt;/span&gt;vpc_stack.output_props&lt;span class="o"&gt;)&lt;/span&gt;

        alb_stack &lt;span class="o"&gt;=&lt;/span&gt; ALBStack&lt;span class="o"&gt;(&lt;/span&gt;self, f&lt;span class="s2"&gt;"{id}-alb"&lt;/span&gt;, &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;, &lt;span class="nv"&gt;props&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;props&lt;span class="o"&gt;)&lt;/span&gt;
        alb_stack.add_dependency&lt;span class="o"&gt;(&lt;/span&gt;vpc_stack&lt;span class="o"&gt;)&lt;/span&gt;
        props.update&lt;span class="o"&gt;(&lt;/span&gt;alb_stack.output_props&lt;span class="o"&gt;)&lt;/span&gt;

        rds_stack &lt;span class="o"&gt;=&lt;/span&gt; RDSStack&lt;span class="o"&gt;(&lt;/span&gt;self, f&lt;span class="s2"&gt;"{id}-rds"&lt;/span&gt;, &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;, &lt;span class="nv"&gt;props&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;props&lt;span class="o"&gt;)&lt;/span&gt;
        rds_stack.add_dependency&lt;span class="o"&gt;(&lt;/span&gt;vpc_stack&lt;span class="o"&gt;)&lt;/span&gt;
        rds_stack.add_dependency&lt;span class="o"&gt;(&lt;/span&gt;alb_stack&lt;span class="o"&gt;)&lt;/span&gt;
        props.update&lt;span class="o"&gt;(&lt;/span&gt;rds_stack.output_props&lt;span class="o"&gt;)&lt;/span&gt;

        redis_stack &lt;span class="o"&gt;=&lt;/span&gt; RedisStack&lt;span class="o"&gt;(&lt;/span&gt;self, f&lt;span class="s2"&gt;"{id}-redis"&lt;/span&gt;, &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;, &lt;span class="nv"&gt;props&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;props&lt;span class="o"&gt;)&lt;/span&gt;
        redis_stack.add_dependency&lt;span class="o"&gt;(&lt;/span&gt;vpc_stack&lt;span class="o"&gt;)&lt;/span&gt;
        redis_stack.add_dependency&lt;span class="o"&gt;(&lt;/span&gt;alb_stack&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  VPC_STACK
&lt;/h3&gt;

&lt;p&gt;This code produces a VPC with two public and two private subnets across two different AZs, with a single NAT gateway. Using the L2 construct, fewer than 10 lines of code produce the route tables, subnets, NAT gateway, IGW, and VPC. It is that simple to produce a valid VPC for app development. &lt;/p&gt;

&lt;p&gt;Notice that I have added a &lt;code&gt;CfnOutput&lt;/code&gt; for the CloudFormation output of the VPC ID, and &lt;code&gt;output_props&lt;/code&gt; to pass around the VPC resources created in this stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class VPCStack(core.NestedStack):

    def __init__(self, scope: core.Construct, id: str, env, props, **kwargs) -&amp;gt; None:
        super().__init__(scope, id, **kwargs)

        subnets = []

        public_subnet = ec2.SubnetConfiguration(
                           cidr_mask=24,
                           name=f"{id}-public",
                           subnet_type=ec2.SubnetType.PUBLIC
                       )

        private_subnet = ec2.SubnetConfiguration(
                           cidr_mask=24,
                           name=f"{id}-private",
                           subnet_type=ec2.SubnetType.PRIVATE
                       )
        subnets.append(public_subnet)
        subnets.append(private_subnet)

        # The code that defines your stack goes here
        vpc = ec2.Vpc(self, f"{id}",
                           cidr="172.0.0.0/16",
                           enable_dns_hostnames=True,
                           enable_dns_support=True,
                           nat_gateways=1,
                           nat_gateway_provider=ec2.NatProvider.gateway(),
                           max_azs=2,
                           subnet_configuration=subnets
                           )
        #Be aware that environment-agnostic stacks will be created with access to only 2 AZs, so to use more than 2 AZs, be sure to specify the account and region on your stack

        core.CfnOutput(self, "vpcid",
                       value=vpc.vpc_id)

        # Prepares output attributes to be passed into other stacks
        # In this case, it is our VPC and subnets.
        self.output_props = props.copy()
        self.output_props['vpc'] = vpc
        self.output_props['public_subnets'] = vpc.public_subnets
        self.output_props['private_subnets'] = vpc.private_subnets

    @property
    def outputs(self):
        return self.output_props
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ALB_stack
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;AutoScalingGroup&lt;/code&gt; requires a &lt;code&gt;KEY_PAIR_NAME&lt;/code&gt;, which you need to create manually in the console. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html" rel="noopener noreferrer"&gt;Amazon EC2 key pairs and Linux instances&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The stack creates the following resources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 instance role&lt;/li&gt;
&lt;li&gt;API AutoScalingGroup&lt;/li&gt;
&lt;li&gt;Worker AutoScalingGroup&lt;/li&gt;
&lt;li&gt;Security groups

&lt;ul&gt;
&lt;li&gt;api EC2 (public subnet)

&lt;ul&gt;
&lt;li&gt;allows ssh from public&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;worker EC2 (private subnet)

&lt;ul&gt;
&lt;li&gt;allows ssh from &lt;code&gt;API EC2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;redis

&lt;ul&gt;
&lt;li&gt;allows 6379 from EC2 security group&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;rds

&lt;ul&gt;
&lt;li&gt;allows 3306 from EC2 security group&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;We need to add the security groups to props so they can be used in the RDS and Redis stacks.&lt;br&gt;
&lt;/p&gt;
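The props-passing pattern itself is plain Python. Here is a minimal sketch with dicts standing in for CDK constructs (all names are illustrative, not from the repo), showing why each stack copies props before adding its own outputs:

```python
# Plain-Python sketch of the nested-stack props pattern.
class VPCStack:
    def __init__(self, props):
        # Pretend we created real constructs here.
        vpc = {"id": "vpc-123", "sg_rds": "sg-rds", "sg_redis": "sg-redis"}
        self.output_props = props.copy()   # copy so the caller's props stay untouched
        self.output_props["vpc"] = vpc

    @property
    def outputs(self):
        return self.output_props

class RDSStack:
    def __init__(self, props):
        self.vpc = props["vpc"]            # consume the upstream stack's output

base_props = {"namespace": "ecs-sample"}
vpc_stack = VPCStack(base_props)
rds_stack = RDSStack(vpc_stack.outputs)    # chain outputs into the next stack
```

The `.copy()` matters: each stack can add its own keys without mutating what its parent or sibling stacks see.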

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ALBStack(core.NestedStack):

    def __init__(self, scope: core.Construct, id: str, env, props, **kwargs) -&amp;gt; None:
        super().__init__(scope, id, **kwargs)

        #create ec2role
        #get from env or create
        role = iam.Role(self, "ecs-sample-ec2-role",
                        assumed_by=iam.ServicePrincipal('ec2.amazonaws.com'),
                        )
        role.add_managed_policy(
            iam.ManagedPolicy.from_aws_managed_policy_name("AmazonEC2ContainerRegistryPowerUser")
        )

        asg_api = autoscaling.AutoScalingGroup(
            self,
            "ecs-sample-api-asg",
            vpc=props['vpc'],
            instance_type=ec2.InstanceType.of(
                ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.MICRO
            ),
            machine_image=ec2.AmazonLinuxImage(),
            key_name=KEY_PAIR_NAME,
            vpc_subnets=ec2.SubnetSelection(subnet_type=SubnetType.PUBLIC),
            desired_capacity=1,
            max_capacity=1,
            min_capacity=1,
            role=role
            # userdata=userdata

        )

        asg_worker = autoscaling.AutoScalingGroup(
            self,
            "ecs-sample-worker-asg",
            vpc=props['vpc'],
            instance_type=ec2.InstanceType.of(
                ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.MICRO
            ),
            machine_image=ec2.AmazonLinuxImage(),
            key_name=KEY_PAIR_NAME,
            vpc_subnets=ec2.SubnetSelection(subnet_type=SubnetType.PRIVATE),
            desired_capacity=1,
            max_capacity=1,
            min_capacity=1,
            role=role
            # userdata=userdata

        )

        # Creates a security group for our application
        sg_api = ec2.SecurityGroup(
                self,
                id="ecs-sample-ec2-api",
                vpc=props['vpc'],
                security_group_name="ecs-sample-ec2-api"
        )
        sg_worker = ec2.SecurityGroup(
                self,
                id="ecs-sample-ec2-worker",
                vpc=props['vpc'],
                security_group_name="ecs-sample-ec2-worker"
        )

        # to access this security group for SSH
        sg_api.add_ingress_rule(
            peer=ec2.Peer.ipv4(SSH_IP),
            connection=ec2.Port.tcp(22)
        )

        # use ec2 api as bastion
        sg_worker.connections.allow_from(
                sg_api, ec2.Port.tcp(22), "Allow from ec2 api")

        asg_api.add_security_group(sg_api)
        asg_worker.add_security_group(sg_worker)

        # Creates a security group for the application load balancer
        sg_alb = ec2.SecurityGroup(
                self,
                id="ecs-sample-loadbalancer",
                vpc=props['vpc'],
                security_group_name="ecs-sample-loadbalancer"
        )

        sg_api.connections.allow_from(
                sg_alb, ec2.Port.tcp(80), "Ingress")

        # Creates an application load balancer
        lb = elbv2.ApplicationLoadBalancer(
                self,
                f"{id}-ALB",
                vpc=props['vpc'],
                security_group=sg_alb,
                internet_facing=True)

        listener = lb.add_listener("Listener", port=80)
        # Adds the autoscaling group's (asg_api) instances to be registered
        # as targets on port 80
        listener.add_targets("Target", port=80, targets=[asg_api])
        # This creates a "0.0.0.0/0" rule to allow every one to access the
        # application
        listener.connections.allow_default_port_from_any_ipv4(
                "Open to the world"
                )

        # create RDS sg
        sg_rds = ec2.SecurityGroup(
                self,
                id="ecs-sample-mysql",
                vpc=props['vpc'],
                security_group_name="ecs-sample-mysql"
        )
        sg_api.connections.allow_from(
                sg_rds, ec2.Port.tcp(3306), "allow from rds to ec2 api")
        sg_rds.connections.allow_from(
                sg_api, ec2.Port.tcp(3306), "allow from ec2 api to rds")
        sg_worker.connections.allow_from(
                sg_rds, ec2.Port.tcp(3306), "allow from rds to ec2 worker")
        sg_rds.connections.allow_from(
                sg_worker, ec2.Port.tcp(3306), "allow from ec2 worker to rds")

        # create Redis SG
        sg_redis = ec2.SecurityGroup(
                self,
                id="ecs-sample-redis",
                vpc=props['vpc'],
                security_group_name="ecs-sample-redis"
        )
        sg_api.connections.allow_from(
                sg_redis, ec2.Port.tcp(6379), "allow from redis to ec2 api")
        sg_redis.connections.allow_from(
                sg_api, ec2.Port.tcp(6379), "allow from ec2 api to redis")
        sg_worker.connections.allow_from(
                sg_redis, ec2.Port.tcp(6379), "allow from redis to ec2 worker")
        sg_redis.connections.allow_from(
                sg_worker, ec2.Port.tcp(6379), "allow from ec2 worker to redis")

        # Pass the security groups downstream for the RDS and Redis stacks
        self.output_props = props.copy()
        self.output_props['sg_rds'] = sg_rds
        self.output_props['sg_redis'] = sg_redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  RDS stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You can choose either cluster mode or instance mode by passing the &lt;code&gt;cluster&lt;/code&gt; parameter as a stack argument&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You cannot pass plaintext credentials around, so CDK enforces the use of &lt;code&gt;Credentials&lt;/code&gt;. This code actually generates the password in &lt;code&gt;Secrets Manager&lt;/code&gt; (an L2 construct that CDK provides to enforce best practice)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-rds.Credentials.html" rel="noopener noreferrer"&gt;class Credentials · AWS CDK&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
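To read that generated password back at runtime, you can query Secrets Manager. This is a minimal sketch, assuming the secret stores CDK's usual JSON blob with a `password` key; the client is injected (e.g. `boto3.client("secretsmanager")`) and the secret id is whatever CDK created:

```python
import json

def get_db_password(secrets_client, secret_id):
    """Return the password from a CDK-generated RDS secret.

    secrets_client is assumed to be e.g. boto3.client("secretsmanager").
    """
    resp = secrets_client.get_secret_value(SecretId=secret_id)
    secret = json.loads(resp["SecretString"])  # CDK stores a JSON document
    return secret["password"]
```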

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;class RDSStack&lt;span class="o"&gt;(&lt;/span&gt;core.NestedStack&lt;span class="o"&gt;)&lt;/span&gt;:

    def __init__&lt;span class="o"&gt;(&lt;/span&gt;self, scope: core.Construct, &lt;span class="nb"&gt;id&lt;/span&gt;: str, &lt;span class="nb"&gt;env&lt;/span&gt;, props, &lt;span class="nv"&gt;cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;False, &lt;span class="k"&gt;**&lt;/span&gt;kwargs&lt;span class="o"&gt;)&lt;/span&gt; -&amp;gt; None:
        super&lt;span class="o"&gt;()&lt;/span&gt;.__init__&lt;span class="o"&gt;(&lt;/span&gt;scope, &lt;span class="nb"&gt;id&lt;/span&gt;, &lt;span class="k"&gt;**&lt;/span&gt;kwargs&lt;span class="o"&gt;)&lt;/span&gt;

        &lt;span class="c"&gt;#TEMP without ASG&lt;/span&gt;
        &lt;span class="c"&gt;# security_groups = [ec2.SecurityGroup(&lt;/span&gt;
        &lt;span class="c"&gt;#         self,&lt;/span&gt;
        &lt;span class="c"&gt;#         id="ecs-sample-mysql",&lt;/span&gt;
        &lt;span class="c"&gt;#         vpc=props['vpc'],&lt;/span&gt;
        &lt;span class="c"&gt;#         security_group_name="ecs-sample-mysql"&lt;/span&gt;
        &lt;span class="c"&gt;# )]&lt;/span&gt;

        vpc &lt;span class="o"&gt;=&lt;/span&gt; props[&lt;span class="s1"&gt;'vpc'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
        &lt;span class="nv"&gt;security_groups&lt;/span&gt;&lt;span class="o"&gt;=[&lt;/span&gt;props[&lt;span class="s1"&gt;'sg_rds'&lt;/span&gt;&lt;span class="o"&gt;]]&lt;/span&gt;
        credential &lt;span class="o"&gt;=&lt;/span&gt; rds.Credentials.from_username&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"admin"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        private_subnet_selections &lt;span class="o"&gt;=&lt;/span&gt; ec2.SubnetSelection&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;subnet_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ec2.SubnetType.PRIVATE&lt;span class="o"&gt;)&lt;/span&gt;
        subnet_group &lt;span class="o"&gt;=&lt;/span&gt; rds.SubnetGroup&lt;span class="o"&gt;(&lt;/span&gt;self, &lt;span class="s2"&gt;"sample-rds-subnet-group"&lt;/span&gt;,
                                       &lt;span class="nv"&gt;vpc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vpc,
                                       &lt;span class="nv"&gt;subnet_group_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sample-rds-subnet-group"&lt;/span&gt;,
                                       &lt;span class="nv"&gt;vpc_subnets&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;private_subnet_selections,
                                       &lt;span class="nv"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sample-rds-subnet-group"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        self.output_props &lt;span class="o"&gt;=&lt;/span&gt; props.copy&lt;span class="o"&gt;()&lt;/span&gt;

        &lt;span class="k"&gt;if &lt;/span&gt;not cluster:
            rds_instance &lt;span class="o"&gt;=&lt;/span&gt; rds.DatabaseInstance&lt;span class="o"&gt;(&lt;/span&gt;
                self, &lt;span class="s2"&gt;"RDS-instance"&lt;/span&gt;,
                &lt;span class="nv"&gt;database_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sample"&lt;/span&gt;,
                &lt;span class="nv"&gt;engine&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rds.DatabaseInstanceEngine.mysql&lt;span class="o"&gt;(&lt;/span&gt;
                    &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rds.MysqlEngineVersion.VER_8_0_16
                &lt;span class="o"&gt;)&lt;/span&gt;,
                &lt;span class="nv"&gt;credentials&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;credential,
                &lt;span class="nv"&gt;instance_identifier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ecs-sample-db"&lt;/span&gt;,

                &lt;span class="nv"&gt;vpc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vpc,
                &lt;span class="nv"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3306,
                &lt;span class="nv"&gt;instance_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ec2.InstanceType.of&lt;span class="o"&gt;(&lt;/span&gt;
                    ec2.InstanceClass.BURSTABLE3,
                    ec2.InstanceSize.MICRO,
                &lt;span class="o"&gt;)&lt;/span&gt;,
                &lt;span class="nv"&gt;subnet_group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;subnet_group,
                &lt;span class="nv"&gt;vpc_subnets&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;private_subnet_selections,
                &lt;span class="nv"&gt;removal_policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;core.RemovalPolicy.DESTROY,
                &lt;span class="nv"&gt;deletion_protection&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;False,
                &lt;span class="nv"&gt;security_groups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;security_groups

            &lt;span class="o"&gt;)&lt;/span&gt;
            core.CfnOutput&lt;span class="o"&gt;(&lt;/span&gt;self, &lt;span class="s2"&gt;"RDS_instnace_endpoint"&lt;/span&gt;, &lt;span class="nv"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rds_instance.db_instance_endpoint_address&lt;span class="o"&gt;)&lt;/span&gt;
            self.output_props[&lt;span class="s1"&gt;'rds'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; rds_instance
            &lt;span class="k"&gt;else&lt;/span&gt;:
            instance_props &lt;span class="o"&gt;=&lt;/span&gt; rds.InstanceProps&lt;span class="o"&gt;(&lt;/span&gt;
                &lt;span class="nv"&gt;vpc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vpc,
                &lt;span class="nv"&gt;security_groups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;security_groups,
                &lt;span class="nv"&gt;vpc_subnets&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;private_subnet_selections
            &lt;span class="o"&gt;)&lt;/span&gt;
            rds_cluster &lt;span class="o"&gt;=&lt;/span&gt; rds.DatabaseCluster&lt;span class="o"&gt;(&lt;/span&gt;
                self, &lt;span class="s2"&gt;"RDS-cluster"&lt;/span&gt;,
                &lt;span class="nv"&gt;cluster_identifier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ecs-sample-db-cluster"&lt;/span&gt;,
                &lt;span class="nv"&gt;instance_props&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;instance_props,
                &lt;span class="nv"&gt;engine&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rds.DatabaseClusterEngine.aurora_mysql&lt;span class="o"&gt;(&lt;/span&gt;
                    &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rds.AuroraMysqlEngineVersion.VER_2_07_1
                &lt;span class="o"&gt;)&lt;/span&gt;,
                &lt;span class="nv"&gt;credentials&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;credential,
                &lt;span class="nv"&gt;default_database_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sample"&lt;/span&gt;,
                &lt;span class="nv"&gt;instances&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1,
                &lt;span class="nv"&gt;subnet_group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;subnet_group,
                &lt;span class="nv"&gt;removal_policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;core.RemovalPolicy.DESTROY,
                &lt;span class="nv"&gt;deletion_protection&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;False
            &lt;span class="o"&gt;)&lt;/span&gt;
            core.CfnOutput&lt;span class="o"&gt;(&lt;/span&gt;self, &lt;span class="s2"&gt;"RDS_cluster_endpoint"&lt;/span&gt;, &lt;span class="nv"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rds_cluster.cluster_endpoint.hostname&lt;span class="o"&gt;)&lt;/span&gt;
            self.output_props[&lt;span class="s1"&gt;'rds'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; rds_cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Redis Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Redis replication groups currently only have an L1 construct in CDK. Therefore I used &lt;code&gt;CfnReplicationGroup&lt;/code&gt; to create ElastiCache for Redis.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis = cache.CfnReplicationGroup(self,
                                              f"{id}-replication-group",
                                              replication_group_description=f"{id}-replication group",
                                              cache_node_type="cache.t3.micro",
                                              cache_parameter_group_name=cache_parameter_group_name,
                                              security_group_ids=[sg_redis.security_group_id],
                                              cache_subnet_group_name=subnets_group.cache_subnet_group_name,
                                              engine="redis",
                                              engine_version="5.0.4",
                                              # node_group_configuration
                                              num_node_groups=1, #shard
                                              replicas_per_node_group=1 #one replica
                                              )
        redis.add_depends_on(subnets_group)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Deploy
&lt;/h1&gt;

&lt;p&gt;Let's deploy by typing two commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bootstrap
&lt;/h3&gt;

&lt;p&gt;This command creates an S3 bucket that CDK uses to stage CloudFormation files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk bootstrap
&lt;span class="c"&gt;#check cloudformation for CDKtoolkit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fou032ase1sl5w14kgbpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fou032ase1sl5w14kgbpv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftn6btxnqpin4whvnd30z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftn6btxnqpin4whvnd30z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Deploy
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can monitor the nested stacks being deployed in your shell. You can also follow detailed progress in the CloudFormation console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftv5zgod7ns2cktxj666b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftv5zgod7ns2cktxj666b.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmjj5fve7dsx6w0k34ayc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmjj5fve7dsx6w0k34ayc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you get any errors, go to the CloudFormation stack in the console and review the Events tab to find the reason for the failure. &lt;/p&gt;
&lt;h3&gt;
  
  
  Destroy
&lt;/h3&gt;

&lt;p&gt;Once you have finished working with the sample, you can easily destroy it with a single command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or just delete the stack in CloudFormation console.&lt;/p&gt;

&lt;h1&gt;
  
  
  Takeaway
&lt;/h1&gt;

&lt;p&gt;I have reduced 900 lines of JSON to fewer than 500 lines of Python. Better yet, the stacks are modular and easy to revise whenever necessary. AWS CDK boosts your productivity with IaC. If you have been using CloudFormation, you should definitely try AWS CDK. &lt;/p&gt;

&lt;p&gt;Take advantage of IDE!&lt;/p&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/aws-cloud-development-kit-cdk-typescript-and-python-are-now-generally-available/" rel="noopener noreferrer"&gt;AWS Cloud Development Kit (CDK) - TypeScript and Python are Now Generally Available | Amazon Web Services&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/examples.html" rel="noopener noreferrer"&gt;Examples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aws-samples/aws-cdk-examples#Python" rel="noopener noreferrer"&gt;aws-samples/aws-cdk-examples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aws-samples/startup-kit-templates" rel="noopener noreferrer"&gt;aws-samples/startup-kit-templates&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cdkworkshop.com/30-python/20-create-project.html" rel="noopener noreferrer"&gt;cdkworkshop.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/api/latest/" rel="noopener noreferrer"&gt;AWS CDK · AWS CDK Reference Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://levelup.gitconnected.com/aws-cdk-for-beginners-e6c05ad91895" rel="noopener noreferrer"&gt;AWS CDK for Beginners&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Source
&lt;/h1&gt;

&lt;p&gt;This article was originally posted on &lt;a href="https://blog.kokospapa.com/AWS-CDK-use-your-favorite-language-to-define-cloud-infrastructure-9575fc56e7924981823d1878417c9642" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudformation</category>
      <category>iac</category>
      <category>awscdk</category>
    </item>
    <item>
      <title>Migrate python async worker to asynchrounous Lambda</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Sat, 10 Oct 2020 06:44:56 +0000</pubDate>
      <link>https://dev.to/kokospapa8/migrate-python-async-worker-to-asynchrounous-lambda-8k3</link>
      <guid>https://dev.to/kokospapa8/migrate-python-async-worker-to-asynchrounous-lambda-8k3</guid>
      <description>&lt;p&gt;&lt;a href="https://blog.kokospapa.com/Migrate-python-async-worker-to-asynchrounous-Lambda-36a0551c835b4bd1b6559b0d9450bb56" rel="noopener noreferrer"&gt;Original blog post&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;When you build a web service, there are times when you need a simple async worker for background processing. I am a big fan of Django, and whenever I needed an async worker I reached for &lt;code&gt;celery&lt;/code&gt; or &lt;code&gt;rq&lt;/code&gt; for convenience. &lt;code&gt;celery&lt;/code&gt; and other async frameworks provide a lot of cool features, but most of the time all I need is a simple background process that will not interrupt the user experience.&lt;/p&gt;

&lt;p&gt;Over time, as the paradigm shifted to the cloud, I started considering Lambda as a new async worker solution because of deployment complexity, scaling, and cost constraints. Here are the pros and cons of using Lambda instead of &lt;code&gt;celery&lt;/code&gt; in a cloud (non-local) environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9to8z3plvgbfouf085ea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9to8z3plvgbfouf085ea.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Advantage
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;FaaS advantages

&lt;ul&gt;
&lt;li&gt;scaling benefit - cost, ops, etc&lt;/li&gt;
&lt;li&gt;Resiliency&lt;/li&gt;
&lt;li&gt;faster development&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;No need to maintain a broker (e.g. ElastiCache Redis)&lt;/li&gt;

&lt;li&gt;Smaller side effect on code change&lt;/li&gt;

&lt;li&gt;Fully-managed&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Disadvantage
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;FaaS disadvantages

&lt;ul&gt;
&lt;li&gt;limited state&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Hard to reuse app code

&lt;ul&gt;
&lt;li&gt;Django ORM or settings module&lt;/li&gt;
&lt;li&gt;You can still use them with the django package installed, but it is inefficient&lt;/li&gt;
&lt;li&gt;Can't reuse code from the django app&lt;/li&gt;
&lt;li&gt;The Django package is too heavy to use on Lambda (still usable but too complex to set up)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Latency on startup&lt;/li&gt;

&lt;li&gt;JSON-formatted invocation parameters

&lt;ul&gt;
&lt;li&gt;If you need to pass binary or large parameters, consider S3 or another medium&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Limitation
&lt;/h3&gt;

&lt;p&gt;Most of the early-stage limitations have been lifted as Lambda has matured (e.g. concurrent executions, layers, etc.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html" rel="noopener noreferrer"&gt;AWS Lambda quotas&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;timeout - 15 min

&lt;ul&gt;
&lt;li&gt;If your async worker takes more than 15 minutes, you need to break the code up into multiple Lambda functions or use another distributed system such as EMR.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Invocation payload

&lt;ul&gt;
&lt;li&gt;asynchronous - &lt;strong&gt;256KB&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
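The 256KB limit is easy to guard at the call site. A minimal sketch of an asynchronous invoke (`InvocationType="Event"`) with a payload size check; the client is injected (e.g. `boto3.client("lambda")`) and the names are illustrative:

```python
import json

MAX_ASYNC_PAYLOAD = 256 * 1024  # async invocation payload limit (256 KB)

def invoke_async(lambda_client, function_name, payload):
    """Fire-and-forget Lambda invocation with a payload-size guard.

    lambda_client is assumed to be e.g. boto3.client("lambda").
    """
    body = json.dumps(payload).encode("utf-8")
    if len(body) > MAX_ASYNC_PAYLOAD:
        # Too big for an async invoke: stage the data in S3 and pass the key
        raise ValueError("payload exceeds 256KB async limit; use S3 instead")
    return lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="Event",  # asynchronous: Lambda queues the event and returns 202
        Payload=body,
    )
```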

&lt;h3&gt;
  
  
  Consideration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;VPC, Subnet, Security Group

&lt;ul&gt;
&lt;li&gt;If you are accessing resources in a private VPC, you need to place the Lambda in the same VPC/subnets.&lt;/li&gt;
&lt;li&gt;If your Lambda needs to talk to the outside world from a private subnet, the subnet needs a NAT gateway.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Maintenance&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Setup
&lt;/h1&gt;

&lt;p&gt;AWS recently came up with SAM (Serverless Application Model), which utilizes CloudFormation for easier deployment of Lambda applications. SAM provides functionality such as API Gateway and state machines, but we will only use the &lt;code&gt;AWS::Serverless::Function&lt;/code&gt; resource for our purpose. You can also consider &lt;code&gt;Zappa&lt;/code&gt; as an alternative.&lt;/p&gt;

&lt;h2&gt;
  
  
  Github
&lt;/h2&gt;

&lt;p&gt;Refer to this repo for details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kokospapa8/async-lambda-sample-app" rel="noopener noreferrer"&gt;kokospapa8/async-lambda-sample-app&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite
&lt;/h2&gt;

&lt;p&gt;You need the following AWS resources created in order to invoke an asynchronous Lambda.&lt;/p&gt;

&lt;h3&gt;
  
  
  DLQ
&lt;/h3&gt;

&lt;p&gt;In order to use an &lt;code&gt;asynchronous lambda&lt;/code&gt;, you need to provide a DLQ for events that still fail after retries. Click the link for details. I will create an SNS topic in this post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyun3xmo8yu0m3wius0nw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyun3xmo8yu0m3wius0nw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#dlq" rel="noopener noreferrer"&gt;Asynchronous invocation&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  SECRET
&lt;/h3&gt;

&lt;p&gt;Let's assume you have a MySQL DB in a public VPC, or S3, and need to access them to process some data. You could add secrets to the Lambda env, but it is not a good idea to put secrets in plaintext env variables. You can use &lt;code&gt;AWS Secrets Manager&lt;/code&gt;, &lt;code&gt;SSM Parameter Store&lt;/code&gt; or &lt;code&gt;Vault&lt;/code&gt;. I will use &lt;code&gt;SSM Parameter Store&lt;/code&gt; for this post. Refer to the previous post for creating secrets (parameter store section).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.notion.so/How-to-deploy-django-app-to-ECS-Fargate-part3-13875ffa0c7e4e1da3d1c46cad0bae94" rel="noopener noreferrer"&gt;How to deploy django app to ECS Fargate part3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;arn:aws:ssm:&amp;lt;region&amp;gt;:&amp;lt;account_id&amp;gt;:parameter/SAMPLE/BUCKET&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo3do07z9o149wb2i24l7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo3do07z9o149wb2i24l7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  IAM ROLE
&lt;/h3&gt;

&lt;p&gt;You need a Lambda execution role with the following policies and permissions attached.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Policies
- AWSLambdaExecute
Permissions
- sns - sns:Publish
- kms - kms:Decrypt
- ssm - ssm:GetParameters, ssm:DescribeParameters, ssm:GetParameter
You need the following permissions if you want to place your Lambda function in a designated VPC:
- ec2 - ec2:CreateNetworkInterface, ec2:DeleteNetworkInterface, ec2:DescribeSecurityGroups


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  SAM
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html" rel="noopener noreferrer"&gt;What is the AWS Serverless Application Model (AWS SAM)?&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Install (on macOS)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-mac.html" rel="noopener noreferrer"&gt;Installing the AWS SAM CLI on macOS&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws configure
AWS Access Key ID &lt;span class="o"&gt;[&lt;/span&gt;None]: your_access_key_id
AWS Secret Access Key &lt;span class="o"&gt;[&lt;/span&gt;None]: your_secret_access_key
Default region name &lt;span class="o"&gt;[&lt;/span&gt;None]: 
Default output format &lt;span class="o"&gt;[&lt;/span&gt;None]:

&lt;span class="nv"&gt;$ &lt;/span&gt;/bin/bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/Homebrew/install/master/install.sh&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;brew tap aws/tap
&lt;span class="nv"&gt;$ &lt;/span&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;aws-sam-cli


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Development
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;sam init


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff856gvse9rpaspn55wgt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff856gvse9rpaspn55wgt.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command will create the following structure: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

sam-app/
   ├── README.md
   ├── events/
   │   └── event.json
   ├── hello_world/
   │   ├── __init__.py
   │   ├── app.py            &lt;span class="c"&gt;#Contains your AWS Lambda handler logic.&lt;/span&gt;
   │   └── requirements.txt  &lt;span class="c"&gt;#Contains any Python dependencies the application requires, used for sam build&lt;/span&gt;
   ├── template.yaml         &lt;span class="c"&gt;#Contains the AWS SAM template defining your application's AWS resources.&lt;/span&gt;
   └── tests/
       └── unit/
           ├── __init__.py
           └── test_handler.py


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  template.yaml
&lt;/h3&gt;

&lt;p&gt;Update the template file as needed. For more info on the syntax, refer to the following link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html" rel="noopener noreferrer"&gt;AWS::Serverless::Function&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;template.yaml (note that the environment variable is named &lt;code&gt;S3_BUCKET_PARAM&lt;/code&gt; to match what the handler below reads)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: &amp;gt;
  sample-app
  Sample SAM Template for sample-app

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  SampleAppdFunction:
    # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: sample_app/
      DeadLetterQueue:
        Type: SNS
        TargetArn: arn:aws:sns:::sample-dlq
      Handler: app.lambda_handler
      Runtime: python3.8
      Description: sample lambda
      EventInvokeConfig:
        MaximumEventAgeInSeconds: 60
        MaximumRetryAttempts: 2
      FunctionName: SampleApp
      Role: arn:aws:iam:::role/sample_lambda_execution_role
      Environment:
        Variables:
          S3_BUCKET_PARAM: ""

Outputs:
  SampleAppdFunction:
    Description: "Sample app Function ARN"
    Value: !GetAtt SampleAppdFunction.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;boto3&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- app.py
This sample app receives image_url as parameter and puts the image on s3bucket name provided on s3 bucket. (AWSLambdaExecute policy has s3 access permission)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;import boto3&lt;br&gt;
import os&lt;br&gt;
import requests&lt;/p&gt;

&lt;p&gt;def lambda_handler(event, context):&lt;br&gt;
    image_url = event['image_url']&lt;br&gt;
    S3_BUCKET_PARAM = os.environ['S3_BUCKET_PARAM']&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# download

image = requests.get(image_url).content


ssm_client = boto3.client('ssm', region_name="ap-northeast-2")
response = ssm_client.get_parameter(
    Name=S3_BUCKET_PARAM,
    WithDecryption=True
)
bucket_name = response['Parameter']['Value']

s3_client = boto3.client('s3', region_name="ap-northeast-2")
response = s3_client.put_object(
    Bucket=bucket_name,
    Key="image.png",
    Body=image
)

return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
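&lt;p&gt;Running the handler locally requires real AWS access. As a test-friendly variant (my own sketch, not part of the sample project; &lt;code&gt;handler_core&lt;/code&gt; and the fake clients are hypothetical names), the clients can be injected so the logic is unit-testable offline, which fits the &lt;code&gt;tests/unit/&lt;/code&gt; folder in the scaffold:&lt;/p&gt;

```python
# Hypothetical test-friendly variant: AWS clients and the HTTP getter are
# injected, so the logic runs without network or AWS credentials.
def handler_core(event, ssm_client, s3_client, http_get):
    image = http_get(event['image_url'])
    param = ssm_client.get_parameter(Name="s3-bucket-param", WithDecryption=True)
    bucket_name = param['Parameter']['Value']
    return s3_client.put_object(Bucket=bucket_name, Key="image.png", Body=image)

# Minimal fakes standing in for the boto3 clients:
class FakeSSM:
    def get_parameter(self, Name, WithDecryption):
        return {"Parameter": {"Value": "my-bucket"}}

class FakeS3:
    def put_object(self, Bucket, Key, Body):
        return {"Bucket": Bucket, "Key": Key, "Size": len(Body)}

result = handler_core(
    {"image_url": "https://example.com/img.png"},
    FakeSSM(), FakeS3(),
    http_get=lambda url: b"fake-bytes",
)
print(result)  # {'Bucket': 'my-bucket', 'Key': 'image.png', 'Size': 10}
```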



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
## Build

Once your logic code is ready, you can build it using following command 

```yaml


sam build -t template.yaml --region &amp;lt;region_name&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can test your function locally with the following command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;sam &lt;span class="nb"&gt;local &lt;/span&gt;invoke
&lt;span class="c"&gt;# if you need to provide environment variable or paramters, use following command&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;sam &lt;span class="nb"&gt;local &lt;/span&gt;invoke &lt;span class="nt"&gt;-e&lt;/span&gt; events/event.json &lt;span class="nt"&gt;--env-vars&lt;/span&gt; env.json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;#env.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"SampleAppdFunction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"S3_BUCKET_PARAM_ARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"parameterstore_arn"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;#event.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"image_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://picsum.photos/200/300"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;Once your function is tested locally, let's deploy it to the cloud. You have two options: a guided deploy that prompts for each value, or a one-shot deploy with all arguments on the command line (as in the CD section later). For the guided deploy, enter the appropriate response at each prompt.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;sam deploy &lt;span class="nt"&gt;--guided&lt;/span&gt;

Configuring SAM deploy
&lt;span class="o"&gt;======================&lt;/span&gt;

    Looking &lt;span class="k"&gt;for &lt;/span&gt;samconfig.toml :  Found
    Reading default arguments  :  Success

    Setting default arguments &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s1"&gt;'sam deploy'&lt;/span&gt;
    &lt;span class="o"&gt;=========================================&lt;/span&gt;
    Stack Name &lt;span class="o"&gt;[&lt;/span&gt;sample-app]: y
    AWS Region &lt;span class="o"&gt;[&lt;/span&gt;ap-northeast-2]: y
    &lt;span class="c"&gt;#Shows you resources changes to be deployed and require a 'Y' to initiate deploy&lt;/span&gt;
    Confirm changes before deploy &lt;span class="o"&gt;[&lt;/span&gt;Y/n]: y
    &lt;span class="c"&gt;#SAM needs permission to be able to create roles to connect to the resources in your template&lt;/span&gt;
    Allow SAM CLI IAM role creation &lt;span class="o"&gt;[&lt;/span&gt;Y/n]: y
    Save arguments to samconfig.toml &lt;span class="o"&gt;[&lt;/span&gt;Y/n]: y



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


    Looking for resources needed for deployment: Found!

        Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-fdwl3buruuk1
        A different default S3 bucket can be set in samconfig.toml

    Deploying with following values
    ===============================
    Stack name                 : sample-app
    Region                     : ap-northeast-2
    Confirm changeset          : True
    Deployment s3 bucket       : aws-sam-cli-managed-default-samclisourcebucket-fdwl3buruuk1
    Capabilities               : ["CAPABILITY_IAM"]
    Parameter overrides        : {}

Initiating deployment
=====================

    Saved arguments to config file
    Running 'sam deploy' for future deployments will use the parameters saved above.
    The above parameters can be changed by modifying samconfig.toml
    Learn more about samconfig.toml syntax at 
    https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html

Waiting for changeset to be created..

CloudFormation stack changeset
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Operation                                                    LogicalResourceId                                            ResourceType                                               
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Add                                                        SampleAppdFunctionEventInvokeConfig                          AWS::Lambda::EventInvokeConfig                             
+ Add                                                        SampleAppdFunction                                           AWS::Lambda::Function                                      
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:ap-northeast-2:982947632035:changeSet/samcli-deploy1602235720/1bb1b8a3-da32-4de9-9a14-49aaae3f066f


Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]: y

2020-10-09 18:29:04 - Waiting for stack create/update to complete

CloudFormation events from changeset
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ResourceStatus                                ResourceType                                  LogicalResourceId                             ResourceStatusReason                        
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS                            AWS::Lambda::Function                         SampleAppdFunction                            -                                           
CREATE_IN_PROGRESS                            AWS::Lambda::Function                         SampleAppdFunction                            Resource creation Initiated                 
CREATE_COMPLETE                               AWS::Lambda::Function                         SampleAppdFunction                            -                                           
CREATE_IN_PROGRESS                            AWS::Lambda::EventInvokeConfig                SampleAppdFunctionEventInvokeConfig           -                                           
CREATE_IN_PROGRESS                            AWS::Lambda::EventInvokeConfig                SampleAppdFunctionEventInvokeConfig           Resource creation Initiated                 
CREATE_COMPLETE                               AWS::Lambda::EventInvokeConfig                SampleAppdFunctionEventInvokeConfig           -                                           
CREATE_COMPLETE                               AWS::CloudFormation::Stack                    sample-app                                    -                                           
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

CloudFormation outputs from deployed stack
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Outputs                                                                                                                                                                              
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Key                 SampleAppdFunction                                                                                                                                               
Description         Hello World Lambda Function ARN                                                                                                                                  
Value               arn:aws:lambda:ap-northeast-2:982947632035:function:SampleApp                                                                                                    
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Successfully created/updated stack - sample-app in ap-northeast-2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
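&lt;p&gt;For reference, the arguments saved above end up in &lt;code&gt;samconfig.toml&lt;/code&gt; at the project root. A minimal sketch of what it might contain for this stack (the managed bucket name is generated per account, so yours will differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
version = 0.1
[default.deploy.parameters]
stack_name = "sample-app"
s3_bucket = "aws-sam-cli-managed-default-samclisourcebucket-fdwl3buruuk1"
region = "ap-northeast-2"
confirm_changeset = true
capabilities = "CAPABILITY_IAM"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;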

&lt;p&gt;Check the CloudFormation and Lambda consoles for successful deployment.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh4rcy28q0eaflxwf1s61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh4rcy28q0eaflxwf1s61.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm6swx0h8t5f3r9v9dvzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm6swx0h8t5f3r9v9dvzc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Invoke function in lambda console
&lt;/h2&gt;

&lt;p&gt;Once the Lambda is deployed, you can create a test event in the console.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbqdo1i2rbf6b8b1yoy8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbqdo1i2rbf6b8b1yoy8g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
You should see successful log output.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3qbsa46a5g8a0p47wo4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3qbsa46a5g8a0p47wo4c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Invoke from python app
&lt;/h3&gt;

&lt;p&gt;Now you need to invoke the Lambda function from your app code. Here is an example you can use.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;invoke_from_app.py&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;boto&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;json&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;SETTINGS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"AWS_DEFAULT_REGION"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ap-northeast-2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ENV"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"prod"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;main():&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;lambda_client&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;boto&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="err"&gt;.client('lambda',&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;region_name=SETTINGS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;'AWS_DEFAULT_REGION'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;payload&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"image_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://picsum.photos/200/300"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;ret&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;lambda_client.invoke(&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;FunctionName=&lt;/span&gt;&lt;span class="s2"&gt;"SampleApp"&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;InvocationType=&lt;/span&gt;&lt;span class="s2"&gt;"DryRun"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;SETTINGS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;'ENV'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;else&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Event"&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;Payload=json.dumps(payload)&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;print(ret)&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;__name__&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"__main__"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;main()&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
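&lt;p&gt;The inline conditional on &lt;code&gt;InvocationType&lt;/code&gt; above is easy to misread, so here is a small helper that makes the intent explicit. This is my own sketch, not part of the sample project: &lt;code&gt;DryRun&lt;/code&gt; only validates parameters and IAM permissions without executing the function, while &lt;code&gt;Event&lt;/code&gt; invokes it asynchronously (fire-and-forget), which suits a background-worker replacement.&lt;/p&gt;

```python
def invocation_type(env: str) -> str:
    """Pick the Lambda InvocationType for a given environment.

    "DryRun" validates parameters and permissions without running the
    function; "Event" is an asynchronous, fire-and-forget invocation.
    """
    return "DryRun" if env == "test" else "Event"

# In the script above this would be used as:
#   InvocationType=invocation_type(SETTINGS['ENV'])
print(invocation_type("test"))   # DryRun
print(invocation_type("prod"))   # Event
```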
&lt;h1&gt;
  
  
  Monitoring
&lt;/h1&gt;

&lt;p&gt;Celery or RQ relies on native or third-party tools such as &lt;code&gt;sentry&lt;/code&gt; for monitoring. There are several options for monitoring Lambda functions, and a SAM application also provides a minimal monitoring environment. Go to the Lambda service and the &lt;code&gt;application&lt;/code&gt; menu, then select the &lt;code&gt;Monitoring&lt;/code&gt; tab to see the dashboard and CloudWatch logs. You can also configure &lt;code&gt;x-ray&lt;/code&gt; for tracing.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0e5hcdaio3yixo6p228j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0e5hcdaio3yixo6p228j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc84lrxlh4gfi1ecua5m7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc84lrxlh4gfi1ecua5m7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  CD
&lt;/h2&gt;

&lt;p&gt;Let's talk about the CD pipeline. I will use a conditional trigger on a GitHub workflow to build and deploy the function.&lt;/p&gt;
&lt;h3&gt;
  
  
  Github workflow
&lt;/h3&gt;

&lt;p&gt;The following workflow file will build and deploy to Lambda. You need to set repository secrets with your AWS credentials.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1ax8k81yxhib2a5x2xqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1ax8k81yxhib2a5x2xqc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;

&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Lambda deployment&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Lambda deploy&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS credentials&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
        &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
        &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ap-northeast-2&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build, and deploy&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"&lt;/span&gt;
        &lt;span class="s"&gt;test -d ~/.linuxbrew &amp;amp;&amp;amp; eval $(~/.linuxbrew/bin/brew shellenv)&lt;/span&gt;
        &lt;span class="s"&gt;test -d /home/linuxbrew/.linuxbrew &amp;amp;&amp;amp; eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)&lt;/span&gt;
        &lt;span class="s"&gt;test -r ~/.bash_profile &amp;amp;&amp;amp; echo "eval \$($(brew --prefix)/bin/brew shellenv)" &amp;gt;&amp;gt;~/.bash_profile&lt;/span&gt;
        &lt;span class="s"&gt;brew --version&lt;/span&gt;
        &lt;span class="s"&gt;brew tap aws/tap&lt;/span&gt;
        &lt;span class="s"&gt;brew install aws-sam-cli&lt;/span&gt;
        &lt;span class="s"&gt;sam --version&lt;/span&gt;

        &lt;span class="s"&gt;sam build&lt;/span&gt;
        &lt;span class="s"&gt;sam deploy --stack-name sample-app --region ap-northeast-2 -t .aws-sam/build/template.yaml --capabilities CAPABILITY_IAM --no-confirm-changeset --s3-bucket aws-sam-cli-managed-default-samclisourcebucket-fdwl3buruuk1&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you want to implement a canary or linear deployment using CodeDeploy, refer to the following link for more detail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html" rel="noopener noreferrer"&gt;Deploying serverless applications gradually&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Once the Lambda was deployed, I removed 20% of my app code related to RQ. I also got to remove the broker (ElastiCache Redis) service entirely. Moreover, since I was using ECS and EKS, removing the worker tasks and pods freed up room for more API tasks and pods. I am pretty happy with the result: I no longer need to manage another container in production. If you have simple workers running on RQ or Celery, I recommend that you try this setup. Thank you for reading.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>django</category>
    </item>
    <item>
      <title>How I prepared for GCP Professional Certificate Test</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Thu, 10 Sep 2020 05:06:43 +0000</pubDate>
      <link>https://dev.to/kokospapa8/how-i-prepared-for-gcp-professional-certificate-test-llf</link>
      <guid>https://dev.to/kokospapa8/how-i-prepared-for-gcp-professional-certificate-test-llf</guid>
      <description>&lt;p&gt;original blog &lt;a href="https://blog.kokospapa.com/How-I-prepared-for-GCP-Professional-Certificate-Test-122e36c34f474c65922f0bccd9639eaa"&gt;link&lt;/a&gt; &lt;/p&gt;

&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;The cloud environment has grown exponentially over the course of my career (AWS didn't have RDS when I started), and I wanted to get a firm grasp on the subject before venturing into a new job. During the COVID-19 pandemic, I racked up four &lt;strong&gt;professional&lt;/strong&gt; cloud certificates (2 AWS, 2 GCP), and I am proud of the achievement. Being certified is a rewarding experience, but the knowledge and experience I gained were even more rewarding. If you are just getting interested, preparing for and taking the test is a great first step; if you are already experienced, it will harden your knowledge in every aspect.&lt;/p&gt;

&lt;h1&gt;
  
  
  Where to start
&lt;/h1&gt;

&lt;p&gt;If you do not know where to start, I recommend starting with Coursera. You can get the first week free, and Google provides 50% off the first month. (Refer to the last section for the link.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Coursera
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/professional-certificates/cloud-engineering-gcp"&gt;Cloud Engineering with Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/professional-certificates/gcp-cloud-architect"&gt;Cloud Architecture with Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/professional-certificates/gcp-data-engineering"&gt;Data Engineering with Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Qwiklab
&lt;/h2&gt;

&lt;p&gt;Getting hands-on is essential: you make your knowledge tangible by actually working with the product. Google offers a 30-day free subscription. I think the quest badges they offer are a great example of gamification done the right way. Learning through Qwiklabs was a really fun process for me. You synthesize your knowledge as you get hands-on with what you have learned through texts and slides.&lt;/p&gt;

&lt;h3&gt;
  
  
  Badges
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.qwiklabs.com/public_profiles/7a8119ef-acf9-4590-aa98-87f67ceef467"&gt;Jinwook B. | Qwiklabs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kee6tsiQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b9vwrzb6nmh61vncx04s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kee6tsiQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b9vwrzb6nmh61vncx04s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you don't have much time, you should at least go through the following quests.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;For Cloud Architect&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.qwiklabs.com/quests/124"&gt;Cloud Architecture: Design, Implement, and Manage | Qwiklabs&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For Data Engineering&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.qwiklabs.com/quests/25"&gt;Data Engineering | Qwiklabs&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Practice exams
&lt;/h1&gt;

&lt;p&gt;Once you have firm knowledge, it's time for practice exams. I used the following courses, and they helped me get a good sense of how test questions are formed and what kinds of answers are presented. They don't cover all the topics, but you can definitely figure out what you are missing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0GtdOhMA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8tw770lgj6a87l0qfgf4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0GtdOhMA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8tw770lgj6a87l0qfgf4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Dig deeper into each subject
&lt;/h1&gt;

&lt;p&gt;By this time you are already an expert on several topics and can dive deeper into each of them. You know what you are looking for. These resources will help you get there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/gregsramblings/google-cloud-4-words"&gt;gregsramblings/google-cloud-4-words&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/docs/compare/aws#service_comparisons"&gt;Google Cloud for AWS Professionals&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sathishvj/awesome-gcp-certifications"&gt;sathishvj/awesome-gcp-certifications&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Taking exam
&lt;/h1&gt;

&lt;p&gt;I won't go into details on how to schedule and pay for the exam. Due to COVID-19, Google offers a remotely proctored exam for almost all the tests. It was really convenient to just take the exam at home. Of course, in-person might be better for some people, if your house is not a quiet or secure enough place for taking an exam.&lt;/p&gt;

&lt;p&gt;A couple of things I want to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can change your test time as long as it's more than 24 hours before the exam. (They will charge you if you change the time within 24 hours.)&lt;/li&gt;
&lt;li&gt;Prepare a government photo ID (in English); a passport is guaranteed to work.&lt;/li&gt;
&lt;li&gt;Have a good internet connection. It took me 30 minutes to start the exam because my Wi-Fi was choppy; I had to switch between ethernet and a hotspot to get a stable connection. It was an exhausting process, and by the time I started the test I was already beat.&lt;/li&gt;
&lt;li&gt;Solve the easier questions first. 120 minutes is enough time for 50 questions. Skip or mark for later anything you doubt or don't understand, and come back to it.&lt;/li&gt;
&lt;li&gt;I will add more as I think of them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Topics you should know
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Architect
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Case study

&lt;ul&gt;
&lt;li&gt;20% of the exam was scenario-based case studies (&lt;a href="https://cloud.google.com/certification/guides/cloud-architect/casestudy-mountkirkgames-rev2"&gt;Mountkirk&lt;/a&gt;, &lt;a href="https://cloud.google.com/certification/guides/cloud-architect/casestudy-dress4win-rev2"&gt;Dress4Win&lt;/a&gt; and &lt;a href="https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth-rev2"&gt;TerramEarth&lt;/a&gt;). They are covered on the official website and also on Coursera&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Administratives

&lt;ul&gt;
&lt;li&gt;IAM

&lt;ul&gt;
&lt;li&gt;role&lt;/li&gt;
&lt;li&gt;policy, effective policy&lt;/li&gt;
&lt;li&gt;hierarchy&lt;/li&gt;
&lt;li&gt;binding&lt;/li&gt;
&lt;li&gt;service account&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Billing&lt;/li&gt;
&lt;li&gt;Audit log&lt;/li&gt;
&lt;li&gt;free quota&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Compute

&lt;ul&gt;
&lt;li&gt;difference between GCE, GKE, App Engine, and Cloud Functions&lt;/li&gt;
&lt;li&gt;app engine runtime for standard and flexible&lt;/li&gt;
&lt;li&gt;Basic Kubernetes concept

&lt;ul&gt;
&lt;li&gt;pod, services, ingress, etc&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Storage

&lt;ul&gt;
&lt;li&gt;persistent disks&lt;/li&gt;
&lt;li&gt;filestore&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Stackdriver&lt;/li&gt;
&lt;li&gt;Network

&lt;ul&gt;
&lt;li&gt;VPC, subnet, firewall&lt;/li&gt;
&lt;li&gt;Cloud DNS, VPN, Router&lt;/li&gt;
&lt;li&gt;Difference between Dedicated, partner interconnect&lt;/li&gt;
&lt;li&gt;Types of Loadbalancers&lt;/li&gt;
&lt;li&gt;Direct Peering vs Carrier Peering&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Deployment&lt;/li&gt;
&lt;li&gt;Database types by scalability, distribution, replica consistency, multi-primary, transactions, joins and complex queries, latency, serverless

&lt;ul&gt;
&lt;li&gt;Storage transfer services vs transfer appliance vs gsutil&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Know which resources are global, regional, zonal services&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/compute/docs/regions-zones/global-regional-zonal-resources#globalresources"&gt;Global, regional, and zonal resources | Compute Engine Documentation&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;ETC&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SRE

&lt;ul&gt;
&lt;li&gt;SLI, SLO, SLA&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Opex, capex, TCO&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Data Engineer
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Dataflow

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ParDo&lt;/code&gt;, &lt;code&gt;PTransform&lt;/code&gt;, &lt;code&gt;PCollection&lt;/code&gt;, &lt;code&gt;Pipeline&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Windowing

&lt;ul&gt;
&lt;li&gt;session, fixed, sliding, global&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;deduplication&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Bigquery

&lt;ul&gt;
&lt;li&gt;BQML&lt;/li&gt;
&lt;li&gt;Billing&lt;/li&gt;
&lt;li&gt;Permission&lt;/li&gt;
&lt;li&gt;query optimization&lt;/li&gt;
&lt;li&gt;Responsive, predictive cache&lt;/li&gt;
&lt;li&gt;Partition&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Dataproc

&lt;ul&gt;
&lt;li&gt;cluster resize, modification&lt;/li&gt;
&lt;li&gt;HDFS vs Cloudstorage&lt;/li&gt;
&lt;li&gt;storage connectors&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Dataprep

&lt;ul&gt;
&lt;li&gt;recipe&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;ML Engine&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Storage&lt;/p&gt;

&lt;p&gt;Must know the storage limits, processing limits, and which services are NoOps or unmanaged&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;datastore&lt;/li&gt;
&lt;li&gt;cloud spanner

&lt;ul&gt;
&lt;li&gt;key design&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;cloud sql&lt;/li&gt;
&lt;li&gt;bigtable

&lt;ul&gt;
&lt;li&gt;bigtable key design&lt;/li&gt;
&lt;li&gt;performance optimization&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;bigquery&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Basic ML knowledge&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overfitting/underfitting&lt;/li&gt;
&lt;li&gt;False positive, False Negative&lt;/li&gt;
&lt;li&gt;Regression, Classification, Clustering&lt;/li&gt;
&lt;li&gt;Supervised, unsupervised learning&lt;/li&gt;
&lt;li&gt;Recall, Precision, Accuracy&lt;/li&gt;
&lt;li&gt;Regularization&lt;/li&gt;
&lt;li&gt;AUC&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AutoML&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AppEngine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloud Composer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;pubsub&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;retention&lt;/li&gt;
&lt;li&gt;push,pull&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
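&lt;p&gt;As a quick refresher on the basic ML metrics listed above, here is a small sketch with made-up confusion-matrix counts (illustrative only) showing how precision, recall, and accuracy relate:&lt;/p&gt;

```python
# Toy confusion-matrix counts -- illustrative only, not from any exam.
TP, FP, FN, TN = 8, 2, 2, 88

precision = TP / (TP + FP)                   # of predicted positives, fraction correct
recall    = TP / (TP + FN)                   # of actual positives, fraction found
accuracy  = (TP + TN) / (TP + FP + FN + TN)  # fraction of all predictions correct

print(precision, recall, accuracy)  # 0.8 0.8 0.96
```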

&lt;h1&gt;
  
  
  Offer for Coursera and Qwiklab
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://inthecloud.withgoogle.com/training-discount/register.html?utm_source=google&amp;amp;utm_medium=website&amp;amp;utm_campaign=FY20-Q1-global-trainingandenablement-operational-other-training_discount&amp;amp;utm_content=cgc-training"&gt;Google Cloud Training&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Final Word
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Don't just memorize to get the correct answer; enjoy the process of research. Knowledge is power. If you have questions or are curious about something, dig deeper into the product and make it yours.&lt;/li&gt;
&lt;li&gt;Qwiklab rules!&lt;/li&gt;
&lt;li&gt;Good luck!&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>googlecloud</category>
      <category>certificate</category>
    </item>
    <item>
      <title>Gunicorn performance analysis on AWS EC2</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Mon, 27 Jul 2020 13:18:01 +0000</pubDate>
      <link>https://dev.to/kokospapa8/gunicorn-performance-analysis-on-aws-ec2-28jl</link>
      <guid>https://dev.to/kokospapa8/gunicorn-performance-analysis-on-aws-ec2-28jl</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;There are several production-level self-hosted options for running a django app server behind an nginx web server using the WSGI protocol, such as &lt;code&gt;uWSGI&lt;/code&gt;, &lt;code&gt;mod_wsgi&lt;/code&gt; and &lt;code&gt;gunicorn&lt;/code&gt;. I have been using gunicorn for many projects over many years. In all that time I never doubted its performance or configuration details, but recently I wanted to test gunicorn's actual performance with respect to worker count and find the optimal configuration for ECS Fargate. In contrast to on-prem servers, where I can grasp the actual number of physical cores, AWS only allows me to configure the number of logical cores (via vCPUs). I will illustrate how I tested the performance using &lt;code&gt;gunicorn&lt;/code&gt;, &lt;code&gt;django&lt;/code&gt; and &lt;code&gt;locust&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://flask.palletsprojects.com/en/master/deploying/wsgi-standalone/" rel="noopener noreferrer"&gt;Standalone WSGI Containers - Flask Documentation (1.1.x)&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Gunicorn
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Gunicorn is Python WSGI HTTP Server for UNIX.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;gunicorn django-sample.wsgi:application &lt;span class="nt"&gt;-w&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WORKER_COUNT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; 
&lt;span class="nt"&gt;--threads&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;THREAD_COUNT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-b&lt;/span&gt; 0.0.0.0:8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You generally use the command above to run the application. According to the &lt;a href="https://docs.gunicorn.org/en/stable/design.html#how-many-workers" rel="noopener noreferrer"&gt;design document&lt;/a&gt;, they recommend &lt;code&gt;(2 x $num_cores) + 1&lt;/code&gt; as the number of workers to start off with. The formula is based on the assumption that for a given core, one worker will be reading or writing from the socket while the other worker is processing a request.&lt;/p&gt;
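&lt;p&gt;The recommendation can be sketched in a few lines of Python; note that &lt;code&gt;os.cpu_count()&lt;/code&gt; reports logical cores (i.e. vCPUs on EC2), so treat this as a starting point rather than a rule:&lt;/p&gt;

```python
import os

def recommended_workers(cores=None):
    """Gunicorn docs' suggested starting point: (2 x cores) + 1."""
    if cores is None:
        # os.cpu_count() returns logical cores (vCPUs), or None if undetermined.
        cores = os.cpu_count() or 1
    return 2 * cores + 1

print(recommended_workers(2))  # a 2-vCPU instance -> 5 workers
```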

&lt;p&gt;&lt;a href="https://docs.gunicorn.org/en/stable/" rel="noopener noreferrer"&gt;Gunicorn - WSGI server - Gunicorn 20.0.4 documentation&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  EC2 vCPU
&lt;/h2&gt;

&lt;p&gt;I need to determine the number of CPU cores for the server. On a traditional Linux server, you can determine the number of cores using the following command.&lt;/p&gt;

&lt;p&gt;According to AWS, not all vCPUs are made the same. For T2 instances, 1 vCPU = 1 physical core. For all others, 1 vCPU = 1 logical core.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /proc/cpuinfo
-&amp;gt; cpu_cores
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html" rel="noopener noreferrer"&gt;Optimizing CPU options&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The number of vCPUs for the instance is the number of CPU cores multiplied by the threads per core. To specify a custom number of vCPUs, you must specify a valid number of CPU cores and threads per core for the instance type.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vCPU &lt;span class="o"&gt;=&lt;/span&gt; Physical core &lt;span class="k"&gt;*&lt;/span&gt; threads
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
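&lt;p&gt;As a concrete (illustrative) example, plugging in the figures for a 2-core, 2-threads-per-core instance such as a t3.xlarge; the core and thread counts below are assumed values, on a real box you would read them from lscpu:&lt;/p&gt;

```shell
# vCPU = physical cores x threads per core
cores=2      # "Core(s) per socket" as reported by lscpu (illustrative value)
threads=2    # "Thread(s) per core" as reported by lscpu (illustrative value)
vcpu=$((cores * threads))
echo "$vcpu"  # prints 4
```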


&lt;p&gt;According to this blog post, there was a &lt;strong&gt;34%&lt;/strong&gt; drop in multi-threaded performance on logical cores.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.credera.com/blog/technology-solutions/whats-in-a-vcpu-state-of-amazon-ec2-in-2018" rel="noopener noreferrer"&gt;What's in a vCPU: State of Amazon EC2 in 2018 - Credera&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can configure threads per core during instance launch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1wssfc7qlwyn47c0hcx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1wssfc7qlwyn47c0hcx8.png" alt="CPU option"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Methodology
&lt;/h1&gt;
&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/kokospapa8/gunicorn-perf-sample" rel="noopener noreferrer"&gt;Github Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0shwakg8i0pypriojeqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0shwakg8i0pypriojeqf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The test runs on a three-tier web architecture. The Django application runs behind nginx through gunicorn and queries a MySQL DB in a private subnet.&lt;/p&gt;
&lt;h3&gt;
  
  
  Instances used
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;t3.micro - 1 CPU

&lt;ul&gt;
&lt;li&gt;cpuinfo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;t3.micro - 2 vCPU (1 cpu * 2 threads)

&lt;ul&gt;
&lt;li&gt;cpuinfo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;t3.xlarge - 2 cpu

&lt;ul&gt;
&lt;li&gt;cpuinfo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;t3.xlarge - 4 vCPU (2 cpu * 2 threads)&lt;/li&gt;
&lt;li&gt;t3.2xlarge - 4 cpu

&lt;ul&gt;
&lt;li&gt;cpuinfo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Locust
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://locust.io/" rel="noopener noreferrer"&gt;Locust - A modern load testing framework&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will use locust to test and analyze the performance under load. The following locustfile spawns multiple users concurrently, each issuing a request backed by a simple SQL query.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;randint&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;locust&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TaskSet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;between&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;HttpUser&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ReadPosts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TaskSet&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nd"&gt;@task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;read_posts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api/v1/posts/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list posts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Response status code:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Response content:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;WebsiteUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HttpUser&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;ReadPosts&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;wait_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;between&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Additional setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Tested django app with &lt;code&gt;DEBUG = False&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Latency is not considered since the test was executed within the VPC&lt;/li&gt;
&lt;li&gt;Used &lt;code&gt;serverless&lt;/code&gt; Aurora RDS to avoid a bottleneck on DB I/O&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Test procedure
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Used &lt;code&gt;htop&lt;/code&gt; to record per-core CPU usage and memory usage&lt;/li&gt;
&lt;li&gt;Incremented the number of concurrent users by 100 to determine the load at which the two thresholds appear&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_asciinema"&gt;
  
&lt;/div&gt;



&lt;h1&gt;
  
  
  Result
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy2bqg5ojrbnu2y22mfcq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy2bqg5ojrbnu2y22mfcq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Full results:&lt;br&gt;
&lt;a href="https://docs.google.com/spreadsheets/d/1kdw5yOsglHc2r-h_7zNPHM-zAzMWKNZDdCzAx6l920c/edit?usp=drivesdk" rel="noopener noreferrer"&gt;Gunicorn performance test result&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Observations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A single worker cannot fully utilize multiple CPUs&lt;/li&gt;
&lt;li&gt;The recommended number of workers (2*cores + 1) generally performs well, as expected.&lt;/li&gt;
&lt;li&gt;There is not much performance difference between physical and logical cores.

&lt;ul&gt;
&lt;li&gt;A single thread seems to have slightly better performance&lt;/li&gt;
&lt;li&gt;Therefore, considering price, it is efficient to use multiple threads as long as memory usage is not an issue

&lt;ul&gt;
&lt;li&gt;t3.micro 1 core 2 threads - $0.0104 per Hour&lt;/li&gt;
&lt;li&gt;t3.large 2 cores 1 thread - $0.0832 per Hour&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Questions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;What would be an efficient CPU usage cap? 60%? 90%?&lt;/li&gt;
&lt;li&gt;Are there any other metrics I should have counted in the experiments?&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  ECS fargate performance
&lt;/h1&gt;

&lt;p&gt;Having gained a general idea of how gunicorn performs on EC2 instances, I took it further to test how gunicorn performs on an ECS Fargate setup with regard to the vCPUs allocated. Please refer to the github repo for the task definitions I used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kokospapa8/gunicorn-perf-sample/tree/master/app/config/ecs" rel="noopener noreferrer"&gt;ecs-task-definition&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since I cannot manage the instances in a Fargate setup, I needed to assign CPU units to each task to compare with EC2. It was impossible to run &lt;code&gt;htop&lt;/code&gt; on Fargate since I have no access to the instances. ECS provides &lt;code&gt;container insights&lt;/code&gt;, but they are not as accurate or real-time as running htop inside the instances. I only recorded the fail rate against concurrent user count.&lt;/p&gt;

&lt;p&gt;I didn't assign CPU units for each task, since the two tasks in the previous test shared CPU resources on a single machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions" rel="noopener noreferrer"&gt;Task definition parameters&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm9r6f5xcjlm4oj5pq4oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm9r6f5xcjlm4oj5pq4oc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;The results do not seem to be as consistent as on EC2 cores. It seems that higher worker counts yield proportionally better performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiac5ttdf8d9ssgd48jsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiac5ttdf8d9ssgd48jsq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  After thoughts
&lt;/h1&gt;

&lt;p&gt;I was not entirely satisfied with the result since I was not able to isolate all the variables.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance will probably depend on app behavior (mem&lt;/li&gt;
&lt;li&gt;A long-running I/O app will probably have different results&lt;/li&gt;
&lt;li&gt;The test results would have been more accurate with a CPU resource config in the docker-compose file (&lt;a href="https://docs.docker.com/compose/compose-file/#resources" rel="noopener noreferrer"&gt;https://docs.docker.com/compose/compose-file/#resources&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Next Todo
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;ASGI benchmark on different vCPU options&lt;/li&gt;
&lt;li&gt;Use async workers + the &lt;code&gt;threads&lt;/code&gt; option&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Reference
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.python.org/dev/peps/pep-3333/" rel="noopener noreferrer"&gt;PEP 3333 -- Python Web Server Gateway Interface v1.0.1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.kgriffs.com/2012/12/18/uwsgi-vs-gunicorn-vs-node-benchmarks.html" rel="noopener noreferrer"&gt;uWSGI vs. Gunicorn, or How to Make Python Go Faster than Node&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.appdynamics.com/blog/engineering/a-performance-analysis-of-python-wsgi-servers-part-2/" rel="noopener noreferrer"&gt;A Performance Analysis of Python WSGI Servers: Part 2 | Blog | AppDynamics&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/building-the-system/gunicorn-3-means-of-concurrency-efbb547674b7" rel="noopener noreferrer"&gt;Better performance by optimizing Gunicorn config&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arunrocks.com/a-guide-to-asgi-in-django-30-and-its-performance/" rel="noopener noreferrer"&gt;A Guide to ASGI in Django 3.0 and its Performance&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.locust.io/en/stable/quickstart.html" rel="noopener noreferrer"&gt;Quick start - Locust 1.1.1 documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arunrocks.com/a-guide-to-asgi-in-django-30-and-its-performance/" rel="noopener noreferrer"&gt;A Guide to ASGI in Django 3.0 and its Performance&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.credera.com/blog/technology-solutions/whats-in-a-vcpu-state-of-amazon-ec2-in-2018" rel="noopener noreferrer"&gt;What's in a vCPU: State of Amazon EC2 in 2018 - Credera&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/containers/how-amazon-ecs-manages-cpu-and-memory-resources/" rel="noopener noreferrer"&gt;How Amazon ECS manages CPU and memory resources | Amazon Web Services&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>locust</category>
      <category>python</category>
      <category>gunicorn</category>
    </item>
    <item>
      <title>How to deploy django app to ECS Fargate part3</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Fri, 03 Jul 2020 12:40:54 +0000</pubDate>
      <link>https://dev.to/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part3-3i7p</link>
      <guid>https://dev.to/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part3-3i7p</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post was originally published on notion. Click &lt;a href="https://www.notion.so/kokospapa/How-to-deploy-django-app-to-ECS-Fargate-part3-13875ffa0c7e4e1da3d1c46cad0bae94"&gt;here&lt;/a&gt; if you prefer to read in notion page which has better readability.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;This blog post illustrates the development cycle using a django app container. I assume readers are already somewhat familiar with docker and docker-compose. Although I used django for app development, the post is language-agnostic since it is about containerized application deployment.&lt;/p&gt;

&lt;p&gt;The walkthrough is divided into three parts, covering three different environments respectively. The first part describes the architecture of the app (api and async worker) and how it is deployed in the &lt;code&gt;local&lt;/code&gt; environment. The second part covers how to deploy the docker containers in the cloud on a single ec2 instance in the &lt;code&gt;staging&lt;/code&gt; environment. The third part illustrates how to convert the traditional ec2 deployment into ECS using Fargate with GitHub Actions in the &lt;code&gt;prod&lt;/code&gt; environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;local&lt;/code&gt; - run docker containers on a desktop/laptop with sqlite and a redis server using docker-compose&lt;/p&gt;

&lt;p&gt;&lt;code&gt;staging&lt;/code&gt; - run docker containers on a single ec2 instance with MySQL RDS and ElastiCache&lt;/p&gt;

&lt;p&gt;&lt;code&gt;prod&lt;/code&gt; - convert the staging setup to ECS Fargate&lt;/p&gt;

&lt;p&gt;You can check out previous part here.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/kokospapa8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VW75tZgW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--B1QWRgg7--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/367659/1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg" alt="kokospapa8 image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part1-2ga" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to deploy django app to ECS Fargate Part1&lt;/h2&gt;
      &lt;h3&gt;Jinwook Baek ・ Jul  3 ・ 4 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/kokospapa8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VW75tZgW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--B1QWRgg7--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/367659/1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg" alt="kokospapa8 image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part2-io1" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to deploy django app to ECS Fargate Part2&lt;/h2&gt;
      &lt;h3&gt;Jinwook Baek ・ Jul  3 ・ 10 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h1&gt;
  
  
  ECS fargate primer
&lt;/h1&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/eq4wL2MiNqo"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;ECS provides a container management service that makes it easy to run, stop, and manage Docker containers on a cluster. However, you still have to manage container instances with a container agent running on them. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. It's a good idea to have some understanding of the following components of ECS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4T4MARil--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jdcrsy0e3w30s64jxq28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4T4MARil--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jdcrsy0e3w30s64jxq28.png" alt="ecs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Components
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1skBtiLU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pjilmbjc4qe1jq8vti1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1skBtiLU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pjilmbjc4qe1jq8vti1d.png" alt="ecs components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster
&lt;/h3&gt;

&lt;p&gt;A logical grouping of the resources that your tasks and services run on. When you use the Fargate launch type for the tasks within your cluster, ECS manages the cluster resources for you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service
&lt;/h3&gt;

&lt;p&gt;A service enables you to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster. You also need a service to run tasks behind a load balancer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Task definition
&lt;/h3&gt;

&lt;p&gt;A JSON file that describes one or more containers. A task definition is not cluster dependent; you can think of it as functioning much like a docker-compose file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Task
&lt;/h3&gt;

&lt;p&gt;An instantiation of a task definition within a cluster. Each task that uses the Fargate launch type has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task Scheduler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;is responsible for placing tasks within your cluster. There are several scheduling options available, but Fargate currently supports only REPLICA, which places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html"&gt;Amazon ECS on AWS Fargate&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Fargate tasks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fargate task definitions require that you specify CPU and memory at the task level.&lt;/li&gt;
&lt;li&gt;Fargate task definitions only support the awslogs log driver for the log configuration. This configures your Fargate tasks to send log information to Amazon CloudWatch Logs.&lt;/li&gt;
&lt;li&gt;Put multiple containers in the same task definition if:

&lt;ul&gt;
&lt;li&gt;Containers share a common lifecycle.&lt;/li&gt;
&lt;li&gt;Containers are required to be run on the same underlying host.&lt;/li&gt;
&lt;li&gt;You want your containers to share resources.&lt;/li&gt;
&lt;li&gt;Your containers share data volumes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
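&lt;p&gt;As a rough sketch, the task-level portion of a Fargate task definition could look like the following. The family name, log group, region, and role ARN are placeholders, and the CPU/memory values are just one example; Fargate only accepts specific combinations, such as 256 CPU units with 512 MiB:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;{
    "family": "ecs-sample-api",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    "executionRoleArn": "arn:aws:iam::&amp;lt;aws_account_id&amp;gt;:role/&amp;lt;execution_role&amp;gt;",
    "containerDefinitions": [
        {
            "name": "nginx",
            "essential": true,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/ecs-sample",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "nginx"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;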

&lt;h1&gt;
  
  
  Walkthrough
&lt;/h1&gt;

&lt;p&gt;We will use the VPC, RDS database, and Redis cluster we created in part 2; refer to the previous post or the CloudFormation template for the setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container networking is different in Fargate
&lt;/h3&gt;

&lt;p&gt;Let's review how the containers were structured in part 2. The nginx container and the app container communicate on port 8000 over a bridge network, and the worker container runs as a separate container. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zKDAz3Sl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d09nendzkebbtnges0g9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zKDAz3Sl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d09nendzkebbtnges0g9.png" alt="before"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will configure our task definition like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3Zq8hv1g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1nqu46rj14hbkexajx4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3Zq8hv1g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1nqu46rj14hbkexajx4m.png" alt="afger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are running these two components as two processes on a single EC2 instance, the web tier application process could communicate with the API process on the same machine by using the local loopback interface. The local loopback interface has a special IP address of 127.0.0.1 and hostname of localhost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--069MLVvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z09ct6mpcfukjisqeqek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--069MLVvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z09ct6mpcfukjisqeqek.png" alt="awsvpn"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By making a networking request to this local interface, it bypasses the network interface hardware and instead the operating system just routes network calls from one process to the other directly. This gives the web tier a fast and efficient way to fetch information from the API tier with almost no networking latency.&lt;/p&gt;

&lt;p&gt;We will update the nginx image with a new conf that follows this networking model.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;new nginx conf
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# portal
&lt;/span&gt;&lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;# all requests proxies to app
&lt;/span&gt;  &lt;span class="n"&gt;location&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;proxy_pass&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="mf"&gt;127.0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;proxy_set_header&lt;/span&gt;  &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Real&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;IP&lt;/span&gt;  &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="n"&gt;remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;proxy_set_header&lt;/span&gt;  &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Forwarded&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;For&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="n"&gt;proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;proxy_set_header&lt;/span&gt; &lt;span class="n"&gt;Host&lt;/span&gt; &lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;proxy_redirect&lt;/span&gt; &lt;span class="n"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# domain localhost
&lt;/span&gt;  &lt;span class="n"&gt;server_name&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This sends a local network request, which goes directly from one container to the other over the local loopback interface without traversing the network. This deployment strategy allows for fast and efficient communication between two tightly coupled containers. &lt;/p&gt;

&lt;p&gt;We will build a new nginx image with this conf and push it to the ECR repository so it can be referenced in the task definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# make sure you have &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/nginx ECR created&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;docker build &lt;span class="nt"&gt;-f&lt;/span&gt; config/app/Docker_nginx &lt;span class="nt"&gt;-t&lt;/span&gt;  &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/nginx:latest &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;docker push &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/nginx:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
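&lt;p&gt;Note that docker needs to be authenticated against ECR before the push; with AWS CLI v2 this is typically done as follows (the region and account id are placeholders):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# obtain a temporary password and pipe it into docker login
$ aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;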



&lt;h3&gt;
  
  
  Create ECS execution role
&lt;/h3&gt;

&lt;p&gt;Create &lt;code&gt;sampleECSTaskExecutionRole&lt;/code&gt; with the following policies and trust relationship.&lt;/p&gt;

&lt;p&gt;Policies&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AmazonECSTaskExecutionRolePolicy&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You need to attach the following policies for the environment variable setup on the task (explained in a later step):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Managed Policy AmazonSSMReadOnlyAccess

&lt;ul&gt;
&lt;li&gt;you might want a tighter restriction on resources for actual development&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Inline policy for decryption&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Version"&lt;/span&gt;: &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;,
    &lt;span class="s2"&gt;"Statement"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"Sid"&lt;/span&gt;: &lt;span class="s2"&gt;"VisualEditor0"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Effect"&lt;/span&gt;: &lt;span class="s2"&gt;"Allow"&lt;/span&gt;,
            &lt;span class="s2"&gt;"Action"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"kms:Decrypt"&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"Resource"&lt;/span&gt;: &lt;span class="s2"&gt;"*"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure to add a trust relationship for &lt;code&gt;ecs-tasks.amazonaws.com&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
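&lt;p&gt;For reference, a minimal trust policy that lets ECS tasks assume this role looks like this:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;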

&lt;h3&gt;
  
  
  Create a security group for the ECS cluster
&lt;/h3&gt;

&lt;p&gt;Let's create a security group for the ECS cluster named &lt;code&gt;ecs-sg&lt;/code&gt;. I recommend copying &lt;code&gt;ecs-sample-ec2&lt;/code&gt; from the previous post, which consists of the following inbound rules: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;allow port 80 from the public&lt;/li&gt;
&lt;li&gt;allow port 80 from the &lt;code&gt;ecs-sample-lb&lt;/code&gt; security group&lt;/li&gt;
&lt;li&gt;allow port 6379 to the &lt;code&gt;ecs-sample-redis&lt;/code&gt; security group&lt;/li&gt;
&lt;li&gt;allow port 3306 to the &lt;code&gt;ecs-sample-mysql&lt;/code&gt; security group&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ql2TIASe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nqqyhhks7et8asq1ials.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ql2TIASe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nqqyhhks7et8asq1ials.png" alt="ecs-sg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You also need to update &lt;code&gt;ecs-sample-redis&lt;/code&gt; and &lt;code&gt;ecs-sample-mysql&lt;/code&gt; to allow traffic on their respective ports from &lt;code&gt;ecs-sg&lt;/code&gt;.&lt;/p&gt;
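&lt;p&gt;As a sketch, the same rules can be added with the CLI (the security group ids are placeholders):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# allow ecs-sg to reach redis on 6379 and mysql on 3306
$ aws ec2 authorize-security-group-ingress --group-id &amp;lt;ecs-sample-redis-sg-id&amp;gt; \
    --protocol tcp --port 6379 --source-group &amp;lt;ecs-sg-id&amp;gt;
$ aws ec2 authorize-security-group-ingress --group-id &amp;lt;ecs-sample-mysql-sg-id&amp;gt; \
    --protocol tcp --port 3306 --source-group &amp;lt;ecs-sg-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;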

&lt;h2&gt;
  
  
  Load balancer
&lt;/h2&gt;

&lt;p&gt;Let's create a new load balancer, which will be used by the service that manages the connection between the load balancer and the tasks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select ALB&lt;/li&gt;
&lt;li&gt;name - &lt;code&gt;ecs-fargate&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;select the VPC created in the previous post&lt;/li&gt;
&lt;li&gt;the listener and target group will be managed by the service, so you can create an empty listener and target group; we will delete them in a following step.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eWhnQ_yv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/00p16t86znum60ihp7zg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eWhnQ_yv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/00p16t86znum60ihp7zg.png" alt="alb conf1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JZ_2g2cb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/goq6n5v9h45gsrseelrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JZ_2g2cb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/goq6n5v9h45gsrseelrf.png" alt="alb conf2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;delete the listener and target group you created during the ALB creation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vekjDNIn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2nk3avedkrhykl0vflce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vekjDNIn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2nk3avedkrhykl0vflce.png" alt="target group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lKumCIEU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nhg8fwrom3d8wwaxgxj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lKumCIEU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nhg8fwrom3d8wwaxgxj4.png" alt="target group delete"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also use the CLI to create the ALB instead of using the console. This way, you don't need to create an empty listener and target group which will be deleted later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aws elbv2 create-load-balancer &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;ecs-fargte&amp;gt;  &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--subnets&lt;/span&gt; &amp;lt;subnet-12345678&amp;gt; &amp;lt;subnet-23456789&amp;gt; &lt;span class="nt"&gt;--security-groups&lt;/span&gt; &amp;lt;ecs-sg&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Take note of the DNS name of this load balancer for later use. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Kip03pAw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0yoopl2z4pi1e7whr2ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Kip03pAw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0yoopl2z4pi1e7whr2ty.png" alt="Alt dns"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Create Cluster
&lt;/h2&gt;

&lt;p&gt;We will create an empty cluster first. Go to the &lt;a href="https://us-east-2.console.aws.amazon.com/ecs/home?region=us-east-2#/clusters"&gt;ECS menu&lt;/a&gt; in the console (make sure you are in the correct region) and choose a cluster template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w5Zw1qQn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v55omyi0ma15gx0pvrem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w5Zw1qQn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v55omyi0ma15gx0pvrem.png" alt="cluster create1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--frxEKxVU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/egd8s0g5v1dtkgmqj4bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--frxEKxVU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/egd8s0g5v1dtkgmqj4bc.png" alt="cluster create2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The cluster is created. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iGle8E5K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r4mjpocip0vommu7j04g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iGle8E5K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r4mjpocip0vommu7j04g.png" alt="cluster creation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, you can create the cluster with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecs create-cluster &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; ecs-sample
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Create task definition
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Parameter store
&lt;/h3&gt;

&lt;p&gt;Part of the problem with Fargate is that you cannot pass environment variables the way we used to, because we do not have EC2 instance access. Therefore we will use Parameter Store to inject environment variables into tasks when they start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YigUPMaS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/borlrnuinb00tirahly9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YigUPMaS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/borlrnuinb00tirahly9.png" alt="parameter store"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click AWS Systems Manager in the console menu.&lt;/p&gt;

&lt;p&gt;Click Parameter Store in the left menu &lt;a href="https://us-east-2.console.aws.amazon.com/systems-manager/parameters?region=us-east-2"&gt;link&lt;/a&gt; (make sure you are in the correct region).&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;Create parameter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Fill in the input as in the following screenshots.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cncmKIcS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/48yksuao8vb8rvc6jrnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cncmKIcS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/48yksuao8vb8rvc6jrnu.png" alt="paramstore1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mBf6okBx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ercgql277x1sk7gcyyvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mBf6okBx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ercgql277x1sk7gcyyvv.png" alt="pramstore2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will create values for the following environment variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SECRET_KEY&lt;/li&gt;
&lt;li&gt;REDIS_HOST&lt;/li&gt;
&lt;li&gt;DB_PASSWORD&lt;/li&gt;
&lt;li&gt;DB_HOST&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Alternatively, you can use the CLI to create secure strings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aws ssm put-parameter &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"/ecs-sample/prod/&amp;lt;ENV_NAME&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--type&lt;/span&gt; &lt;span class="s2"&gt;"SecureString"&lt;/span&gt; &lt;span class="nt"&gt;--value&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;value&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;ENV_NAME&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;region&amp;gt;"&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Version"&lt;/span&gt;: 1,
    &lt;span class="s2"&gt;"Tier"&lt;/span&gt;: &lt;span class="s2"&gt;"Standard"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The result: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kSgt3BZx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8y74ku3prpvwrccpimvs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kSgt3BZx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8y74ku3prpvwrccpimvs.png" alt="paramstore result"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Task definition
&lt;/h3&gt;

&lt;p&gt;As I mentioned earlier, a task definition works similarly to docker-compose. Just like the staging environment, we will have two task definitions: one with the nginx container plus the api container, and one with the worker container.&lt;/p&gt;

&lt;p&gt;Using the generated task definition skeleton, you can fill in the following configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aws ecs register-task-definition &lt;span class="nt"&gt;--generate-cli-skeleton&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;sample-task-definition&lt;/p&gt;

&lt;p&gt;Parameters&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Task family – the name of the task definition; each family can have multiple revisions.&lt;/li&gt;
&lt;li&gt;IAM task role – specifies the permissions that containers in the task should have.&lt;/li&gt;
&lt;li&gt;Network mode – determines how networking is configured for your containers (Fargate currently only supports &lt;code&gt;awsvpc&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Container definitions – specify which image to use, how much CPU and memory the container is allocated, and many more options.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Refer to the following link for details on each parameter:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html"&gt;Task definition parameters&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"family"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
    &lt;span class="s2"&gt;"taskRoleArn"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
    &lt;span class="s2"&gt;"executionRoleArn"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
    &lt;span class="s2"&gt;"networkMode"&lt;/span&gt;: &lt;span class="s2"&gt;"awsvpc"&lt;/span&gt;,
    &lt;span class="s2"&gt;"containerDefinitions"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
            &lt;span class="s2"&gt;"image"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
            &lt;span class="s2"&gt;"repositoryCredentials"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"credentialsParameter"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"cpu"&lt;/span&gt;: 0,
            &lt;span class="s2"&gt;"memory"&lt;/span&gt;: 0,
            &lt;span class="s2"&gt;"memoryReservation"&lt;/span&gt;: 0,
            &lt;span class="s2"&gt;"links"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"portMappings"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"containerPort"&lt;/span&gt;: 0,
                    &lt;span class="s2"&gt;"hostPort"&lt;/span&gt;: 0,
                    &lt;span class="s2"&gt;"protocol"&lt;/span&gt;: &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"essential"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
            &lt;span class="s2"&gt;"entryPoint"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"command"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"environment"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"value"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"environmentFiles"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"value"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"s3"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"mountPoints"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"sourceVolume"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"containerPath"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"readOnly"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"volumesFrom"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"sourceContainer"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"readOnly"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"linuxParameters"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"capabilities"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"add"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                        &lt;span class="s2"&gt;""&lt;/span&gt;
                    &lt;span class="o"&gt;]&lt;/span&gt;,
                    &lt;span class="s2"&gt;"drop"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                        &lt;span class="s2"&gt;""&lt;/span&gt;
                    &lt;span class="o"&gt;]&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="s2"&gt;"devices"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                    &lt;span class="o"&gt;{&lt;/span&gt;
                        &lt;span class="s2"&gt;"hostPath"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                        &lt;span class="s2"&gt;"containerPath"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                        &lt;span class="s2"&gt;"permissions"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                            &lt;span class="s2"&gt;"read"&lt;/span&gt;
                        &lt;span class="o"&gt;]&lt;/span&gt;
                    &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="o"&gt;]&lt;/span&gt;,
                &lt;span class="s2"&gt;"initProcessEnabled"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
                &lt;span class="s2"&gt;"sharedMemorySize"&lt;/span&gt;: 0,
                &lt;span class="s2"&gt;"tmpfs"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                    &lt;span class="o"&gt;{&lt;/span&gt;
                        &lt;span class="s2"&gt;"containerPath"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                        &lt;span class="s2"&gt;"size"&lt;/span&gt;: 0,
                        &lt;span class="s2"&gt;"mountOptions"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                            &lt;span class="s2"&gt;""&lt;/span&gt;
                        &lt;span class="o"&gt;]&lt;/span&gt;
                    &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="o"&gt;]&lt;/span&gt;,
                &lt;span class="s2"&gt;"maxSwap"&lt;/span&gt;: 0,
                &lt;span class="s2"&gt;"swappiness"&lt;/span&gt;: 0
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"secrets"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"dependsOn"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"containerName"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"condition"&lt;/span&gt;: &lt;span class="s2"&gt;"HEALTHY"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"startTimeout"&lt;/span&gt;: 0,
            &lt;span class="s2"&gt;"stopTimeout"&lt;/span&gt;: 0,
            &lt;span class="s2"&gt;"hostname"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
            &lt;span class="s2"&gt;"user"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
            &lt;span class="s2"&gt;"workingDirectory"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
            &lt;span class="s2"&gt;"disableNetworking"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
            &lt;span class="s2"&gt;"privileged"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
            &lt;span class="s2"&gt;"readonlyRootFilesystem"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
            &lt;span class="s2"&gt;"dnsServers"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"dnsSearchDomains"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"extraHosts"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"hostname"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"ipAddress"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"dockerSecurityOptions"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"interactive"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
            &lt;span class="s2"&gt;"pseudoTerminal"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
            &lt;span class="s2"&gt;"dockerLabels"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"KeyName"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"ulimits"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;"msgqueue"&lt;/span&gt;,
                    &lt;span class="s2"&gt;"softLimit"&lt;/span&gt;: 0,
                    &lt;span class="s2"&gt;"hardLimit"&lt;/span&gt;: 0
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"logConfiguration"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"logDriver"&lt;/span&gt;: &lt;span class="s2"&gt;"awslogs"&lt;/span&gt;,
                &lt;span class="s2"&gt;"options"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"KeyName"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="s2"&gt;"secretOptions"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                    &lt;span class="o"&gt;{&lt;/span&gt;
                        &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                        &lt;span class="s2"&gt;"valueFrom"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                    &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="o"&gt;]&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"healthCheck"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"command"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                    &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;]&lt;/span&gt;,
                &lt;span class="s2"&gt;"interval"&lt;/span&gt;: 0,
                &lt;span class="s2"&gt;"timeout"&lt;/span&gt;: 0,
                &lt;span class="s2"&gt;"retries"&lt;/span&gt;: 0,
                &lt;span class="s2"&gt;"startPeriod"&lt;/span&gt;: 0
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"systemControls"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"namespace"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"value"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"resourceRequirements"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"value"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"GPU"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;]&lt;/span&gt;,
            &lt;span class="s2"&gt;"firelensConfiguration"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"fluentd"&lt;/span&gt;,
                &lt;span class="s2"&gt;"options"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"KeyName"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"volumes"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
            &lt;span class="s2"&gt;"host"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"sourcePath"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"dockerVolumeConfiguration"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"scope"&lt;/span&gt;: &lt;span class="s2"&gt;"task"&lt;/span&gt;,
                &lt;span class="s2"&gt;"autoprovision"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;,
                &lt;span class="s2"&gt;"driver"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                &lt;span class="s2"&gt;"driverOpts"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"KeyName"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="s2"&gt;"labels"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"KeyName"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;,
            &lt;span class="s2"&gt;"efsVolumeConfiguration"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"fileSystemId"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                &lt;span class="s2"&gt;"rootDirectory"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                &lt;span class="s2"&gt;"transitEncryption"&lt;/span&gt;: &lt;span class="s2"&gt;"ENABLED"&lt;/span&gt;,
                &lt;span class="s2"&gt;"transitEncryptionPort"&lt;/span&gt;: 0,
                &lt;span class="s2"&gt;"authorizationConfig"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="s2"&gt;"accessPointId"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                    &lt;span class="s2"&gt;"iam"&lt;/span&gt;: &lt;span class="s2"&gt;"ENABLED"&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"placementConstraints"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"memberOf"&lt;/span&gt;,
            &lt;span class="s2"&gt;"expression"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"requiresCompatibilities"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"EC2"&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"cpu"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
    &lt;span class="s2"&gt;"memory"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
    &lt;span class="s2"&gt;"tags"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"key"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
            &lt;span class="s2"&gt;"value"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;,
    &lt;span class="s2"&gt;"pidMode"&lt;/span&gt;: &lt;span class="s2"&gt;"task"&lt;/span&gt;,
    &lt;span class="s2"&gt;"ipcMode"&lt;/span&gt;: &lt;span class="s2"&gt;"none"&lt;/span&gt;,
    &lt;span class="s2"&gt;"proxyConfiguration"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"type"&lt;/span&gt;: &lt;span class="s2"&gt;"APPMESH"&lt;/span&gt;,
        &lt;span class="s2"&gt;"containerName"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
        &lt;span class="s2"&gt;"properties"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
            &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"name"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
                &lt;span class="s2"&gt;"value"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;]&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="s2"&gt;"inferenceAccelerators"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"deviceName"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;,
            &lt;span class="s2"&gt;"deviceType"&lt;/span&gt;: &lt;span class="s2"&gt;""&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create the task definitions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aws ecs register-task-definition &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cli-input-json&lt;/span&gt; file://&amp;lt;path_to_json_file&amp;gt;/task-definition-api.json
&lt;span class="nv"&gt;$ &lt;/span&gt;aws ecs register-task-definition &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cli-input-json&lt;/span&gt; file://&amp;lt;path_to_json_file&amp;gt;/task-definition-worker.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
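Before registering, it is worth validating the JSON locally, since a malformed file makes `register-task-definition` fail with a hard-to-read parse error. A minimal sketch (the stub file written here is illustrative; in practice point the check at your real `task-definition-api.json`):

```shell
# Write an illustrative stub so this snippet is self-contained;
# validate your real task-definition-api.json instead.
cat > /tmp/task-definition-api.json <<'EOF'
{
  "family": "ecs-sample-api",
  "requiresCompatibilities": ["FARGATE"]
}
EOF

# json.load raises on invalid JSON; printing the family name confirms
# you are about to register the definition you think you are.
python3 -c '
import json
with open("/tmp/task-definition-api.json") as f:
    td = json.load(f)
print(td["family"])
'
```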



&lt;p&gt;You will now see two task definitions registered in your ECS console. Inspect the configuration of each task definition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4ZVfztHK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5t6jcvth8cfo53za57xv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4ZVfztHK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5t6jcvth8cfo53za57xv.png" alt="task-def results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create services
&lt;/h2&gt;

&lt;p&gt;Now you are ready to create the ECS services that will fire up the tasks. Go to the cluster and press the create button on the services tab.&lt;/p&gt;

&lt;h3&gt;
  
  
  API task
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;1) configure service&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gmB1VHPi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lf2oy4x5al2p23emxrlt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gmB1VHPi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lf2oy4x5al2p23emxrlt.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;2) configure network&lt;/p&gt;

&lt;p&gt;Network&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the VPC from part 2 and the security group we created previously, &lt;code&gt;ecs-sg&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Choose a public subnet for the &lt;code&gt;ecs-sample-api&lt;/code&gt; task&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ozEp2sjw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x14wp1tvex043kwj5z6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ozEp2sjw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x14wp1tvex043kwj5z6d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Load Balancer

- select `Application Load Balancer` for Load blancer type*
    - Production listener port : 80
    - select `ecs-fargate` load balancer we have created in this post
    - once you select the load balance you will be able to choose `container to load blaance`. Service automatically detect ports opened. Choose `nginx:80:80`. Click add to load balancer
    - You will be able to configure the health check path. Update to `/api/healthcheck`/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
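In `create-service` terms, this console step corresponds to a `loadBalancers` entry like the following sketch (the target group ARN is a placeholder):

```json
"loadBalancers": [
    {
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ecs-fargate/0123456789abcdef",
        "containerName": "nginx",
        "containerPort": 80
    }
]
```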

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Vu1QQbC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nrzg26pr654rmxppjqty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Vu1QQbC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nrzg26pr654rmxppjqty.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eaYwnlUf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gxyq60godok639phfxq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eaYwnlUf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gxyq60godok639phfxq8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Service discovery

- click service discovery
- update namespace as you like, I used `api`
- update serive discovery name `ecs-sample-api`
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I-DmgFDF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k58djbvuhomxvt2ut5a6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I-DmgFDF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k58djbvuhomxvt2ut5a6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3) set auto scaling (optional)

&lt;ul&gt;
&lt;li&gt;we will skip this part&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;4) Review&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---2j47UJv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nk0812fabmnernxqeiu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---2j47UJv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nk0812fabmnernxqeiu2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8d4yIxuJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/p1bnugh3w17xmgj3o98h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8d4yIxuJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/p1bnugh3w17xmgj3o98h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XCtdjHvK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/635ovr6bkwny1h39l0l0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XCtdjHvK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/635ovr6bkwny1h39l0l0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Worker task
&lt;/h3&gt;

&lt;p&gt;For the &lt;code&gt;worker&lt;/code&gt; task, repeat the steps above, except:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;choose a private subnet&lt;/li&gt;
&lt;li&gt;no load balancer&lt;/li&gt;
&lt;li&gt;no service discovery&lt;/li&gt;
&lt;/ul&gt;
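If you prefer the CLI over the console, the worker service boils down to an `aws ecs create-service --cli-input-json` input like this sketch. Note the private subnets, `assignPublicIp: DISABLED`, and the absence of `loadBalancers` and `serviceRegistries`; the cluster name, subnet IDs, and security group ID are placeholders for your own values:

```json
{
    "cluster": "ecs-sample",
    "serviceName": "ecs-sample-worker",
    "taskDefinition": "ecs-sample-worker",
    "desiredCount": 1,
    "launchType": "FARGATE",
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0private1", "subnet-0private2"],
            "securityGroups": ["sg-0ecssg"],
            "assignPublicIp": "DISABLED"
        }
    }
}
```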
&lt;h3&gt;
  
  
  Service
&lt;/h3&gt;

&lt;p&gt;Once everything is set, deploy the new services and wait for their status to turn green.&lt;/p&gt;

&lt;p&gt;Go back to the service you created and open the tasks tab. If everything is running, you will see the following screenshot. If no tasks are running, search for tasks with stopped status and inspect their logs (you might need to update the task definition to configure logging) for any misconfiguration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--khY9l1ZG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/02n8odgnsen3hxul0jhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--khY9l1ZG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/02n8odgnsen3hxul0jhm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--umGdowBp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4qgjgxrqh22llauj7eyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--umGdowBp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4qgjgxrqh22llauj7eyl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's check the ELB and Route 53 to make sure everything works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VKTdYHK---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jrvfwa02jogpsxwpr2nr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VKTdYHK---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jrvfwa02jogpsxwpr2nr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Service discovery and the health check seem to work fine.&lt;/p&gt;

&lt;p&gt;To access nginx from a browser, type in the DNS name of the load balancer and check that everything works as expected.&lt;/p&gt;
&lt;h1&gt;
  
  
  Continuous Deployment
&lt;/h1&gt;

&lt;p&gt;Once everything is deployed, there is still a problem: whenever I need to update the source image, I have to repeat all of the steps above, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build new docker image&lt;/li&gt;
&lt;li&gt;Push docker image to ECR&lt;/li&gt;
&lt;li&gt;Create new task definition with new docker image&lt;/li&gt;
&lt;li&gt;Update service to run new task definition revision&lt;/li&gt;
&lt;/ul&gt;
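Each of these manual steps revolves around one value: the fully qualified image URI, assembled from the registry, repository, and commit SHA. As a sketch (the tag here is a stand-in for the commit SHA that CI provides):

```shell
# Registry and repository values match the workflow used in this post;
# the tag is a stand-in for ${{ github.sha }} in CI.
ECR_REGISTRY="982947632035.dkr.ecr.us-east-1.amazonaws.com"
ECR_REPOSITORY="hangfive"
IMAGE_TAG="0a1b2c3d"

# This single URI is what gets built, pushed, and written into the
# new task definition revision.
IMAGE="${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}"
echo "${IMAGE}"
```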
&lt;h2&gt;
  
  
  GitHub Actions
&lt;/h2&gt;

&lt;p&gt;We can automate these steps with GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/features/actions"&gt;Features * GitHub Actions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take a look at the following file in the repo: &lt;code&gt;.github/workflows/ecs_api.yml&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;workflow file&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;on:&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;push:&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;branches:&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;master&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Deploy&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Hangfive&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Amazon&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ECS&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;jobs:&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;deploy:&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Deploy&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;runs-on:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ubuntu-latest&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;steps:&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Checkout&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;uses:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;actions/checkout@v&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Configure&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;AWS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;credentials&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;uses:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;aws-actions/configure-aws-credentials@v&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;with:&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;aws-access-key-id:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;secrets.AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;aws-secret-access-key:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;secrets.AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;aws-region:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;us-east&lt;/span&gt;&lt;span class="mi"&gt;-1&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Login&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Amazon&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ECR&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;id:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;login-ecr&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;uses:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;aws-actions/amazon-ecr-login@v&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Build,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;tag,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;push&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;image&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Amazon&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ECR&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;id:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;build-image&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;env:&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;ECR_REGISTRY:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;982947632035&lt;/span&gt;&lt;span class="err"&gt;.dkr.ecr.us-east&lt;/span&gt;&lt;span class="mi"&gt;-1&lt;/span&gt;&lt;span class="err"&gt;.amazonaws.com&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;ECR_REPOSITORY:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;hangfive&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;IMAGE_TAG:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;github.sha&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;run:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Build&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;container&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;push&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ECR&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;so&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;can&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;be&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;deployed&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ECS.&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;build&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-f&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;config/app/Dockerfile_app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;push&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Fill&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;new&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;image&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ID&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Amazon&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ECS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;task&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;definition&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;id:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;task-def&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;uses:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;aws-actions/amazon-ecs-render-task-definition@v&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;with:&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;task-definition:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;config/ecs/task-definition.json&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;container-name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;app&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;image:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;steps.build-image.outputs.image&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Deploy&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Amazon&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ECS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;task&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;definition&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;uses:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;aws-actions/amazon-ecs-deploy-task-definition@v&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;with:&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;task-definition:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;steps.task-def.outputs.task-definition&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;service:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;hangfive-prod-ecs-fargate&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;cluster:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;hangfive-prod&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;wait-for-service-stability:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;This YAML file describes the following actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Triggers on every push to the master branch&lt;/p&gt;

&lt;p&gt;&lt;a href="https://help.github.com/en/actions/reference/events-that-trigger-workflows"&gt;Events that trigger workflows&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Checks out the source&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configures AWS credentials (secrets stored in GitHub)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of a user with the appropriate permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets"&gt;Creating and storing encrypted secrets&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logs into ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Builds the Docker image and tags it with a unique hash each time (for immutability)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Renders the task definition with the new image, then deploys it&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click the &lt;code&gt;Actions&lt;/code&gt; tab on your repository to follow the progress.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l2ZV0uPI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vr4tta8v09hbjf51i0gh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l2ZV0uPI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vr4tta8v09hbjf51i0gh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;
&lt;/ul&gt;
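
&lt;p&gt;The render step in the workflow does essentially one thing: swap the &lt;code&gt;image&lt;/code&gt; field of one container inside the task definition JSON. A minimal Python sketch of that logic (the function name and structure are illustrative, not the action's actual source):&lt;/p&gt;

```python
import copy

def render_task_definition(task_def, container_name, image):
    """Return a copy of the task definition with the named container's
    image replaced -- the essence of the render step in the workflow."""
    rendered = copy.deepcopy(task_def)
    for container in rendered["containerDefinitions"]:
        if container["name"] == container_name:
            container["image"] = image
            return rendered
    raise ValueError(f"container {container_name!r} not found in task definition")
```

&lt;p&gt;Because every push produces a new image tag, each run registers a fresh task definition revision, and the deploy step points the ECS service at that revision.&lt;/p&gt;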

&lt;h2&gt;
  
  
  Teardown
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS resources you need to delete:

&lt;ul&gt;
&lt;li&gt;EC2 instances, ALB, target groups, Auto Scaling groups&lt;/li&gt;
&lt;li&gt;ECS Services, cluster, task definitions&lt;/li&gt;
&lt;li&gt;ECR&lt;/li&gt;
&lt;li&gt;SSM parameter store&lt;/li&gt;
&lt;li&gt;Route 53 hosted zone&lt;/li&gt;
&lt;li&gt;RDS

&lt;ul&gt;
&lt;li&gt;subnet group, cluster parameter group&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;ElastiCache

&lt;ul&gt;
&lt;li&gt;subnet group, parameter group&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;VPC

&lt;ul&gt;
&lt;li&gt;security groups, NACLs, subnets, IGW, NAT gateway, EIP&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
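
&lt;p&gt;Order matters during teardown: compute and data services must be deleted before the network they live in, or AWS rejects the deletion with dependency errors. A sketch of a dependency-respecting order (the grouping mirrors the list above; it is a checklist, not an automated script):&lt;/p&gt;

```python
# Delete consumers first, the VPC and its networking last; otherwise
# deletions fail because resources still depend on the network.
TEARDOWN_ORDER = [
    ("EC2", ["instances", "ALB", "target groups", "Auto Scaling groups"]),
    ("ECS", ["services", "cluster", "task definitions"]),
    ("ECR", ["repositories"]),
    ("SSM", ["parameters"]),
    ("Route 53", ["hosted zone"]),
    ("RDS", ["cluster", "subnet group", "cluster parameter group"]),
    ("ElastiCache", ["cluster", "subnet group", "parameter group"]),
    ("VPC", ["NAT gateway", "EIP", "IGW", "subnets", "security groups", "NACLs"]),
]

def teardown_phases():
    """Names of the services in the order their resources should be deleted."""
    return [service for service, _ in TEARDOWN_ORDER]
```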

&lt;h2&gt;
  
  
  Future todos
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Logging/monitoring setup with CloudWatch&lt;/li&gt;
&lt;li&gt;CI: run pytest via GitHub Actions before merge&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Reference
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ko/blogs/compute/task-networking-in-aws-fargate/"&gt;Task Networking in AWS Fargate | Amazon Web Services&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html"&gt;Task Networking with the awsvpc Network Mode&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/network/bridge/"&gt;Use bridge networks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.44bits.io/ko/post/getting-started-with-ecs-fargate"&gt;AWS ECS의 매니지드 컨테이너 AWS 파게이트(AWS Fargate) 시작하기&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_AWSCLI_Fargate.html"&gt;Tutorial: Creating a Cluster with a Fargate Task Using the AWS CLI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/ecs-data-security-container-task/?nc1=h_ls"&gt;Pass Secrets or Sensitive Data Securely to Containers in Amazon ECS Tasks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ssm/put-parameter.html"&gt;put-parameter - AWS CLI 1.18.89 Command Reference&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/marketplace/actions/amazon-ecs-deploy-task-definition-action-for-github-actions"&gt;Amazon ECS "Deploy Task Definition" Action for GitHub Actions - GitHub Marketplace&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/ecs-data-security-container-task/?nc1=h_ls"&gt;Pass Secrets or Sensitive Data Securely to Containers in Amazon ECS Tasks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.community/t/trigger-workflow-only-on-pull-request-merge/17359"&gt;Trigger workflow only on pull request MERGE&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to deploy django app to ECS Fargate Part2</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Fri, 03 Jul 2020 12:00:25 +0000</pubDate>
      <link>https://dev.to/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part2-io1</link>
      <guid>https://dev.to/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part2-io1</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post was originally published on Notion. Click &lt;a href="https://www.notion.so/kokospapa/How-to-deploy-django-app-to-ECS-Fargate-part2-93a372ebd8b941cea99d03de8ca98863"&gt;here&lt;/a&gt; if you prefer to read the Notion page, which is more readable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;This blog post illustrates the development cycle of a containerized Django app. I assume readers are already somewhat familiar with Docker and docker-compose. Although I used Django for the app, the walkthrough is language-agnostic since it is about deploying containerized applications.&lt;/p&gt;

&lt;p&gt;The walkthrough is divided into three parts, one for each environment. The first part describes the architecture of the app (API and async worker) and how it is deployed in the &lt;code&gt;local&lt;/code&gt; environment. The second part shows how to deploy the Docker containers in the cloud on a single EC2 instance in the &lt;code&gt;staging&lt;/code&gt; environment. The third part illustrates how to convert the traditional EC2 deployment into ECS on Fargate with GitHub Actions in the &lt;code&gt;prod&lt;/code&gt; environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;local&lt;/code&gt; - run Docker containers on a desktop/laptop with SQLite and a Redis server, using docker-compose&lt;/p&gt;

&lt;p&gt;&lt;code&gt;staging&lt;/code&gt; - run Docker containers on a single EC2 instance with MySQL on RDS and ElastiCache&lt;/p&gt;

&lt;p&gt;&lt;code&gt;prod&lt;/code&gt; - convert the staging setup to ECS Fargate&lt;/p&gt;

&lt;p&gt;For part 1, click here.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/kokospapa8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VW75tZgW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--B1QWRgg7--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/367659/1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg" alt="kokospapa8 image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part1-2ga" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to deploy django app to ECS Fargate Part1&lt;/h2&gt;
      &lt;h3&gt;Jinwook Baek ・ Jul  3 ・ 4 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;You can skip to part 3 if you are already familiar with AWS architectures.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/kokospapa8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VW75tZgW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--B1QWRgg7--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/367659/1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg" alt="kokospapa8 image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part3-3i7p" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to deploy django app to ECS Fargate part3&lt;/h2&gt;
      &lt;h3&gt;Jinwook Baek ・ Jul  3 ・ 12 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h1&gt;
  
  
  Staging Infra Setup
&lt;/h1&gt;

&lt;p&gt;Before going straight to the ECS deployment, we will set up a &lt;code&gt;staging&lt;/code&gt; environment to test the application on an EC2 instance with other AWS services. If you are familiar with AWS infrastructure and confident with the ECS setup, you can skip this part. However, I will reuse the same VPC, Redis, and MySQL for the &lt;code&gt;production&lt;/code&gt; environment, so it is worthwhile to take a look at the setup. (You should use a separate VPC, Redis, and MySQL for an actual &lt;code&gt;production&lt;/code&gt; deployment.)&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;staging&lt;/code&gt; cloud architecture consists of the following AWS services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC with public and private subnets&lt;/li&gt;
&lt;li&gt;EC2 instance (in the public subnet)&lt;/li&gt;
&lt;li&gt;ALB in front of the EC2 instance&lt;/li&gt;
&lt;li&gt;RDS MySQL (in a private subnet)&lt;/li&gt;
&lt;li&gt;ElastiCache Redis (in a private subnet)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rF7jPD4c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v942mc1r16rjzd6kpbp8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rF7jPD4c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v942mc1r16rjzd6kpbp8.png" alt="staging infra"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Refer to the CloudFormation template for the detailed configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kokospapa8/ecs-fargate-sample-app/blob/master/config/cloudformation/cf-ecs-sample.json"&gt;kokospapa8/ecs-fargate-sample-app&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Preview&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Network - VPC&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"VpcEcsSample"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::EC2::VPC"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CidrBlock"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.10.0.0/16"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"InstanceTenancy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"EnableDnsSupport"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"EnableDnsHostnames"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ecs-sample"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"EcsSamplePrivate1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::EC2::Subnet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CidrBlock"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.10.11.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AvailabilityZone"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-west-2a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"VpcId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Ref"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VpcEcsSample"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ecs-sample-private 1"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"DependsOn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VpcEcsSample"&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"EcsSamplePublic1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::EC2::Subnet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CidrBlock"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.10.1.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AvailabilityZone"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-west-2a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"VpcId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Ref"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VpcEcsSample"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ecs-sample-public 1"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"DependsOn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VpcEcsSample"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"IgwEcsSample"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::EC2::InternetGateway"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ecs-sample IGW"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"DependsOn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VpcEcsSample"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"NATGateway"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::EC2::NatGateway"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"AllocationId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Fn::GetAtt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="s2"&gt;"EipNat"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="s2"&gt;"AllocationId"&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"SubnetId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Ref"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EcsSamplePublic1"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"DependsOn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"EcsSamplePublic1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"EipNat"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"IgwEcsSample"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"GatewayAttachment"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;RDS&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"RDSCluster"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS::RDS::DBCluster"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Properties"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"MasterUsername"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"Ref"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DBUsername"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"MasterUserPassword"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"Ref"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DBPassword"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"DBClusterIdentifier"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ecs-sample"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Engine"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"aurora"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"EngineVersion"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5.6.10a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"EngineMode"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"serverless"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"ScalingConfiguration"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"AutoPause"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"MinCapacity"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"MaxCapacity"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"SecondsUntilAutoPause"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Redis&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;cacheecssample&lt;/span&gt;&lt;span class="mi"&gt;001&lt;/span&gt;&lt;span class="s2"&gt;": {
      "&lt;/span&gt;&lt;span class="err"&gt;Type&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="err"&gt;AWS::ElastiCache::CacheCluster&lt;/span&gt;&lt;span class="s2"&gt;",
      "&lt;/span&gt;&lt;span class="err"&gt;Properties&lt;/span&gt;&lt;span class="s2"&gt;": {
        "&lt;/span&gt;&lt;span class="err"&gt;AutoMinorVersionUpgrade&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s2"&gt;",
        "&lt;/span&gt;&lt;span class="err"&gt;AZMode&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="err"&gt;single-az&lt;/span&gt;&lt;span class="s2"&gt;",
        "&lt;/span&gt;&lt;span class="err"&gt;CacheNodeType&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="err"&gt;cache.t&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="err"&gt;.micro&lt;/span&gt;&lt;span class="s2"&gt;",
        "&lt;/span&gt;&lt;span class="err"&gt;Engine&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="err"&gt;redis&lt;/span&gt;&lt;span class="s2"&gt;",
        "&lt;/span&gt;&lt;span class="err"&gt;EngineVersion&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="s2"&gt;",
        "&lt;/span&gt;&lt;span class="err"&gt;NumCacheNodes&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="s2"&gt;",
        "&lt;/span&gt;&lt;span class="err"&gt;PreferredAvailabilityZone&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="err"&gt;us-west&lt;/span&gt;&lt;span class="mi"&gt;-1&lt;/span&gt;&lt;span class="err"&gt;b&lt;/span&gt;&lt;span class="s2"&gt;",
        "&lt;/span&gt;&lt;span class="err"&gt;PreferredMaintenanceWindow&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="err"&gt;thu:&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="err"&gt;-thu:&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="s2"&gt;",
        "&lt;/span&gt;&lt;span class="err"&gt;CacheSubnetGroupName&lt;/span&gt;&lt;span class="s2"&gt;": {
          "&lt;/span&gt;&lt;span class="err"&gt;Ref&lt;/span&gt;&lt;span class="s2"&gt;": "&lt;/span&gt;&lt;span class="err"&gt;cachesubnetecssampleredissubnetgroup&lt;/span&gt;&lt;span class="s2"&gt;"
        },
        "&lt;/span&gt;&lt;span class="err"&gt;VpcSecurityGroupIds&lt;/span&gt;&lt;span class="s2"&gt;": [
          {
            "&lt;/span&gt;&lt;span class="err"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="s2"&gt;": [
              "&lt;/span&gt;&lt;span class="err"&gt;sgecssampleredis&lt;/span&gt;&lt;span class="s2"&gt;",
              "&lt;/span&gt;&lt;span class="err"&gt;GroupId&lt;/span&gt;&lt;span class="s2"&gt;"
            ]
          }
        ]
      }
    },
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This template is preconfigured for the &lt;code&gt;us-west-2&lt;/code&gt; region.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrate docker images to ECR
&lt;/h2&gt;

&lt;p&gt;Before setting up the infrastructure, I will move the docker images to the &lt;code&gt;elastic container registry&lt;/code&gt;. If you still want to use Docker Hub or another private docker image repository, you can skip this part.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;ecr setup&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html"&gt;Getting started with Amazon ECR using the AWS CLI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log in to ECR&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-2.amazonaws.com
&amp;gt; Login Succeeded
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Create the repositories with the CLI&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ aws ecr create-repository --repository-name ecs-sample-nginx --region us-east-2
$ aws ecr create-repository --repository-name ecs-sample-api --region us-east-2
$ aws ecr create-repository --repository-name ecs-sample-worker --region us-east-2
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Or create the repositories on the console.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d4besbwX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2y5uz0ojxspz7zipbm67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d4besbwX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2y5uz0ojxspz7zipbm67.png" alt="ecr creation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Build docker image if you have not build the image from previoud post
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ docker build -f config/app/Docker_base -t ecs-fargate-sample_base:latest .
$ docker build -f config/app/Docker_app -t ecs-fargate-sample_app:latest .
$ docker build -f config/app/Docker_worker -f ecs-fargate-sample_app:latest .
```
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Tag and push images
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
$ docker tag ecs-fargate-sample_app:base &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/ecs-sample-api:base
$ docker push &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/ecs-sample-api:base
$ docker tag ecs-fargate-sample_app:latest &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/ecs-sample-api:latest
$ docker push &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/ecs-sample-api:latest
$ docker tag ecs-fargate-sample_worker:base &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/ecs-sample-worker:latest
$ docker push &amp;lt;aws_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/ecs-sample-worker:latest
```
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Check the console for the images 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h242inld--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uu5yul3ky21gtebmlqe8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h242inld--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uu5yul3ky21gtebmlqe8.png" alt="ecr image pushed"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  IAM role
&lt;/h2&gt;

&lt;p&gt;Your EC2 instances need access to ECR. Instead of embedding an AWS access key in the instances, I will attach an IAM role.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;ecs-sample-ec2_role (optional)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I will give permission to read from ECR&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr_managed_policies.html"&gt;Amazon ECR Managed Policies&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Network
&lt;/h2&gt;
&lt;h3&gt;
  
  
  VPC
&lt;/h3&gt;

&lt;p&gt;I have created a VPC with CIDR 172.10.0.0/16; refer to the attached images for reference. For production I advise creating more subnets across multiple AZs for better availability. For the sake of this post, I have used just two availability zones (serverless MySQL needs at least two AZs).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Public subnet&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pst36p8---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fksef5dwu7ew8pl33wku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pst36p8---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fksef5dwu7ew8pl33wku.png" alt="public subnet"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CIDR - 172.10.1.0/24, 172.10.2.0/24

Components

- ec2 instance for `api`
- NAT GATEWAY for `worker` instance
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Private subnet&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2ZtnpCnl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xfqz8mnsue76ctsxqc6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2ZtnpCnl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xfqz8mnsue76ctsxqc6a.png" alt="private subnet"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CIDR - 172.10.11.0/24, 172.10.12.0/24

Components

- mysql
- elastic cache
- ec2 instance for `worker`
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
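As a sanity check, the subnet plan above can be verified with Python's standard `ipaddress` module (the CIDRs are the ones listed in this section):

```python
import ipaddress

vpc = ipaddress.ip_network("172.10.0.0/16")
public = [ipaddress.ip_network(c) for c in ("172.10.1.0/24", "172.10.2.0/24")]
private = [ipaddress.ip_network(c) for c in ("172.10.11.0/24", "172.10.12.0/24")]

# every subnet must sit inside the VPC, and no two subnets may overlap
subnets = public + private
assert all(s.subnet_of(vpc) for s in subnets)
assert all(not a.overlaps(b)
           for i, a in enumerate(subnets)
           for b in subnets[i + 1:])
print("subnet layout OK")
```

This is handy when you later add more subnets for multi-AZ production setups.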
&lt;h3&gt;
  
  
  Security group
&lt;/h3&gt;

&lt;p&gt;I have added minimal security measures for the sake of the post. This is just a sample; you should not use it in production.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Setup&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ecs-sample-VPC&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Opened all ports to public (not safe)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ecs-sample-loadbalancer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;80 and 443 to public&lt;/li&gt;
&lt;li&gt;80 with ec2-api instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ecs-sample-mysql&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3306 with ec2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ecs-sample-ec2-api&lt;/strong&gt; and &lt;strong&gt;ecs-sample-ec2-worker&lt;/strong&gt; (they should be separate, but I used a single security group for both EC2 instances; ideally port 80 would be blocked for the worker instance)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3306 mysql&lt;/li&gt;
&lt;li&gt;6379 redis,&lt;/li&gt;
&lt;li&gt;80 for ALB and public ip access&lt;/li&gt;
&lt;li&gt;22 for ssh&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ecs-sample-redis&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;6379 with ec2&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  ALB
&lt;/h3&gt;

&lt;p&gt;I created an ALB with a simple setup attached to the public subnet. We don't have instances yet, so I just created a target group with nothing attached.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;configuration

&lt;ul&gt;
&lt;li&gt;name: ecs-sample-lb&lt;/li&gt;
&lt;li&gt;Listener: port 80&lt;/li&gt;
&lt;li&gt;Security group: ecs-sample-lb&lt;/li&gt;
&lt;li&gt;VPC and subnet - public subnet created above&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LvimxiHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wjkor7dbv8a4sdfyvp2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LvimxiHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wjkor7dbv8a4sdfyvp2z.png" alt="alb config 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_5D5lG0q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b6xt3xedwhntkuwezben.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_5D5lG0q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b6xt3xedwhntkuwezben.png" alt="alb result 1"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Database
&lt;/h2&gt;
&lt;h3&gt;
  
  
  RDS - Mysql
&lt;/h3&gt;

&lt;p&gt;I will use serverless MySQL instead of a traditional MySQL instance to minimize cost. (Serverless MySQL only supports MySQL engine versions up to 5.6.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;configuration

&lt;ul&gt;
&lt;li&gt;database - &lt;code&gt;sample&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;username - &lt;code&gt;admin&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;network - VPC and private subnet created above&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DZ_JCIhT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y8uqfzejh18n9novasxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DZ_JCIhT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y8uqfzejh18n9novasxh.png" alt="mysql db"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take note of your MySQL endpoint and password; we will use them later for environment variable setup.&lt;/p&gt;
&lt;h3&gt;
  
  
  Elasticcache - Redis
&lt;/h3&gt;

&lt;p&gt;Take note of your primary endpoint; we will use it later for environment variable setup.&lt;/p&gt;
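To confirm that the endpoints you noted down are reachable, a small TCP check can be sketched in Python. The `can_connect` helper and the commented-out hostnames are illustrative, not part of the project; run it from an instance inside the VPC.

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# from an ec2 instance inside the VPC, e.g.:
# can_connect("<mysql_endpoint>", 3306)
# can_connect("<redis_primary_endpoint>", 6379)
```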
&lt;h2&gt;
  
  
  Compute instances
&lt;/h2&gt;

&lt;p&gt;We will start two EC2 instances: one for the api in the public subnet and another for the worker in the private subnet.&lt;/p&gt;
&lt;h3&gt;
  
  
  EC2 instance - API
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;configurations

&lt;ul&gt;
&lt;li&gt;AMI : Ubuntu image 20.04 LTS&lt;/li&gt;
&lt;li&gt;Network: public subnet&lt;/li&gt;
&lt;li&gt;auto-assign ip : enabled&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---AMQEUwN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j27obf0rlo0d6kuc62ut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---AMQEUwN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j27obf0rlo0d6kuc62ut.png" alt="ec2 conf1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vxdHRdpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h8ihw4gb3hxgztwmkfrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vxdHRdpx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h8ihw4gb3hxgztwmkfrc.png" alt="ec2 conf2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1jx7u9Uy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vpu3ljoka1u6ojsfwpad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1jx7u9Uy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vpu3ljoka1u6ojsfwpad.png" alt="ec2 conf3"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  EC2 instance - worker
&lt;/h3&gt;

&lt;p&gt;This instance is basically the same as the &lt;code&gt;api&lt;/code&gt; instance, but it needs to be in the private subnet. I advise making an AMI from the &lt;code&gt;api&lt;/code&gt; instance and launching it in the private subnet. Since there will be no public IP attached, you need to tunnel through the &lt;code&gt;api&lt;/code&gt; or a &lt;code&gt;bastion&lt;/code&gt; instance to connect to the instance's private IP. (You can use the &lt;a href="https://www.notion.so/kokospapa/How-to-deploy-django-app-to-ECS-Fargate-part2-93a372ebd8b941cea99d03de8ca98863#f8064c59d13c42b78a576173a979c34b"&gt;following method&lt;/a&gt; to connect directly since we have a NAT gateway deployed.)&lt;/p&gt;
&lt;h1&gt;
  
  
  Deployment
&lt;/h1&gt;

&lt;p&gt;Once the instance is up and running, we are ready to deploy docker containers.&lt;/p&gt;

&lt;p&gt;Let's ssh into the ec2 instance.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"/path/to/keyfile/"&lt;/span&gt; ubuntu@&amp;lt;public_ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Docker setup
&lt;/h3&gt;

&lt;p&gt;Next, you need to install Docker Engine and docker-compose; refer to the following &lt;a href="https://docs.docker.com/engine/install/ubuntu/"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;install steps&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/engine/install/ubuntu/"&gt;Install Docker Engine on Ubuntu&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/compose/install/"&gt;Install Docker Compose&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/engine/install/linux-postinstall/"&gt;Post-installation steps for Linux&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# docker engine&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    apt-transport-https &lt;span class="se"&gt;\&lt;/span&gt;
    ca-certificates &lt;span class="se"&gt;\&lt;/span&gt;
    curl &lt;span class="se"&gt;\&lt;/span&gt;
    gnupg-agent &lt;span class="se"&gt;\&lt;/span&gt;
    software-properties-common
&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;add-apt-repository &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="s2"&gt;"deb [arch=amd64] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
   &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
   stable"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce docker-ce-cli containerd.io
&lt;span class="c"&gt;# docker-compose&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt;  &lt;span class="se"&gt;\ &lt;/span&gt;
&lt;span class="s2"&gt;"https://github.com/docker/compose/releases/download/1.26.0/docker-compose-&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt;
&lt;span class="nt"&gt;-o&lt;/span&gt; /usr/local/bin/docker-compose

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Environment variables
&lt;/h3&gt;

&lt;p&gt;You need to set a couple of environment variables for staging, since we are using Redis and MySQL.&lt;/p&gt;

&lt;p&gt;Please take a closer look at &lt;code&gt;settings/staging.py&lt;/code&gt; for &lt;code&gt;DB_USER&lt;/code&gt; and &lt;code&gt;DB_NAME&lt;/code&gt;. If they are different, update the file accordingly.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SECRET_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DB_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DB_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;REDIS_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
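For reference, here is a minimal sketch of how `settings/staging.py` might consume these variables. The `DB_NAME`/`DB_USER` defaults mirror the RDS configuration above, but the backend names and structure are assumptions, not copied from the repo.

```python
import os

# hypothetical excerpt in the spirit of settings/staging.py
SECRET_KEY = os.environ.get("SECRET_KEY", "")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": os.environ.get("DB_NAME", "sample"),   # database created above
        "USER": os.environ.get("DB_USER", "admin"),    # username created above
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", ""),         # RDS endpoint
        "PORT": "3306",
    }
}

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        # ElastiCache primary endpoint
        "LOCATION": "redis://%s:6379/0" % os.environ.get("REDIS_HOST", ""),
    }
}
```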

&lt;h3&gt;
  
  
  Run containers
&lt;/h3&gt;

&lt;p&gt;Check out the sources from git and run docker-compose.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/kokospapa8/ecs-fargate-sample-app.git
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;ecs-fargate-sample-app
&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose-staging-api.yml up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;span class="c"&gt;# add -d option for detach mode I explicitly left it out to see access logs&lt;/span&gt;
&lt;span class="c"&gt;# if you get error running docker-compose or docker, you probably don't have previlige to run docker as user, &lt;/span&gt;
&lt;span class="c"&gt;# refer to https://docs.docker.com/engine/install/linux-postinstall/&lt;/span&gt;

&lt;span class="c"&gt;# instead of building the whole images you can pull the image from ECR - https://docs.docker.com/compose/reference/pull/&lt;/span&gt;
&lt;span class="c"&gt;# in order to pull the image from repository make sure you add images filed to each docker-compose file - https://docs.docker.com/compose/compose-file/#image&lt;/span&gt;
&lt;span class="c"&gt;# &lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose-staging-api.yml pull
&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose-staging-api.yml up
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Once the docker containers are up and running with no errors shown, access the public IP of your app instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://asciinema.org/a/tNnXTRKWU6u0572aS3GSNNo00"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u-KTCtid--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://asciinema.org/a/tNnXTRKWU6u0572aS3GSNNo00.svg" alt="asciicast"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Attach instance behind ALB
&lt;/h3&gt;

&lt;p&gt;Attach your instance to the ALB's target group; you will then start seeing the ALB's health checks in the log.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--827Gn2P2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qacmn2ijjtxz6yl2eva6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--827Gn2P2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qacmn2ijjtxz6yl2eva6.png" alt="target group"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  For worker instance
&lt;/h3&gt;

&lt;p&gt;Follow the same docker setup steps as above, or create another AMI image from the api instance, and start the instance in the private subnet.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/kokospapa8/ecs-fargate-sample-app.git
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;ecs-fargate-sample-app
&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose-staging-worker.yml up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h1&gt;
  
  
  Caveats
&lt;/h1&gt;

&lt;p&gt;Everything checked out. However, in order to use this setup at a production level, there are a couple of problems we need to address.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scaling services&lt;/li&gt;
&lt;li&gt;Updating sources for further development&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Scaling services
&lt;/h2&gt;

&lt;p&gt;We can create a launch configuration and attach it to an autoscaling group for automatic scaling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/plans/userguide/what-is-aws-auto-scaling.html"&gt;What Is AWS Auto Scaling?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to prepare a couple of things for this to work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each instance must have environment variables injected on startup (there are a couple of ways to do this)&lt;/li&gt;
&lt;li&gt;Pull new source&lt;/li&gt;
&lt;li&gt;Run docker-compose on start&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are various ways to accomplish this; you can search for other methods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/49594391/aws-ec2-run-script-program-at-startup"&gt;Aws Ec2 run script program at startup&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Updating sources for further development
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Configuration changes - environment&lt;/li&gt;
&lt;li&gt;pulling new src from git&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Brute force
&lt;/h3&gt;

&lt;p&gt;Log into each instance, then update the source and build the image manually.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ssh into each ec2 instance
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"/path/to/keyfile/"&lt;/span&gt; ubuntu@&amp;lt;public_ip&amp;gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git pull
&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose-staging-api.yml up &lt;span class="nt"&gt;--build&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Obviously this is not a great idea if you have multiple EC2 instances running. IP addresses also change when instances run behind an autoscaling group.&lt;/p&gt;
&lt;h3&gt;
  
  
  Fabric
&lt;/h3&gt;

&lt;p&gt;You can use &lt;code&gt;fabric&lt;/code&gt; to alleviate this painful process. It's a library designed to execute shell commands remotely over SSH.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.fabfile.org/"&gt;Fabric&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have provided a sample fabfile for accessing ec2 instances in public subnets.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_get_ec2_instances&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;instances&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connect_to_region&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;region_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ap-northeast-2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;#TODO beta env
&lt;/span&gt;        &lt;span class="n"&gt;aws_access_key_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"AWS_ACCESS_KEY_ID"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;aws_secret_access_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"AWS_SECRET_ACCESS_KEY"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;reservations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_all_reservations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;'tag:Name'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s"&gt;'ecs-sample-api'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;'tag:env'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s"&gt;'staging'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;reservations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instances&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;instances&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;boto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;EC2ResponseError&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;

    &lt;span class="n"&gt;instances&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;instances&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;instances&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;a href="https://github.com/kokospapa8/ecs-fargate-sample-app/blob/master/fabfile.py"&gt;kokospapa8/ecs-fargate-sample-app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you run the fabfile, it uses the boto library to look up EC2 instances whose Name tag is &lt;code&gt;ecs-sample-api&lt;/code&gt;. Therefore you need to add a Name tag of &lt;code&gt;ecs-sample-api&lt;/code&gt; to each EC2 instance.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# you need to install fab and boto library&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;fabric, boto
&lt;span class="c"&gt;#edit fabfile.py &lt;/span&gt;
&lt;span class="c"&gt;# SSH_KEY= "PATH/TO/YOUR/SSH_KEY"&lt;/span&gt;
&lt;span class="c"&gt;# set env for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with ec2 describe policies&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;fab &lt;span class="nt"&gt;--list&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;fab gitpull
&lt;span class="nv"&gt;$ &lt;/span&gt;fab docker-restart
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
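The fabfile above uses the legacy boto interface. As a minimal sketch, the same tag-based lookup can be written with boto3; the helper names and the default region below are my own assumptions, not from the sample repo.

```python
# Sketch of the tag-based EC2 lookup with boto3 (hypothetical helper names).

def build_tag_filters(tags):
    """Convert {'Name': 'ecs-sample-api'} into boto3 Filters syntax."""
    return [{"Name": f"tag:{key}", "Values": [value]} for key, value in sorted(tags.items())]

def get_instance_ips(tags, region="us-east-1"):
    """Return public IPs of instances matching the given tags."""
    import boto3  # imported lazily so build_tag_filters works without boto3 installed

    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_instances(Filters=build_tag_filters(tags))
    ips = []
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            if instance.get("PublicIpAddress"):
                ips.append(instance["PublicIpAddress"])
    return ips
```

With credentials that allow `ec2:DescribeInstances`, calling `get_instance_ips({"Name": "ecs-sample-api", "env": "staging"})` would return the hosts the fabfile should target.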

&lt;h1&gt;
  
  
  Wrap up
&lt;/h1&gt;

&lt;p&gt;We are done with the staging environment setup. We now have Docker containers running in a cloud environment. However, there are a couple of concerns I would like to address.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accessing each instance manually to update the source is neither secure nor scalable.&lt;/li&gt;
&lt;li&gt;I would like container deployments to be immutable.&lt;/li&gt;
&lt;li&gt;I would like to fully utilize my compute nodes for containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's move on to part 3.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/kokospapa8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VW75tZgW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--B1QWRgg7--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/367659/1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg" alt="kokospapa8 image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part3-3i7p" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to deploy django app to ECS Fargate part3&lt;/h2&gt;
      &lt;h3&gt;Jinwook Baek ・ Jul  3 ・ 12 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;



&lt;h1&gt;
  
  
  Reference
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.concurrencylabs.com/blog/choose-your-aws-region-wisely/"&gt;Save yourself a lot of pain (and money) by choosing your AWS Region wisely&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html"&gt;VPC with public and private subnets (NAT)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.fabfile.org/"&gt;Fabric&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ko/blogs/security/securely-connect-to-linux-instances-running-in-a-private-amazon-vpc/"&gt;Securely Connect to Linux Instances Running in a Private Amazon VPC | Amazon Web Services&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html"&gt;What Is Amazon EC2 Auto Scaling?&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to deploy django app to ECS Fargate Part1</title>
      <dc:creator>Jinwook Baek</dc:creator>
      <pubDate>Fri, 03 Jul 2020 06:43:22 +0000</pubDate>
      <link>https://dev.to/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part1-2ga</link>
      <guid>https://dev.to/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part1-2ga</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post was originally published on notion. Click &lt;a href="https://www.notion.so/kokospapa/How-to-deploy-django-app-to-ECS-Fargate-Part1-a1e99c19b2a3423585e67f0b1ad81cbd"&gt;here&lt;/a&gt; if you prefer to read in notion page which has better readability.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;This blog post illustrates a development cycle using a Django app container. I assume readers are already somewhat familiar with Docker and docker-compose. Although I used Django for app development, the walkthrough is language-agnostic since it is about containerized application deployment.&lt;/p&gt;

&lt;p&gt;The walkthrough is divided into three parts, one for each environment. The first part describes the architecture of the app (API and async worker) and how the containers are run in the &lt;code&gt;local&lt;/code&gt; environment. The second part shows how to deploy the Docker containers to the cloud on a single EC2 instance for the &lt;code&gt;staging&lt;/code&gt; environment. The third part illustrates how to convert the traditional EC2 deployment into ECS with Fargate and GitHub Actions for the &lt;code&gt;prod&lt;/code&gt; environment.&lt;/p&gt;

&lt;p&gt;You can skip to the next part if you are already familiar with Docker and AWS architecture.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/kokospapa8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VW75tZgW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--B1QWRgg7--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/367659/1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg" alt="kokospapa8 image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part2-io1" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to deploy django app to ECS Fargate Part2&lt;/h2&gt;
      &lt;h3&gt;Jinwook Baek ・ Jul  3 ・ 10 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;
 
&lt;h1&gt;
  
  
  Setup
&lt;/h1&gt;

&lt;p&gt;First, we need to prepare an application that is ready for development.&lt;/p&gt;

&lt;p&gt;This application consists of three Docker containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;nginx&lt;/code&gt; - web server&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;app&lt;/code&gt; - API server&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;worker&lt;/code&gt; - async worker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;app&lt;/code&gt; and &lt;code&gt;worker&lt;/code&gt; share the same base image; the main difference is that &lt;code&gt;app&lt;/code&gt; serves HTTP requests behind the &lt;code&gt;nginx&lt;/code&gt; web server using Gunicorn and WSGI. Throughout this walkthrough the &lt;code&gt;nginx&lt;/code&gt; and &lt;code&gt;app&lt;/code&gt; containers will share a common lifecycle. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;worker&lt;/code&gt; processes jobs asynchronously with &lt;a href="https://github.com/rq/django-rq"&gt;Django-RQ&lt;/a&gt;, using a Redis queue as the broker.&lt;/p&gt;
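As a sketch of the Django-RQ pattern (the job below is hypothetical, not from the sample app): the `app` container enqueues a plain function call onto Redis, and the `worker` container picks it up and executes it.

```python
# Hypothetical Django-RQ job: `app` enqueues it, `worker` executes it.

def send_welcome_email(user_id):
    """The job body is a plain function, so it can be unit-tested without Redis."""
    return f"sent welcome email to user {user_id}"

def enqueue_welcome_email(user_id):
    """Push the job onto the default queue; requires the Redis broker to be running."""
    import django_rq  # imported lazily so the job itself stays dependency-free

    queue = django_rq.get_queue("default")
    return queue.enqueue(send_welcome_email, user_id)
```

Keeping the job body free of queue-specific code is what makes the `app` and `worker` containers able to share the same base image.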

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5J_CUrM6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k5fxgkowa3dmsl4t2pe7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5J_CUrM6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k5fxgkowa3dmsl4t2pe7.png" alt="local docker diagram"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Download source
&lt;/h2&gt;

&lt;p&gt;You can download the app source code here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kokospapa8/ecs-fargate-sample-app"&gt;github: kokospapa8/ecs-fargate-sample-app&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker Containers
&lt;/h2&gt;
&lt;h3&gt;
  
  
  APP
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Base app image&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since &lt;code&gt;app&lt;/code&gt; and &lt;code&gt;worker&lt;/code&gt; both use the same base image, and the pip step takes quite some time during the image build, I have uploaded the base image to a Docker Hub repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hub.docker.com/r/kokospapa8/django-sample/tags"&gt;Docker Hub&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Creating image based on official python3 image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.6&lt;/span&gt;

&lt;span class="c"&gt;# Your contacts, so people blame you afterwards&lt;/span&gt;
&lt;span class="k"&gt;MAINTAINER&lt;/span&gt;&lt;span class="s"&gt; Jinwook Baek &amp;lt;kokos.papa8@gmail.com&amp;gt;&lt;/span&gt;

&lt;span class="c"&gt;# Sets dumping log messages directly to stream instead of buffering&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONUNBUFFERED 1&lt;/span&gt;

&lt;span class="c"&gt;# Creating and putting configurations&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /config
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; config/app /config/&lt;/span&gt;

&lt;span class="c"&gt;# Installing all python dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /config/requirements.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;APP&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; kokospapa8/django-sample:base&lt;/span&gt;

&lt;span class="c"&gt;# Installing all python dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /config/requirements.txt

&lt;span class="c"&gt;# Open port 8000 to outside world&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8000&lt;/span&gt;

&lt;span class="c"&gt;# When container starts, this script will be executed.&lt;/span&gt;
&lt;span class="c"&gt;# Note that it is NOT executed during building&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["sh", "/config/django_app.sh"]&lt;/span&gt;

&lt;span class="c"&gt;# Creating and putting application inside container&lt;/span&gt;
&lt;span class="c"&gt;# and setting it to working directory (meaning it is going to be default)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /ecs-sample
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /ecs-sample&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; ecs-sample /ecs-sample/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Worker&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; kokospapa8/django-sample:base&lt;/span&gt;

&lt;span class="c"&gt;# Installing all python dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /config/requirements.txt

&lt;span class="c"&gt;# When container starts, this script will be executed.&lt;/span&gt;
&lt;span class="c"&gt;# Note that it is NOT executed during building&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["sh", "/config/django_worker.sh"]&lt;/span&gt;

&lt;span class="c"&gt;# Creating and putting application inside container&lt;/span&gt;
&lt;span class="c"&gt;# and setting it to working directory (meaning it is going to be default)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /ecs-sample
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /ecs-sample&lt;/span&gt;
&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; ecs-sample /ecs-sample/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Nginx config
&lt;/h3&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# app_local.conf
server {
  listen 80;

  # all requests proxies to app
  location / {
        proxy_pass http://app:8000;
    }

  # domain localhost
  server_name localhost;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Environment
&lt;/h3&gt;

&lt;p&gt;As you can see from the &lt;a href="https://github.com/kokospapa8/ecs-fargate-sample-app/blob/master/ecs-sample/settings/secrets.py"&gt;source code&lt;/a&gt;, you need to provide some sensitive data via environment variables.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# settings/secrets.py
&lt;/span&gt;&lt;span class="n"&gt;SECRET_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_env_variable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"SECRET_KEY"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                        &lt;span class="p"&gt;......&lt;/span&gt;
&lt;span class="n"&gt;REDIS_HOST&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_env_variable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"REDIS_HOST"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="o"&gt;----&lt;/span&gt;

&lt;span class="c1"&gt;# export SECRET_KEY = '' #only SECRET_KEY is needed to be set for local dev
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
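A minimal sketch of what a `get_env_variable` helper like the one in `settings/secrets.py` typically looks like (the actual implementation is in the sample repo; real Django settings would usually raise `ImproperlyConfigured` rather than `RuntimeError`):

```python
import os

def get_env_variable(var_name, default=None):
    """Read a setting from the environment, failing loudly when it is missing."""
    value = os.environ.get(var_name, default)
    if value is None:
        # django.core.exceptions.ImproperlyConfigured is the usual choice in settings
        raise RuntimeError(f"Set the {var_name} environment variable")
    return value
```

Failing at import time like this surfaces a missing `SECRET_KEY` or `REDIS_HOST` immediately when the container starts, instead of much later at request time.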

&lt;h3&gt;
  
  
  Docker compose
&lt;/h3&gt;

&lt;p&gt;docker-compose-local.yml&lt;/p&gt;

&lt;p&gt;For the local environment only, we are using SQLite and a &lt;code&gt;redis&lt;/code&gt; Docker container to mimic the cloud environment. We will be using RDS (MySQL) and ElastiCache (Redis) in the &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;production&lt;/code&gt; environments.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# File structure version&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Our django application&lt;/span&gt;
  &lt;span class="c1"&gt;# Build from remote dockerfile&lt;/span&gt;
  &lt;span class="c1"&gt;# Connect local app folder with image folder, so changes will be pushed to image instantly&lt;/span&gt;
  &lt;span class="c1"&gt;# Open port 8000&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config/app/Dockerfile_app&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./ecs-sample:/ecs-sample&lt;/span&gt;
    &lt;span class="na"&gt;expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8000"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SECRET_KEY&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ENV=dev&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;DJANGO_SETTINGS_MODULE=settings.local&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;REDIS_HOST=redis&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;

  &lt;span class="na"&gt;worker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config/app/Dockerfile_worker&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;worker&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./ecs-sample:/ecs-sample&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SECRET_KEY&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ENV=dev&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;DJANGO_SETTINGS_MODULE=settings.local&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;REDIS_HOST=redis&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;

  &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:5.0.5&lt;/span&gt;
    &lt;span class="na"&gt;expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6379"&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;

  &lt;span class="na"&gt;nginx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;80:80"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./config/nginx/app_local.conf:/etc/nginx/conf.d/app_local.conf&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Let's run the containers&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose-local.yml up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;a href="https://asciinema.org/a/Wv7ogrpWRNVNRF9f6NdtPOW7G"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kJw-MpLr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://asciinema.org/a/Wv7ogrpWRNVNRF9f6NdtPOW7G.svg" alt="asciicast"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Check result
&lt;/h3&gt;

&lt;p&gt;Open the following URLs to check that you are getting the correct responses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://localhost/api/healthcheck/"&gt;http://localhost/api/healthcheck/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://localhost/api/v1/posts/"&gt;http://localhost/api/v1/posts/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://localhost/admin/posts/"&gt;http://localhost/admin/posts/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://localhost/django-rq/"&gt;http://localhost/django-rq/&lt;/a&gt; - to check the async worker
&lt;/li&gt;
&lt;/ul&gt;
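The manual checks above can also be scripted. A small stdlib-only smoke test, assuming the compose stack is up (the endpoint list comes from the bullets above; the helper names are my own):

```python
from urllib.error import URLError
from urllib.request import urlopen

# The API endpoints nginx should proxy to the app container.
ENDPOINTS = ["/api/healthcheck/", "/api/v1/posts/", "/django-rq/"]

def endpoint_urls(base="http://localhost"):
    """Build full URLs for each endpoint under the given base."""
    return [base.rstrip("/") + path for path in ENDPOINTS]

def smoke_test(base="http://localhost"):
    """Return {url: HTTP status or error string} for each endpoint."""
    results = {}
    for url in endpoint_urls(base):
        try:
            results[url] = urlopen(url, timeout=5).status
        except URLError as exc:
            results[url] = str(exc)
    return results
```

Running `smoke_test()` against the local stack should report status 200 for each URL once all four containers are healthy.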
&lt;h1&gt;
  
  
  Moving on
&lt;/h1&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kokospapa8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VW75tZgW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--B1QWRgg7--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/367659/1067d331-541f-4ec9-b7dc-c49540b89ff9.jpg" alt="kokospapa8 image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/kokospapa8/how-to-deploy-django-app-to-ecs-fargate-part2-io1" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to deploy django app to ECS Fargate Part2&lt;/h2&gt;
      &lt;h3&gt;Jinwook Baek ・ Jul  3 ・ 10 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;



&lt;h1&gt;
  
  
  Reference
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/compose/"&gt;Overview of Docker Compose&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04"&gt;How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 16.04 | DigitalOcean&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/engine/reference/builder/"&gt;Dockerfile reference&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/"&gt;Dockerizing Django with Postgres, Gunicorn, and Nginx&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
