<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Van Hoang Kha</title>
    <description>The latest articles on DEV Community by Van Hoang Kha (@vanhoangkha14052000).</description>
    <link>https://dev.to/vanhoangkha14052000</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F898432%2F4c273edd-8f31-4e8d-b2b6-918e4a7661f6.png</url>
      <title>DEV Community: Van Hoang Kha</title>
      <link>https://dev.to/vanhoangkha14052000</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vanhoangkha14052000"/>
    <language>en</language>
    <item>
      <title>AWS Bookstore Demo App</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Fri, 21 Oct 2022 10:08:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-bookstore-demo-app-4aef</link>
      <guid>https://dev.to/aws-builders/aws-bookstore-demo-app-4aef</guid>
      <description>&lt;h2&gt;
  
  
  AWS Bookstore Demo App
&lt;/h2&gt;

&lt;p&gt;AWS Bookstore Demo App is a full-stack sample web application that creates a storefront (and backend) for customers to shop for fictitious books. The entire application can be created with a single CloudFormation template. &lt;strong&gt;&lt;a href="https://d2h3ljlsmzojxz.cloudfront.net/"&gt;Try out the deployed application here&lt;/a&gt;&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;You can browse and search for books, look at recommendations and best sellers, manage your cart, checkout, view your orders, and more.  Get started with building your own below!&lt;br&gt;
 &lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Outline
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Overview&lt;/li&gt;
&lt;li&gt;
Instructions

&lt;ul&gt;
&lt;li&gt;Getting started&lt;/li&gt;
&lt;li&gt;Cleaning up&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Architecture&lt;/li&gt;
&lt;li&gt;
Implementation details

&lt;ul&gt;
&lt;li&gt;Amazon DynamoDB&lt;/li&gt;
&lt;li&gt;Amazon API Gateway&lt;/li&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;Amazon ElastiCache for Redis&lt;/li&gt;
&lt;li&gt;Amazon Neptune&lt;/li&gt;
&lt;li&gt;Amazon Elasticsearch Service&lt;/li&gt;
&lt;li&gt;AWS IAM&lt;/li&gt;
&lt;li&gt;Amazon Cognito&lt;/li&gt;
&lt;li&gt;Amazon CloudFront and Amazon S3&lt;/li&gt;
&lt;li&gt;Amazon VPC&lt;/li&gt;
&lt;li&gt;Amazon CloudWatch&lt;/li&gt;
&lt;li&gt;AWS CodeCommit, AWS CodePipeline, AWS CodeBuild&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Running your web application locally&lt;/li&gt;
&lt;li&gt;Considerations for demo purposes&lt;/li&gt;
&lt;li&gt;Known limitations&lt;/li&gt;
&lt;li&gt;Additions, forks, and contributions&lt;/li&gt;
&lt;li&gt;Questions and contact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;The goal of AWS Bookstore Demo App is to provide a fully-functional web application that utilizes multiple purpose-built AWS databases and native AWS components like Amazon API Gateway and AWS CodePipeline. Increasingly, modern web apps are built using a multitude of different databases. Developers break their large applications into individual components and select the best database for each job. Let's consider AWS Bookstore Demo App as an example. The app contains multiple experiences such as a shopping cart, product search, recommendations, and a top sellers list. For each of these use cases, the app makes use of a purpose-built database so the developer never has to compromise on functionality, performance, or scale. &lt;/p&gt;

&lt;p&gt;The provided CloudFormation template automates the entire creation and deployment of AWS Bookstore Demo App.  The template includes the following components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database components&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product catalog/shopping cart - Amazon DynamoDB offers fast, predictable performance for the key-value lookups needed in the product catalog, as well as the shopping cart and order history.  In this implementation, we store unique identifiers, titles, descriptions, quantities, locations, and prices.&lt;/li&gt;
&lt;li&gt;Search - Amazon Elasticsearch Service enables full-text search for our storefront, enabling users to find products based on a variety of terms including author, title, and category.&lt;/li&gt;
&lt;li&gt;Recommendations - Amazon Neptune provides social recommendations based on what a user's friends have purchased, scaling as the storefront grows with more products, pages, and users.&lt;/li&gt;
&lt;li&gt;Top sellers list - Amazon ElastiCache for Redis reads order information from Amazon DynamoDB Streams, creating a leaderboard of the “Top 20” purchased or rated books.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Application components&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Serverless service backend – Amazon API Gateway powers the interface layer between the frontend and backend, and invokes serverless compute with AWS Lambda.
&lt;/li&gt;
&lt;li&gt;Web application blueprint – We include a React web application pre-integrated out-of-the-box with tools such as React Bootstrap, Redux, React Router, internationalization, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure components&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous deployment code pipeline – AWS CodePipeline and AWS CodeBuild help you build, test, and release your application code. &lt;/li&gt;
&lt;li&gt;Serverless web application – Amazon CloudFront and Amazon S3 provide a globally-distributed application. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can choose to customize the template to create your own bookstore, modify it to make a different type of store, or change it to make a completely different type of web application.  &lt;/p&gt;

&lt;p&gt;AWS Bookstore Demo App is built on-top of &lt;strong&gt;&lt;a href="https://github.com/awslabs/aws-full-stack-template"&gt;AWS Full-Stack Template&lt;/a&gt;&lt;/strong&gt;, which provides the foundational services, components, and plumbing needed to get a basic web application up and running. Users can build on top of AWS Full-Stack Template to create any application they envision, whether a travel booking tool, a blog, or another web app.  This AWS Bookstore Demo App is just one example of what you can create using AWS Full-Stack Template. &lt;/p&gt;

&lt;p&gt;Watch the recorded talk and demo &lt;a href="https://youtu.be/-pb-DkD6cWg?t=1309"&gt;here&lt;/a&gt;. &lt;br&gt;
 &lt;/p&gt;



&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;IMPORTANT NOTE:&lt;/strong&gt; Creating this demo application in your AWS account will create and consume AWS resources, which &lt;strong&gt;will cost money&lt;/strong&gt;.  We estimate that running this demo application will cost ~$0.45/hour with light usage.  Be sure to shut down/remove all resources once you are finished to avoid ongoing charges to your AWS account (see instructions on cleaning up/tearing down below).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  Getting started
&lt;/h3&gt;

&lt;p&gt;To get AWS Bookstore Demo App up and running in your own AWS account, follow these steps (if you do not have an AWS account, please see &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/"&gt;How do I create and activate a new Amazon Web Services account?&lt;/a&gt;):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log into the &lt;a href="https://console.aws.amazon.com/"&gt;AWS console&lt;/a&gt; if you are not already.
&lt;em&gt;Note: If you are logged in as an IAM user, ensure your account has permissions to create and manage the necessary resources and components for this application.&lt;/em&gt; &lt;/li&gt;
&lt;li&gt;Choose one of the &lt;strong&gt;Launch Stack&lt;/strong&gt; buttons below for your desired AWS region to open the AWS CloudFormation console and create a new stack. AWS Bookstore Demo App is supported in the following regions:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Region name&lt;/th&gt;
&lt;th&gt;Region code&lt;/th&gt;
&lt;th&gt;Launch&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;US East (N. Virginia)&lt;/td&gt;
&lt;td&gt;us-east-1&lt;/td&gt;
&lt;td&gt;&lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=MyBookstore&amp;amp;templateURL=https://s3.amazonaws.com/aws-bookstore-demo-app-us-east-1/master-fullstack.yaml"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uH7ENuuA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.rawgit.com/buildkite/cloudformation-launch-stack-button-svg/master/launch-stack.svg" alt="Launch Stack" width="144" height="27"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US West (Oregon)&lt;/td&gt;
&lt;td&gt;us-west-2&lt;/td&gt;
&lt;td&gt;&lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=MyBookstore&amp;amp;templateURL=https://s3.amazonaws.com/aws-bookstore-demo-app-us-west-2/master-fullstack.yaml"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uH7ENuuA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.rawgit.com/buildkite/cloudformation-launch-stack-button-svg/master/launch-stack.svg" alt="Launch Stack" width="144" height="27"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EU (Ireland)&lt;/td&gt;
&lt;td&gt;eu-west-1&lt;/td&gt;
&lt;td&gt;&lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/new?stackName=MyBookstore&amp;amp;templateURL=https://s3.amazonaws.com/aws-bookstore-demo-app-eu-west-1/master-fullstack.yaml"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uH7ENuuA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.rawgit.com/buildkite/cloudformation-launch-stack-button-svg/master/launch-stack.svg" alt="Launch Stack" width="144" height="27"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EU (Frankfurt)&lt;/td&gt;
&lt;td&gt;eu-central-1&lt;/td&gt;
&lt;td&gt;&lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-central-1#/stacks/new?stackName=MyBookstore&amp;amp;templateURL=https://s3.amazonaws.com/aws-bookstore-demo-app-eu-central-1/master-fullstack.yaml"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uH7ENuuA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.rawgit.com/buildkite/cloudformation-launch-stack-button-svg/master/launch-stack.svg" alt="Launch Stack" width="144" height="27"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Continue through the CloudFormation wizard steps

&lt;ol&gt;
&lt;li&gt;Name your stack, e.g. MyBookstore&lt;/li&gt;
&lt;li&gt;Name your S3 bucket (must be lowercase and unique across all existing bucket names in Amazon S3).  See &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev//BucketRestrictions.html#bucketnamingrules"&gt;bucket naming rules&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Provide a project name (must be lowercase, letters only, and &lt;strong&gt;under twelve (12) characters&lt;/strong&gt;).  This is used when naming your resources, e.g. tables, search domain, etc.&lt;/li&gt;
&lt;li&gt;After reviewing, check the blue box for creating IAM resources.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Create stack&lt;/strong&gt;.  This will take ~20 minutes to complete.&lt;/li&gt;
&lt;li&gt;Once the CloudFormation deployment is complete, check the status of the build in the &lt;a href="https://console.aws.amazon.com/codesuite/codepipeline/pipelines"&gt;CodePipeline&lt;/a&gt; console and ensure it has succeeded.&lt;/li&gt;
&lt;li&gt;Sign into your application 

&lt;ol&gt;
&lt;li&gt;The output of the CloudFormation stack creation will provide a CloudFront URL (in the &lt;strong&gt;Outputs&lt;/strong&gt; table of your stack details page).  Click the link or copy and paste the CloudFront URL into your browser.&lt;/li&gt;
&lt;li&gt;You can sign into your application by registering an email address and a password.  Choose &lt;strong&gt;Sign up to explore the demo&lt;/strong&gt; to register.  The registration/login experience is run in your AWS account, and the supplied credentials are stored in Amazon Cognito.
&lt;em&gt;Note: given that this is a demo application, we highly suggest that you do not use an email and password combination that you use for other purposes (such as an AWS account, email, or e-commerce site).&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Once you provide your credentials, you will receive a verification code at the email address you provided. Upon entering this verification code, you will be signed into the application.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Advanced: The source CloudFormation template is available &lt;a href="https://s3.amazonaws.com/aws-bookstore-demo/master-fullstack.yaml"&gt;here&lt;/a&gt;. If you want to maintain low latency for your app, &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=MyBookstore&amp;amp;templateURL=https://s3.amazonaws.com/aws-bookstore-demo/master-fullstack-with-lambda-warmers.yaml"&gt;this deep link&lt;/a&gt; will create an identical stack, but with additional triggers to keep the Lambda functions "warm" (CloudFormation template &lt;a href="https://s3.amazonaws.com/aws-bookstore-demo/master-fullstack-with-lambda-warmers.yaml"&gt;here&lt;/a&gt;).  For more information, see the Considerations for demo purposes section.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  Cleaning up
&lt;/h3&gt;

&lt;p&gt;To tear down your application and remove all resources associated with AWS Bookstore Demo App, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log into the &lt;a href="https://console.aws.amazon.com/s3"&gt;Amazon S3 Console&lt;/a&gt; and  delete the buckets created for the demo app.

&lt;ul&gt;
&lt;li&gt;There should be two buckets created for AWS Bookstore Demo App.  The buckets will be titled "X" and "X-pipeline", where "X" is the name you specified in the CloudFormation wizard under the AssetsBucketName parameter.
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note: Please be &lt;strong&gt;very careful&lt;/strong&gt; to only delete the buckets associated with this app that you are absolutely sure you want to delete.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Log into the AWS CloudFormation Console and find the stack you created for the demo app.&lt;/li&gt;
&lt;li&gt;Delete the stack.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Remember to shut down/remove all related resources once you are finished to avoid ongoing charges to your AWS account.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;



&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Summary diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N9iQd0ju--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7x8c6xtmtlpwsgehkin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N9iQd0ju--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7x8c6xtmtlpwsgehkin.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-level, end-to-end diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rSNfrtE3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xpg0t71msxgwh1j7lyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rSNfrtE3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xpg0t71msxgwh1j7lyz.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Build artifacts are stored in an S3 bucket where web application assets are maintained (like book cover photos, web graphics, etc.). Amazon CloudFront caches the frontend content from S3, presenting the application to the user via a CloudFront distribution.  The frontend interacts with Amazon Cognito and Amazon API Gateway only.  Amazon Cognito is used for all authentication requests, whereas API Gateway (and Lambda) is used for all API calls interacting across DynamoDB, Elasticsearch, ElastiCache, and Neptune. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core of the backend infrastructure consists of Amazon Cognito, Amazon DynamoDB, AWS Lambda, and Amazon API Gateway. The application leverages Amazon Cognito for user authentication, and Amazon DynamoDB to store all of the data for books, orders, and the checkout cart. As books and orders are added, Amazon DynamoDB Streams push updates to AWS Lambda functions that update the Amazon Elasticsearch cluster and Amazon ElastiCache for Redis cluster.  Amazon Elasticsearch powers search functionality for books, and Amazon Neptune stores information on a user's social graph and book purchases to power recommendations. Amazon ElastiCache for Redis powers the books leaderboard. &lt;/p&gt;
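The Streams-to-leaderboard step above can be sketched as follows. This is an illustrative stub, not the deployed function: the real code lives in the functions folder and writes to ElastiCache with a Redis client, which an in-memory map stands in for here, and all names are assumptions.

```javascript
// Simplified sketch of a Streams-triggered leaderboard update.
// An in-memory map stands in for the Redis sorted set in ElastiCache.
const leaderboard = new Map(); // bookId -> total quantity purchased

function zincrby(board, increment, member) {
  // Mirrors Redis ZINCRBY: bump a member's score, creating it if absent.
  board.set(member, (board.get(member) || 0) + increment);
}

// Entry point for a DynamoDB Streams event from the Orders table.
const handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName !== 'INSERT') continue; // only count new orders
    for (const item of record.dynamodb.NewImage.books.L) {
      zincrby(leaderboard, Number(item.M.quantity.N), item.M.bookId.S);
    }
  }
  return { statusCode: 200 };
};
```

A "Top 20" list then falls out of sorting the scores, which Redis does natively with ZREVRANGE.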

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3TeAGRLE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o1t5hn8hkl73n9zhjpo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3TeAGRLE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o1t5hn8hkl73n9zhjpo9.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer Tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The code is hosted in AWS CodeCommit. AWS CodePipeline builds the web application using AWS CodeBuild. After successfully building, CodeBuild copies the build artifacts into an S3 bucket where the web application assets are maintained (like book cover photos, web graphics, etc.). Along with uploading to Amazon S3, CodeBuild invalidates the cache so users always see the latest experience when accessing the storefront through the Amazon CloudFront distribution.  AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild are used in the deployment and update processes only, not while the application is in a steady-state of use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gSbsDSyf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orriog072lh9x1aowq0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gSbsDSyf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orriog072lh9x1aowq0b.png" alt="Image description" width="880" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;



&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Implementation details
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Note: The provided CloudFormation template contains only a portion of the resources needed to create and run the application.  There are web assets (images, etc.), Lambda functions, and other resources called from the template to create the full experience.  These resources are stored in a public-facing S3 bucket and referenced in the template.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  Amazon DynamoDB
&lt;/h3&gt;

&lt;p&gt;The backend of AWS Bookstore Demo App leverages Amazon DynamoDB to enable dynamic scaling and the ability to add features as we rapidly improve our e-commerce application. The application creates three tables in DynamoDB: Books, Orders, and Cart.  DynamoDB's primary key consists of a partition (hash) key and an optional sort (range) key. The primary key (partition and sort key together) must be unique.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Books Table:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;BooksTable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;primary&lt;/span&gt; &lt;span class="nx"&gt;partition&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;GSI&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;cover&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; 
  &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
  &lt;span class="nx"&gt;rating&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The table's partition key is the ID attribute of a book. The partition key allows you to look up a book with just the ID. Additionally, there is a global secondary index (GSI) on the category attribute. The GSI allows you to run a query on the category attribute and build the books by category experience. &lt;/p&gt;
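A category query against that GSI can be sketched as a DynamoDB Query request like the following. Only the request object is built here; the index name and the client call are assumptions, not the deployed code.

```javascript
// Sketch of the Query request a ListBooks-style function might issue
// against the GSI on the category attribute (index name is illustrative).
function buildCategoryQuery(tableName, category) {
  return {
    TableName: tableName,
    IndexName: 'category-index',           // GSI on the category attribute
    KeyConditionExpression: 'category = :c',
    ExpressionAttributeValues: { ':c': category },
  };
}
```

With the AWS SDK DocumentClient (an assumption about the client in use), `docClient.query(buildCategoryQuery('Books', 'Cookbooks'))` would return every book in that category.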

&lt;p&gt;For future updates to the application, we plan to return the results of a search/filter by category via Elasticsearch.  Additionally, there is no “description” attribute, as this sample application does not feature pages for individual books.  This may be something users wish to add.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orders Table:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;OrdersTable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;primary&lt;/span&gt; &lt;span class="nx"&gt;partition&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;orderId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;primary&lt;/span&gt; &lt;span class="nx"&gt;sort&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;books&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bookDetail&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="nx"&gt;orderDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;date&lt;/span&gt; 
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;bookDetail&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Orders table's partition key is the customer ID, which allows us to look up all of a customer's orders with just their ID. &lt;/p&gt;
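The composite key (customerId plus orderId) supports both access patterns; a minimal sketch, with illustrative table names and request shapes, not the deployed code:

```javascript
// 1) All orders for a customer: a Query on the partition key alone.
function buildOrdersQuery(customerId) {
  return {
    TableName: 'Orders',
    KeyConditionExpression: 'customerId = :c',
    ExpressionAttributeValues: { ':c': customerId },
  };
}

// 2) One specific order: a GetItem needs the full key, partition + sort.
function buildGetOrder(customerId, orderId) {
  return {
    TableName: 'Orders',
    Key: { customerId: customerId, orderId: orderId },
  };
}
```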

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cart Table:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;CartTable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;primary&lt;/span&gt; &lt;span class="nx"&gt;partition&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;primary&lt;/span&gt; &lt;span class="nx"&gt;sort&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cart table stores information about a customer's saved cart.&lt;/p&gt;
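An AddToCart-style write against this table can be sketched as an UpdateItem request: the composite key (customerId plus bookId) identifies the line item, and an ADD clause increments the quantity atomically on repeat adds. This is a hypothetical shape, not the deployed function.

```javascript
// Sketch of an upsert for one cart line (table name is illustrative).
function buildAddToCart(customerId, bookId, price, qty) {
  return {
    TableName: 'Cart',
    Key: { customerId: customerId, bookId: bookId },
    // SET records the price; ADD creates or increments quantity atomically.
    UpdateExpression: 'SET price = :p ADD quantity :q',
    ExpressionAttributeValues: { ':p': price, ':q': qty },
  };
}
```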

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon API Gateway
&lt;/h3&gt;

&lt;p&gt;Amazon API Gateway acts as the interface layer between the frontend (Amazon CloudFront, Amazon S3) and AWS Lambda, which calls the backend (databases, etc.). Below are the different APIs the application uses:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Books (DynamoDB)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GET /books (ListBooks)&lt;br&gt;&lt;br&gt;
GET /books/{:id} (GetBook)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cart (DynamoDB)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GET /cart (ListItemsInCart)&lt;br&gt;&lt;br&gt;
POST /cart (AddToCart)&lt;br&gt;&lt;br&gt;
PUT /cart (UpdateCart)&lt;br&gt;&lt;br&gt;
DELETE /cart (RemoveFromCart)&lt;br&gt;&lt;br&gt;
GET /cart/{:bookId} (GetCartItem)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orders (DynamoDB)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GET /orders (ListOrders)&lt;br&gt;&lt;br&gt;
POST /orders (Checkout)  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Sellers (ElastiCache)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GET /bestsellers (GetBestSellers)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommendations (Neptune)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GET /recommendations (GetRecommendations)&lt;br&gt;&lt;br&gt;
GET /recommendations/{bookId} (GetRecommendationsByBook)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Search (Elasticsearch)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GET /search (SearchES)&lt;/p&gt;
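Behind GET /search, the SearchES function presumably issues a full-text query against the Elasticsearch domain. A sketch of one plausible query body follows; the multi_match field names mirror the Books schema above but are assumptions about the actual index mapping.

```javascript
// Sketch of an Elasticsearch query body for the storefront search box.
// multi_match searches several fields with one term (field names assumed).
function buildSearchBody(term) {
  return {
    query: {
      multi_match: {
        query: term,
        fields: ['name', 'author', 'category'],
      },
    },
  };
}
```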

&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  AWS Lambda
&lt;/h3&gt;

&lt;p&gt;AWS Lambda is used in a few different places to run the application, as shown in the architecture diagram.  The important Lambda functions that are deployed as part of the template are shown below, and available in the &lt;a href="https://dev.to/functions"&gt;functions&lt;/a&gt; folder.  In the cases where the response fields are blank, the application will return a statusCode 200 or 500 for success or failure, respectively.&lt;/p&gt;
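The 200/500 convention amounts to a small wrapper around each function's work. The following is a minimal sketch of that envelope, not the deployed code, which lives in the functions folder.

```javascript
// Minimal sketch: run the handler's work and map success/failure onto
// the statusCode 200/500 convention described above.
async function withStatus(work) {
  try {
    const body = await work();
    return { statusCode: 200, body: JSON.stringify(body) };
  } catch (err) {
    return { statusCode: 500, body: JSON.stringify({ message: err.message }) };
  }
}
```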

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ListBooks&lt;/strong&gt;&lt;br&gt;
Lambda function that lists the books in the specified product category&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;ListBooksRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;optional&lt;/span&gt; &lt;span class="nx"&gt;parameter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;ListBooksResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;books&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;book&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;book&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; 
    &lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;rating&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;cover&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GetBook&lt;/strong&gt;&lt;br&gt;
Lambda function that will return the properties of a book.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetBookRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetBookResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; 
    &lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;rating&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;cover&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ListItemsInCart&lt;/strong&gt;&lt;br&gt;
Lambda function that lists the items in the user's cart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;ListItemsInCartRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;ListItemsInCartResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;order&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AddToCart&lt;/strong&gt;&lt;br&gt;
Lambda function that adds a specified book to the user's cart.  Price is included in this function's request so that it is written to the cart table in DynamoDB.  This allows the price in the cart to differ from the price in the catalog (i.e. the Books table), for example due to discounts or coupons.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;AddToCartRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;AddToCartResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RemoveFromCart&lt;/strong&gt;&lt;br&gt;
Lambda function that removes a given book from the user's cart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;RemoveFromCartRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;RemoveFromCartResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GetCartItem&lt;/strong&gt;&lt;br&gt;
Lambda function that returns the details of a given item in the user's cart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetCartItemRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetCartItemResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UpdateCart&lt;/strong&gt;&lt;br&gt;
Lambda function that updates the user's cart with a new quantity of a given book.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;UpdateCartRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;UpdateCartResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ListOrders&lt;/strong&gt;&lt;br&gt;
Lambda function that lists the orders for a user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;ListOrdersRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;ListOrdersResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; 
    &lt;span class="nx"&gt;orderId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;orderDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;date&lt;/span&gt;
    &lt;span class="nx"&gt;books&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bookDetail&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;bookDetail&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checkout&lt;/strong&gt;&lt;br&gt;
Lambda function that moves the contents of a user's cart (the books) into the checkout flow, where you can then integrate with payment, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;CheckoutRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;books&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bookDetail&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;bookDetail&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
    &lt;span class="nx"&gt;quantity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;CheckoutResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition to the above, the &lt;em&gt;Checkout&lt;/em&gt; Lambda function acts as a sort of mini-workflow with the following tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add all books from the Cart table to the Orders table&lt;/li&gt;
&lt;li&gt;Remove all entries from the Cart table for the requested customer ID&lt;/li&gt;
&lt;/ol&gt;
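&lt;p&gt;As a rough in-memory sketch of those two tasks (the Cart and Orders tables are stand-ins here; a real Lambda would use the AWS SDK's PutItem and DeleteItem calls instead):&lt;/p&gt;

```javascript
// Sketch of the Checkout mini-workflow with in-memory stand-ins for the
// DynamoDB Cart and Orders tables. The table shapes follow the cart item
// and order schemas shown earlier in this post.
function checkout(cartTable, ordersTable, customerId, orderId, orderDate) {
  // 1. Gather the customer's cart entries as bookDetail records.
  const books = cartTable
    .filter((item) => item.customerId === customerId)
    .map(({ bookId, price, quantity }) => ({ bookId, price, quantity }));

  // Add a single order containing those books to the Orders table.
  ordersTable.push({ customerId, orderId, orderDate, books });

  // 2. Remove all of the customer's entries from the Cart table.
  return cartTable.filter((item) => item.customerId !== customerId);
}
```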

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GetBestSellers&lt;/strong&gt;&lt;br&gt;
Lambda function that returns a list of the best-sellers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetBestSellersRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetBestSellersResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookIds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GetRecommendations&lt;/strong&gt;&lt;br&gt;
Lambda function that returns a list of recommended books based on the purchase history of a user's friends.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetRecommendationsRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetRecommendationsResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;recommendations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;recommendation&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;recommendation&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
    &lt;span class="nx"&gt;friendsPurchased&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="nx"&gt;purchases&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GetRecommendationsByBook&lt;/strong&gt;&lt;br&gt;
Lambda function that returns a list of friends who have purchased this book as well as the total number of times it was purchased by those friends.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetRecommendationsByBookRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;bookId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;GetRecommendationsByBookResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;friendsPurchased&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="nx"&gt;purchased&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;customerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other Lambda functions&lt;/strong&gt;&lt;br&gt;
There are a few other Lambda functions used to make AWS Bookstore Demo App work, and they are listed here:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search - Lambda function that returns a list of books based on the search parameters provided in the request.&lt;/li&gt;
&lt;li&gt;UpdateSearchCluster - Lambda function that updates the Elasticsearch cluster when new books are added to the store.&lt;/li&gt;
&lt;li&gt;UpdateBestSellers - Lambda function that updates the best-sellers leaderboard in the ElastiCache for Redis cluster as orders are placed.&lt;/li&gt;
&lt;/ol&gt;
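&lt;p&gt;To illustrate the UpdateSearchCluster idea, a DynamoDB Streams record for the Books table could be flattened into a search document like this (the record shape follows the standard DynamoDB Streams format; the call that actually indexes the document into the cluster is omitted):&lt;/p&gt;

```javascript
// Convert a DynamoDB Streams record for the Books table into a flat
// document suitable for indexing in a search cluster. Only the fields
// used by search (name, author, category) are kept.
function streamRecordToSearchDoc(record) {
  if (record.eventName !== 'INSERT' && record.eventName !== 'MODIFY') {
    return null; // in this sketch, only additions and updates are indexed
  }
  const image = record.dynamodb.NewImage; // DynamoDB attribute-value format
  return {
    id: image.id.S,
    name: image.name.S,
    author: image.author.S,
    category: image.category.S,
  };
}
```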

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon ElastiCache for Redis
&lt;/h3&gt;

&lt;p&gt;Amazon ElastiCache for Redis is used to provide the best sellers/leaderboard functionality.  In other words, the books that are the most ordered will be shown dynamically at the top of the best sellers list. &lt;/p&gt;

&lt;p&gt;For the purposes of creating the leaderboard, AWS Bookstore Demo App utilizes &lt;a href="https://redis.io/commands/zincrby"&gt;ZINCRBY&lt;/a&gt;, which &lt;em&gt;“Increments the score of member in the sorted set stored at key by increment. If member does not exist in the sorted set, it is added with increment as its score (as if its previous score was 0.0). If key does not exist, a new sorted set with the specified member as its sole member is created.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The information to populate the leaderboard is provided from DynamoDB via DynamoDB Streams.  Whenever an order is placed (and subsequently created in the &lt;strong&gt;Orders&lt;/strong&gt; table), this is streamed to Lambda, which updates the cache in ElastiCache for Redis.  The Lambda function used to pass this information is &lt;strong&gt;UpdateBestSellers&lt;/strong&gt;. &lt;/p&gt;
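&lt;p&gt;The leaderboard behavior can be sketched with an in-memory stand-in for the Redis sorted set; &lt;code&gt;zincrby&lt;/code&gt; below mirrors the documented semantics quoted above (create the member with the increment as its score if absent, otherwise add to it):&lt;/p&gt;

```javascript
// In-memory sketch of the ZINCRBY semantics used by UpdateBestSellers:
// each time an order for a book streams in, that book's score is
// incremented, and the highest-scoring books form the best-sellers list.
function zincrby(sortedSet, increment, member) {
  sortedSet[member] = (sortedSet[member] || 0) + increment; // 0.0 if absent
  return sortedSet[member];
}

// Return the n highest-scoring members, as a leaderboard would show them.
function topMembers(sortedSet, n) {
  return Object.entries(sortedSet)
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([member]) => member);
}
```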

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Neptune
&lt;/h3&gt;

&lt;p&gt;Neptune provides a social graph that consists of users and books.  Recommendations are only provided for books that have been purchased (i.e. they appear in the list of orders). The “top 5” book recommendations are shown on the bookstore homepage. &lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Elasticsearch Service
&lt;/h3&gt;

&lt;p&gt;Amazon Elasticsearch Service powers the search capability in the bookstore web application, available towards the top of every screen in a search bar.  Users can search by title, author, and category. The template creates a search domain in the Elasticsearch service.&lt;/p&gt;
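&lt;p&gt;A Search Lambda along these lines would build a query body covering those fields; the sketch below uses a hypothetical multi-field query (the field names follow the book schema used elsewhere in this post, not necessarily the deployed index mapping):&lt;/p&gt;

```javascript
// Build an Elasticsearch-style query body that searches the title (name),
// author, and category fields for the user's search text. The actual
// Search Lambda may use different query types or field boosts.
function buildSearchQuery(searchText) {
  return {
    query: {
      multi_match: {
        query: searchText,
        fields: ['name', 'author', 'category'],
      },
    },
  };
}
```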

&lt;p&gt;It is important that a service-linked role is created first (included in the CloudFormation template).&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  AWS IAM
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;ListBooksLambda&lt;/strong&gt;&lt;br&gt;
AWSLambdaBasicExecutionRole&lt;br&gt;&lt;br&gt;
dynamodb:Scan - table/Books/index/category-index&lt;br&gt;&lt;br&gt;
dynamodb:Query - table/Books&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GetBookLambda&lt;/strong&gt;&lt;br&gt;
AWSLambdaBasicExecutionRole&lt;br&gt;&lt;br&gt;
dynamodb:GetItem - table/Books&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ListItemsInCartLambda&lt;/strong&gt;&lt;br&gt;
AWSLambdaBasicExecutionRole&lt;br&gt;&lt;br&gt;
dynamodb:Query - table/Cart&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AddToCartLambda&lt;/strong&gt;&lt;br&gt;
AWSLambdaBasicExecutionRole&lt;br&gt;&lt;br&gt;
dynamodb:PutItem - table/Cart&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UpdateCartLambda&lt;/strong&gt;&lt;br&gt;
AWSLambdaBasicExecutionRole&lt;br&gt;&lt;br&gt;
dynamodb:UpdateItem - table/Cart&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ListOrdersLambda&lt;/strong&gt;&lt;br&gt;
AWSLambdaBasicExecutionRole&lt;br&gt;&lt;br&gt;
dynamodb:Query - table/Orders&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CheckoutLambda&lt;/strong&gt;&lt;br&gt;
AWSLambdaBasicExecutionRole&lt;br&gt;&lt;br&gt;
dynamodb:PutItem - table/Orders&lt;br&gt;&lt;br&gt;
dynamodb:DeleteItem - table/Cart&lt;/p&gt;
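&lt;p&gt;As a concrete illustration, the &lt;strong&gt;CheckoutLambda&lt;/strong&gt; permissions above would translate into IAM policy statements roughly like the following (the region and account ID are placeholders):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:DeleteItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Cart"
    }
  ]
}
```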

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Cognito
&lt;/h3&gt;

&lt;p&gt;Amazon Cognito handles user account creation and login for the bookstore application.  For the purposes of the demo, the bookstore is only available to browse after login, which represents one common web app architecture.  You could also split the architecture so that portions of the web app are publicly available while others require login.&lt;/p&gt;

&lt;p&gt;User Authentication&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email address&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Amazon Cognito passes the CognitoIdentityID (which AWS Bookstore Demo App uses as the Customer ID) along with every request from Amazon API Gateway to Lambda, which lets the backend services verify which user is performing each action.&lt;/p&gt;
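&lt;p&gt;In a Node.js Lambda behind the standard API Gateway proxy integration, that identity can be read from the request context, as in this sketch:&lt;/p&gt;

```javascript
// Read the Cognito identity ID (used here as the customer ID) from an
// API Gateway proxy event, so every table operation can be scoped to
// the authenticated caller.
function customerIdFromEvent(event) {
  const identity = event.requestContext && event.requestContext.identity;
  if (!identity || !identity.cognitoIdentityId) {
    throw new Error('Unauthenticated request: no Cognito identity');
  }
  return identity.cognitoIdentityId;
}
```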

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon CloudFront and Amazon S3
&lt;/h3&gt;

&lt;p&gt;Amazon CloudFront hosts the web application frontend that users interface with.  This includes web assets like pages and images.  For demo purposes, CloudFormation pulls these resources from S3.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon VPC
&lt;/h3&gt;

&lt;p&gt;Amazon VPC (Virtual Private Cloud) is used with Amazon Elasticsearch Service, Amazon ElastiCache for Redis, and Amazon Neptune.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon CloudWatch
&lt;/h3&gt;

&lt;p&gt;The capabilities provided by CloudWatch are not exposed to the end users of the web app; rather, the developer/administrator can use CloudWatch logs, alarms, and graphs to track the usage and performance of the web application.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CodeCommit, AWS CodePipeline, AWS CodeBuild
&lt;/h3&gt;

&lt;p&gt;Similar to CloudWatch, the capabilities provided by CodeCommit, CodePipeline, and CodeBuild are not exposed to the end users of the web app.  The developer/administrator can use these tools to help stage and deploy the application as it is updated and improved.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Running your web application locally
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;If you haven't set up Git credentials for AWS CodeCommit before, head to the IAM console; if you have, skip to step 5. &lt;/li&gt;
&lt;li&gt;Choose your IAM user.&lt;/li&gt;
&lt;li&gt;Choose the &lt;strong&gt;Security credentials&lt;/strong&gt; tab. Scroll to the bottom and choose &lt;strong&gt;Generate&lt;/strong&gt; underneath &lt;strong&gt;HTTPS Git credentials for AWS CodeCommit&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download and save these credentials. You will use these credentials when cloning your repository. &lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to the CodeCommit console and find your code repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the HTTPS button underneath the &lt;strong&gt;Clone URL&lt;/strong&gt; column. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open up your terminal, type &lt;code&gt;git clone&lt;/code&gt;, paste the Clone URL, and hit Enter. &lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the repository has been cloned, run &lt;code&gt;npm install&lt;/code&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After all dependencies have been downloaded, run &lt;code&gt;npm run start&lt;/code&gt;.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You're done! Any future updates you make to your repository will get pushed to your code pipeline automatically and published to your web application endpoint. &lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Considerations for demo purposes
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In order to make AWS Bookstore Demo App an effective demonstration from the moment it is created, the CloudFormation template kicks off a Lambda function we wrote to pre-load a list of books into the product catalog (the Books table in DynamoDB).  In the same way, we used a Lambda function to pre-load sample friends (into Neptune) and manually populated the list of Best Sellers (on the front page only).  This enables you to sign up as a new user and immediately see what the running store would look like, including recommendations based on what friends have purchased and what the best-selling books section does.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will notice that the Past orders and Best sellers pages are empty at first run.  These are updated as soon as an order is placed. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the purposes of this demo, we did not include a method to add or remove friends, and decided that every new user will be friends with everyone else (not the most realistic, but effective for this demo).  You are welcome to play around with changing this, adding friend control functionality, or manually editing friendships via the bookstore-friends-edges.csv file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Web assets (pages, images, etc.) are pulled from a public S3 bucket via the CloudFormation template to create the frontend for AWS Bookstore Demo App.  When building your own web application (or customizing this one), you will likely pull from your own S3 buckets.  If you customize the Lambda functions, you will want to store these separately, as well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Checkout is a simplified demo experience that customers can extend by integrating a real-world payment processing platform.  Similarly, the &lt;em&gt;View Receipt&lt;/em&gt; button after purchase is non-functional, meant to demonstrate how you can add on to the app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The CloudFormation template referenced in #2 of the Getting started section is everything you need to create the full-stack application.  However, when the application is newly created, or hasn't been used in some time, it may take a few extra seconds to run the Lambda functions, which increases the latency of operations like search and listing books.  If you want to maintain low latency for your app, &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=MyBookstore&amp;amp;templateURL=https://s3.amazonaws.com/aws-bookstore-demo/master-fullstack-with-lambda-warmers.yaml"&gt;this deeplink&lt;/a&gt; creates an identical stack but with additional triggers to keep the Lambda functions "warm."  Given that these triggers make the Lambda functions run more frequently (every 10 minutes, on a schedule), this will add a small amount to the overall cost to run the application.  The benefit is a more responsive application even when the Lambda functions are not being regularly called by user activity.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Known limitations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The application was written for demonstration purposes and not for production use.&lt;/li&gt;
&lt;li&gt;Orders are backed by DynamoDB, but no mechanism exists to recreate the best sellers list in the unlikely scenario of a Redis failure.&lt;/li&gt;
&lt;li&gt;Upon the first use of a Lambda function, cold start times in a VPC can be slow. Once the Lambda function has been warmed up, performance will improve.  See #6 in Considerations for demo purposes for more information.&lt;/li&gt;
&lt;li&gt;The application is not currently designed for high availability. You can increase the availability of the application by configuring the Amazon Elasticsearch, Amazon Neptune, and Amazon ElastiCache clusters with multiple instances across multiple AZs.&lt;/li&gt;
&lt;li&gt;The application enables multiple users to sign in, but the social graph is single-user; as a result, different users will see the same social graph. Further, when new books are purchased, that state is not reflected in the social graph.&lt;/li&gt;
&lt;li&gt;There are some network errors observed on Firefox.  We are looking into this.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Additions, forks, and contributions
&lt;/h2&gt;

&lt;p&gt;We are excited that you are interested in using AWS Bookstore Demo App!  This is a great place to start if you are just beginning with AWS and want to get a functional application up and running.  It is equally useful if you are looking for a sample full-stack application to fork off of and build your own custom application.  We encourage developer participation via contributions and suggested additions.  Of course you are welcome to create your own version!&lt;/p&gt;

&lt;p&gt;Please see the &lt;a href="//CONTRIBUTING.md"&gt;contributing guidelines&lt;/a&gt; for more information.&lt;/p&gt;

&lt;p&gt;For a more basic example of a full-stack web application, check out &lt;strong&gt;&lt;a href="https://github.com/awslabs/aws-full-stack-template"&gt;AWS Full-Stack Template&lt;/a&gt;&lt;/strong&gt; upon which AWS Bookstore Demo App was built.  As mentioned in the Overview section, AWS Full-Stack Template provides the foundational services, components, and plumbing needed to get a basic web application up and running. Users can build on top of AWS Full-Stack Template to create any application they envision, whether a travel booking tool, a blog, or another web app.  This AWS Bookstore Demo App is just one example of what you can create using AWS Full-Stack Template.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>devops</category>
      <category>awssample</category>
    </item>
    <item>
      <title>Cost-Effective AWS Architectures for Wordpress</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Wed, 19 Oct 2022 11:33:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/cost-effective-aws-architectures-for-wordpress-3d2e</link>
      <guid>https://dev.to/aws-builders/cost-effective-aws-architectures-for-wordpress-3d2e</guid>
      <description>&lt;h1&gt;
  
  
  Cost-Effective AWS Architectures for Wordpress (and other websites)
&lt;/h1&gt;

&lt;p&gt;Cost-effective architectures on AWS for WordPress. Step through Amazon's reference architecture to see which solution is right for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bt1x30vw0vrk2etn2td.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bt1x30vw0vrk2etn2td.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Wordpress Reference Architecture
&lt;/h2&gt;

&lt;p&gt;For a beginner or someone looking to run a small blog or e-commerce site, this diagram is crazy complicated. I started writing this post because I wanted to understand the reasons behind recommending an architecture like this by breaking it down into the individual components and services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is all that stuff for?&lt;/li&gt;
&lt;li&gt;What do I really need?&lt;/li&gt;
&lt;li&gt;How much is it going to cost me?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post works through an example that starts with a simple single-server deployment and adds one AWS service or feature at a time to build up to the complex diagram at the top. We will look into the benefit and the cost of each one so that you can make educated decisions on how to deploy your architecture. The estimated costs are for comparative purposes; every deployment will be different. Be sure to analyze your unique situation before deploying services that will cost you money.&lt;/p&gt;

&lt;p&gt;In reality, most implementations will not be the simplest nor the most complex, but somewhere in the middle.&lt;/p&gt;

&lt;p&gt;There are sites like wordpress.com, Squarespace, and Wix that will host your site for you. Amazon also offers Lightsail, a managed solution for deploying WordPress (and other types of sites). One of these may be the cheapest and easiest solution, but there are also several reasons an individual or small business would want to run their own site. You may or may not want to use WordPress, and I’m not going to debate whether or not that is the right idea. This post has information that is applicable to many different types of sites hosted on AWS.&lt;/p&gt;

&lt;p&gt;In its simplest form, we could run WordPress on a single server, as in the diagram below. All we need is an EC2 instance running WordPress, a simple VPC with an internet gateway, and DNS configured to point our domain name to this server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpxlzau0yym7tizwjucj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpxlzau0yym7tizwjucj.png" alt="Image description" width="391" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So how does this simple diagram become the complicated one above? The short answer is that all that stuff provides scalability, availability, redundancy, speed, and configurability.&lt;/p&gt;

&lt;p&gt;If you are considering deploying or scaling up your site, I hope this post will help you decide the most cost-effective way to do it. What AWS Services can you take advantage of that will give you the best bang for your buck? And what services will leave you feeling taken advantage of with high costs and little return?&lt;/p&gt;

&lt;h4&gt;
  
  
  Wordpress Architecture
&lt;/h4&gt;

&lt;p&gt;Looking at the Wordpress application itself, it does four main things (architecturally speaking).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Hosts static content (CSS, JavaScript, theme files)&lt;/li&gt;
&lt;li&gt;Hosts dynamic content (using PHP - see below)&lt;/li&gt;
&lt;li&gt;Stores database content (blog posts and page content)&lt;/li&gt;
&lt;li&gt;Stores uploaded files (i.e., pictures) on the file system&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the simple setup, all of these functions are provided by the single EC2 host. As the architecture grows, you will see that we can offload each of these functions to other AWS services, as shown in the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xngss4kobse910jnr3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xngss4kobse910jnr3c.png" alt="Image description" width="561" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, we need to know a little bit about PHP and how it works. Wordpress is written in PHP, which is a server-side scripting language. This means that as users visit the site, the PHP code loads data from the database and dynamically creates the HTML needed for our web browsers to show content. The web server has to compute the HTML code every time for every page visited by every user. For high traffic sites, this can be a lot of computation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assumptions &amp;amp; Cost Estimations
&lt;/h2&gt;

&lt;p&gt;For this example, we need to make some assumptions. Let's say we are planning on building a personal blog with a healthy amount of traffic - around 10,000 visitors per month. This is a good amount of traffic for a personal blog or small business website, but still much less traffic than what some WordPress sites see. Costs for some of the services we add below are proportional to the traffic volume the site receives, while others are not dependent on traffic volume at all. I will point this out as we add the various services.&lt;/p&gt;

&lt;p&gt;EC2, RDS, and ElastiCache are all priced by instances and offer many different pricing options and payment plans. We will look at different instance types and sizes, but the price will always be based on the 1-year standard plan reserved instance. This will help us to compare apples to apples. However, you could potentially save more money with a different payment plan.&lt;/p&gt;

&lt;p&gt;Amazon also has different rates for some services based on region. For all the cost estimations in this post, I used the us-east-1 region prices, but this shouldn't affect things that much.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Simple Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The simple setup is Wordpress running on a single EC2 instance with your domain name pointed to your server's public IP address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff13j7ypuzp1epde811cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff13j7ypuzp1epde811cr.png" alt="Image description" width="391" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use Amazon's Route 53 or your own registrar's DNS service. Route 53 costs 50 cents per month, and your registrar (i.e., GoDaddy) is usually free. The EC2 instance cost can vary significantly depending on what instance type and purchasing plan you choose. Setting up a simple, low traffic site shouldn't require anything more significant than a t3.small instance, which costs $8.92/mo on the annual plan.&lt;/p&gt;

&lt;p&gt;It's worth noting that we have not addressed any backup scheme. This instance could die anytime, and you could lose everything: the entire database, all themes, and configs, all blog posts, etc. You should employ a backup strategy with this setup, but that is a separate topic for another day.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Estimated Monthly Cost: $9.42&lt;/em&gt;&lt;/p&gt;
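&lt;p&gt;As a quick sanity check, this first estimate is just the sum of the two line items above (Route 53 at $0.50 and the t3.small reserved instance at $8.92):&lt;/p&gt;

```shell
# Simple-setup total: Route 53 + t3.small (1-year reserved, us-east-1).
simple_total=$(awk 'BEGIN { printf "%.2f", 0.50 + 8.92 }')
echo "$simple_total"   # 9.42
```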

&lt;p&gt;&lt;strong&gt;2. Add CloudFront&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Adding CloudFront is one of the best things you can do for your website. CloudFront is a content delivery network (CDN). There are several CDN providers, but CloudFront is the service offered by Amazon. CDNs copy the content on your server to locations around the world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flla43gbvqgcpq2p1bkuo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flla43gbvqgcpq2p1bkuo.png" alt="Image description" width="401" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup has some distinct advantages to you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Your users will experience faster load times. With the edge locations likely closer to them than your primary server, they will see a speed boost.&lt;/li&gt;
&lt;li&gt;CloudFront offloads compute from your server, possibly allowing you to use a smaller instance size than you would otherwise need. Static content is cached and served to users directly from CloudFront's edge locations. (Your server still has to handle dynamic content and serve requests for static content from the CloudFront servers as they update their cache.) CloudFront can also terminate SSL/TLS, saving more compute capacity on your server.&lt;/li&gt;
&lt;li&gt;The speed boost seen by users is also recognized by search engines and will help boost your site's SEO. Pages that load faster will be ranked better and have a better chance of showing up at the top of Google search results.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $2.75 (rough estimate based on 10,000 page views)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $12.17&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Note that CloudFront costs can significantly increase with increased traffic volume. If you have monetized your site, increased traffic should mean more revenue for you, and this won't be an issue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Add RDS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Move your database off of the main EC2 instance to Amazon Aurora. Amazon Aurora is a MySQL-compatible, fully managed database from Amazon. In doing this, we need to create a new private subnet for the Aurora instance. Adding the subnet will not cost us anything more, but it will provide better security for your data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf4elye4qbx2ycom08px.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf4elye4qbx2ycom08px.png" alt="Image description" width="531" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using Aurora will give you the following benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;As a fully-managed database, you do not need to worry about OS patches or software upgrades. When you manage the database on your main EC2 instance, you should always make sure these things are up-to-date to avoid security vulnerabilities. Aurora handles this for you.&lt;/li&gt;
&lt;li&gt;Simple backups with more features and flexibility than the simple setup.&lt;/li&gt;
&lt;li&gt;Better security with a private subnet. Having your database run on a server that is publicly addressable on the internet is generally considered poor security practice. You can better defend an instance running in a private subnet using access control lists and security groups.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $19.83 (Aurora Instances only; no backup storage)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $32.00&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Add Multiple Availability Zones w/ Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So far, although we have added a couple of new services, we still have a few spots that could be a single point of failure. If the one EC2 instance or the one Aurora DB instance goes down, the site will be unavailable. To guard against this, you can add redundancy with a second EC2 instance and a read-replica instance for Aurora. A good way to implement this is to use a separate availability zone (AZ). This does come with increased costs. We need to add two additional servers as well as an application load balancer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tarwi8wu2fywy5yyvfq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tarwi8wu2fywy5yyvfq.png" alt="Image description" width="491" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even with the significant cost jump, this solution has the following benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EC2 server redundancy. If one of the EC2 servers fails, the other seamlessly picks up all the traffic.&lt;/li&gt;
&lt;li&gt;Aurora DB failover. As a managed database service, Aurora DB will automatically handle any failures. Unlike the EC2 configuration, where both servers are essentially the same, the database servers operate in a master and read-replica configuration. This means that data can be read from either server, but only written to the master. If the read-replica is the one that fails, the master handles all the traffic (both read and write). If the master fails, Aurora recognizes this and converts the read-replica to function as the master.&lt;/li&gt;
&lt;li&gt;Reduced server load. The application load balancer distributes users across the EC2 instances. Likewise, with 2 DB instances, reads are distributed between the two, reducing the load on each.&lt;/li&gt;
&lt;li&gt;Availability Zone separation. Some failures that could make your EC2 instance or DB unavailable are due to software bugs, running out of disk space, or other similar issues. Simple redundancy can mitigate these types of failures. But what if there's an issue that impacts the entire data center? Amazon availability zones are designed to be independent and separate. An event (power outage, weather, etc.) affecting their data center for one AZ will not impact any other AZ. When you deploy your solution to multiple AZs, you benefit from this logical and physical separation.&lt;/li&gt;
&lt;/ol&gt;
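&lt;p&gt;The distribution itself can be pictured as simple round-robin: request i goes to instance i mod N. A toy sketch with illustrative instance names:&lt;/p&gt;

```shell
# Round-robin: request i is handed to instance (i mod N).
instances=(web-az1 web-az2)   # one instance per availability zone
pick() { echo "${instances[$(( $1 % ${#instances[@]} ))]}"; }
for i in 0 1 2 3; do
  echo "request $i -> $(pick "$i")"
done
```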

&lt;p&gt;The reference architecture diagram shows the use of only two AZs, but you could use more to make your website even more robust. The template generator for this architecture assumes you are setting up three AZs. For costing this example, we will assume only two AZs. We need to add only one additional EC2 instance, one additional Aurora instance, and an application load balancer.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $45.18&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $77.18&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Add Auto-Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the previous architecture setup, we went from one to two EC2 instances. Adding auto-scaling allows you to spin up as many EC2 instances as you need to meet the current demand. For this relatively small website, two servers are plenty. The nice thing about auto-scaling is that you don't pay for it unless you use it. You can add auto-scaling to your site as a just-in-case measure to make sure your users always have access. If you are hosting a blog and a specific post goes viral, auto-scaling can provision servers to handle the spike in traffic, and it will only create additional cost while the spike lasts. Auto-scaling will then shut down unneeded servers when the spike is over.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdelmigl0ze0wkb371xke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdelmigl0ze0wkb371xke.png" alt="Image description" width="641" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this example, we could set up the auto-scaling group to be a minimum of 2 (the instances we set up in the last step) and a max of 10. If you generally have a low amount of traffic, only the two servers will be on, but your site will still handle large spikes in demand.&lt;/p&gt;
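&lt;p&gt;In effect, the scaling policy clamps the desired fleet size between the group's minimum and maximum. A sketch of that decision, assuming a made-up per-instance request capacity (real auto-scaling groups act on CloudWatch metrics such as CPU utilization):&lt;/p&gt;

```shell
# Clamp desired capacity to the auto-scaling bounds (min=2, max=10).
# The per-instance capacity figure is a made-up illustrative number.
desired_instances() {
  local load=$1 per_instance=$2 min=2 max=10
  local want=$(( (load + per_instance - 1) / per_instance ))  # ceiling division
  if [ "$want" -lt "$min" ]; then want=$min; fi
  if [ "$want" -gt "$max" ]; then want=$max; fi
  echo "$want"
}
desired_instances 50 100    # quiet day: prints 2 (the minimum)
desired_instances 2500 100  # viral spike: prints 10 (the maximum)
```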

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $0.00 (unless there's a spike in traffic)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $77.18&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Add Private Subnets w/ NAT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Move the EC2 instances to a private subnet. In a previous step, we created private subnets for the Aurora DB servers. There, we discussed the security benefit of putting servers into private subnets. We can do this for the EC2 instances as well. In this configuration, the application load balancer is publicly addressable on the internet, while all our other servers (so far) are not. While this adds security, it comes with a catch: the EC2 instances can no longer reach the internet! While this isn't a problem for your users, you need internet access to update and maintain the servers. This includes security and OS patches or software upgrades. To give these servers internet access, we need to add a NAT gateway or NAT instance. The NAT gateway is easier to configure and requires less maintenance; therefore, I recommend the NAT gateway over the NAT instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6cajyei9zaqc76n607b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6cajyei9zaqc76n607b.png" alt="Image description" width="751" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NAT gateways have a high cost compared to the other services we have implemented. The reference architecture adds a NAT to both subnets. Each will run about $40 per month if they run continuously. Here are a couple of things to know about NATs on AWS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A NAT gateway in one subnet can be accessed by instances in other private subnets, even in other AZs.&lt;/li&gt;
&lt;li&gt;You probably don't need to run NAT gateways all the time. Since they are used for outbound internet access only, and the use case is likely OS patches and software updates, you can turn them on only when performing those duties. If you can, I recommend not running them all the time, or using only one. If you only turn on these gateways when you need them, they will cost you almost nothing per month. However, since we are looking at the reference architecture, which runs two of them all the time, we will include their full-time use in the cost estimate; just know that you can do this cheaper.&lt;/li&gt;
&lt;/ol&gt;
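&lt;p&gt;The full-time figure follows from the hourly charge. Assuming the us-east-1 rate of $0.045 per NAT gateway hour (data-processing charges excluded) and a 730-hour month:&lt;/p&gt;

```shell
# Two NAT gateways, running continuously for a 730-hour month at an
# assumed $0.045/hour each (us-east-1; data processing billed separately).
nat_monthly=$(awk 'BEGIN { printf "%.2f", 0.045 * 730 * 2 }')
echo "$nat_monthly"   # 65.70
```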

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $65.70&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $142.88&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Add Caching w/ Memcached&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Memcached is an open-source, high-performance, distributed memory caching system. The AWS ElastiCache service provides a choice between Redis and Memcached for in-memory caching. We don't need to discuss the differences here, as the reference architecture specifies Memcached, so we will use that; it does not make a big difference in the pricing. A cache is meant to speed things up, so is it worth it? The two main benefits you can expect to get from using the cache are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Faster response times for your users (and for search engine SEO).&lt;/li&gt;
&lt;li&gt;Less burden on your database instances that could allow you to select smaller instance types.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsx6rsmjk16d9ivkbdce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsx6rsmjk16d9ivkbdce.png" alt="Image description" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The cache in this architecture sits between the EC2 instances and the database servers. This helps speed up the PHP code executing on the EC2 instances by returning database queries faster. This brings the most benefit for sites that are mostly static (like blogs and small business websites). Stress-test analysis demonstrates that using a cache like Memcached can significantly improve the performance of the server.&lt;/p&gt;
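&lt;p&gt;The pattern at work here is cache-aside: check the cache first, and only fall through to the database on a miss. A toy sketch in shell, where an associative array stands in for Memcached and query_db for a real SQL query:&lt;/p&gt;

```shell
# Cache-aside: serve from the cache when possible, otherwise query the
# database and remember the answer. The array stands in for Memcached.
declare -A cache
query_db() { echo "row-for-$1"; }   # stand-in for a real SQL query
get() {
  local key=$1
  if [[ -z ${cache[$key]+set} ]]; then
    cache[$key]=$(query_db "$key")  # miss: hit the database, then store
  fi
  echo "${cache[$key]}"
}
get post-42   # first call misses and fills the cache
get post-42   # second call is a cache hit
```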

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $31.67&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $174.55&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Add EFS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Elastic File System (EFS) is a shared file system (similar to NFS) that allows each of your EC2 instances access to the same shared disk space. Wordpress stores uploaded files (like pictures in blog posts) to the local filesystem. With multiple EC2 instances, a file may be uploaded to one instance and need to be accessed by another instance. Shared disk space is essential to simplify the EC2 configuration, especially when using autoscaling. Using EFS, you can make the web tier completely stateless, so running and scaling EC2 servers is as easy as just turning them on and off. There is no need to synchronize files between nodes in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21it8quyo1e7qfv0abuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21it8quyo1e7qfv0abuq.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main benefit of using EFS is the stateless Web tier. When EC2 instances startup, they mount the EFS drive and are off to the races. Without EFS, you need to find a solution that synchronizes files between your EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $15.00&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $189.55&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Add Bastion Host&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bastion hosts provide secure access to EC2 instances that are in private subnets. While the private subnets offer better security by blocking some attack vectors from the bad guys, they also prevent you from directly accessing the instances over the internet. The bastion host option could have been included with #6 (adding private subnets) in this post, but I wanted to discuss it separately.&lt;/p&gt;

&lt;p&gt;AWS recommends bastion hosts to provide better security, but they come with additional costs, both in the actual AWS bill you receive and in the administrative burden of more EC2 hosts (the whitepaper shows only one bastion host, but puts it in an auto-scaling group, suggesting that you might need to add more to handle increased load).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2oduh3rppvkevdfj8gj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2oduh3rppvkevdfj8gj.png" alt="Image description" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since only system administrators use these hosts to monitor and maintain the site, you would only need to set up auto-scaling for very busy sites. As an alternative, you could use public subnets combined with AWS Systems Manager, access control lists, security groups, CloudWatch, and CloudTrail, which may provide a cheaper way to achieve a similar security posture.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $8.83&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $198.38&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Add S3 for Static Content&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Simple Storage Service (S3) is one of the more popular services on AWS. It provides a simple interface for storing files in the cloud in "buckets." You are probably already familiar with this service. In addition to simply storing files, it can be used to host static websites at a very low cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4e2wygm6uur9xj5ffxt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4e2wygm6uur9xj5ffxt.png" alt="Image description" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can turn on website hosting for a specific bucket and add that bucket as an origin for CloudFront. This allows you to run a static website without any server at all. In this example, we will offload the static content hosting from the EC2 Wordpress instance by moving the static site files (CSS, JavaScript) to S3. This can be done using a WordPress plugin like W3 Total Cache. The main benefit of using S3 for static content is to lighten the load on the EC2 instance. With this last step implemented, the EC2 instance is now only responsible for processing the PHP and rendering the HTML pages for users.&lt;/p&gt;
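&lt;p&gt;Mechanically, such a plugin rewrites static asset URLs in the generated HTML so browsers fetch them from the bucket (or CloudFront) instead of the EC2 instance. A minimal sketch, with a hypothetical bucket domain:&lt;/p&gt;

```shell
# Rewrite static asset paths to a (hypothetical) S3 bucket domain, the
# way a plugin such as W3 Total Cache rewrites URLs in the rendered HTML.
rewrite_asset() {
  echo "$1" | sed 's|^/wp-content/|https://assets.example-bucket.s3.amazonaws.com/wp-content/|'
}
rewrite_asset /wp-content/themes/mytheme/style.css
```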

&lt;p&gt;&lt;em&gt;Added Monthly Cost: $5.00&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Total Monthly Cost: $203.38&lt;/em&gt;&lt;/p&gt;
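&lt;p&gt;As a final sanity check, the per-step additions quoted throughout this post sum back to this figure:&lt;/p&gt;

```shell
# Sum of every "Added Monthly Cost" in this post, starting from the
# $9.42 simple setup (auto-scaling adds $0.00).
total=$(awk 'BEGIN { printf "%.2f", 9.42 + 2.75 + 19.83 + 45.18 + 0.00 + 65.70 + 31.67 + 15.00 + 8.83 + 5.00 }')
echo "$total"   # 203.38
```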

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There we have it. You can run a WordPress site using the Amazon reference architecture for about $200 per month. Remember, I am not recommending all of these services for everyone. I hope that this guide helps you choose the right services for you.&lt;/p&gt;

&lt;p&gt;Other topics to explore (maybe for another time) as you implement WordPress on AWS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Backups - Depending on how you back up the database and file system content, you could add high costs.&lt;/li&gt;
&lt;li&gt;WAF - Web Application Firewall can provide additional security and defend against denial-of-service attacks. I am surprised WAF is not included in the reference architecture, but it may be something to consider.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thanks for reading! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://d1.awsstatic.com/whitepapers/wordpress-best-practices-on-aws.pdf" rel="noopener noreferrer"&gt;References&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author:  Hoang Kha&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>wordpress</category>
      <category>architecture</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Run event-driven workflows with Amazon EKS and AWS Step Functions</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Sun, 04 Sep 2022 14:36:35 +0000</pubDate>
      <link>https://dev.to/vanhoangkha14052000/run-event-driven-workflows-with-amazon-eks-and-aws-step-functions-4d12</link>
      <guid>https://dev.to/vanhoangkha14052000/run-event-driven-workflows-with-amazon-eks-and-aws-step-functions-4d12</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Event-driven computing is a common pattern in modern application development with microservices, and it is a great fit for building resilient and scalable software in AWS. Event-driven applications are push-based: they run on demand when an event triggers the functional workflow. Tools that help you minimize resource usage and reduce costs are essential. Instead of running systems continuously while you wait for an event to occur, event-driven applications are more efficient because they start when the event occurs and terminate when processing completes. Additionally, event-driven architectures that follow the Smart Endpoints and Dumb Pipes pattern further decouple services, which makes it easier to develop, scale, and maintain complex systems.&lt;/p&gt;

&lt;p&gt;This post demonstrates a proof-of-concept implementation that uses Kubernetes to execute code in response to an event. The workflow is powered by AWS Step Functions, which is a low-code, visual workflow service that helps you build distributed applications using AWS services. &lt;strong&gt;AWS Step Functions integrates with Amazon Elastic Kubernetes Service (Amazon EKS)&lt;/strong&gt;, making it easy to build event-driven workflows that orchestrate jobs running on Kubernetes with AWS services, such as &lt;strong&gt;AWS Lambda, Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS)&lt;/strong&gt;, with minimal code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Calling Amazon EKS with AWS Step Functions&lt;/strong&gt;&lt;br&gt;
The AWS Step Functions integration with Amazon EKS lets you build workflows that create and delete resources in your Amazon EKS cluster. You also benefit from built-in error handling that deals with task failures or transient issues.&lt;/p&gt;

&lt;p&gt;AWS Step Functions provides the eks:runJob service integration, which allows you to run a job on your Amazon EKS cluster. The eks:runJob.sync variant waits for the job to complete and retrieves its logs.&lt;/p&gt;

&lt;p&gt;We use AWS Step Functions to orchestrate an AWS Lambda function and a Map state ("Type": "Map") that runs a set of steps for each element of an input array. A Map state executes the same steps for each entry of the array supplied in the state input.&lt;/p&gt;
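&lt;p&gt;The fan-out behavior of a Map state can be mimicked in plain shell: run the same worker over every element of an input array, in parallel. The inputs and worker command below are illustrative:&lt;/p&gt;

```shell
# Run the same worker once per input element, up to 3 at a time,
# mirroring a Map state's per-element fan-out (inputs are illustrative).
results=$(printf '%s\n' part-1 part-2 part-3 \
  | xargs -I{} -P3 echo "processed {}" | sort)
echo "$results"
```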

&lt;p&gt;&lt;strong&gt;Solution overview&lt;/strong&gt;&lt;br&gt;
The following diagram demonstrates the solution to run a sample event-driven workflow using Amazon EKS and AWS Step Functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uEHV869a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7eeol4gya9podod7oh8z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEHV869a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7eeol4gya9podod7oh8z.jpg" alt="Image description" width="880" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this demonstration, we use the AWS Cloud Development Kit (AWS CDK) and first deploy a set of AWS CDK stacks to create the necessary infrastructure, as shown in the previous diagram. The AWS Step Functions workflow is invoked when an input file appears in the configured Amazon Simple Storage Service (Amazon S3) bucket.&lt;/p&gt;

&lt;p&gt;AWS Step Functions starts the following process when a new file appears in the Amazon S3 bucket:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS Step Functions creates a File Splitter Kubernetes job that runs in the Amazon EKS cluster. The job reads the input file from the Amazon S3 bucket and splits the large input file into smaller files, saving them to an Amazon Elastic File System (Amazon EFS) persistent volume.&lt;/li&gt;
&lt;li&gt;The File Splitter job uses the unix split command to chunk the large file into smaller ones, with each file containing a maximum of 30,000 lines (configured via the MAX_LINES_PER_BATCH environment variable).&lt;/li&gt;
&lt;li&gt;The File Splitter job saves the paths of the split files in Amazon ElastiCache (Redis), where they are used for tracking the overall progress of the job.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;[Screenshot: the format of the data stored in the Redis cache]&lt;/em&gt;&lt;/p&gt;
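&lt;p&gt;The chunking step can be reproduced locally with the same unix split invocation the job relies on; the sample input and paths below are illustrative:&lt;/p&gt;

```shell
# Chunk a large input into files of at most MAX_LINES_PER_BATCH lines,
# as the File Splitter job does (sample input and paths are illustrative).
MAX_LINES_PER_BATCH=${MAX_LINES_PER_BATCH:-30000}
workdir=$(mktemp -d)
seq 1 70000 > "$workdir/input.csv"        # 70,000-line stand-in input
split -l "$MAX_LINES_PER_BATCH" "$workdir/input.csv" "$workdir/chunk_"
ls "$workdir"/chunk_* | wc -l             # 3 chunk files
```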

&lt;p&gt;AWS Step Functions then invokes the Split-file-lambda AWS Lambda function, which reads the Redis cache and returns an array of split file locations as the response.&lt;br&gt;
AWS Step Functions then runs a Map state that takes the split files array as input and creates parallel Kubernetes jobs in your Amazon EKS cluster to process the split files in parallel, with MaxConcurrency = 0 (no concurrency limit). Each Kubernetes job receives one split file as input and performs the following:&lt;br&gt;
Reads the individual split file from the Amazon EFS location.&lt;br&gt;
Processes each row in the file, generates a ConfirmationId for each OrderId in the input file, and inserts this information into the orders Amazon DynamoDB table. All DynamoDB writes are batched, with a maximum of 25 rows per request.&lt;br&gt;
Creates a comma-separated values (CSV) file in an Amazon EFS location, with each row containing the ConfirmationId and OrderId written in that batch.&lt;br&gt;
Updates Amazon ElastiCache by removing the split file path from the Redis set using the rdb.SRem command.&lt;br&gt;
Finally, the output split files in the Amazon EFS directory are merged and uploaded to the Amazon S3 bucket.&lt;br&gt;
It is important to settle on the right value for the maximum number of lines a split input file can contain, set via the MAX_LINES_PER_BATCH environment variable. Too small a value produces too many split files and therefore too many Kubernetes jobs, whereas too large a value leaves too little scope for parallelism.&lt;br&gt;
&lt;strong&gt;Walkthrough&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
You need the following to complete the steps in this post:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI version 2&lt;/li&gt;
&lt;li&gt;AWS CDK version 2.19.0 or later&lt;/li&gt;
&lt;li&gt;yarn version 1.22.0&lt;/li&gt;
&lt;li&gt;Node version 17.8.0 or later&lt;/li&gt;
&lt;li&gt;NPM version 8.5.0 or later&lt;/li&gt;
&lt;li&gt;Docker CLI&lt;/li&gt;
&lt;li&gt;kubectl&lt;/li&gt;
&lt;li&gt;Git&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s start by cloning the repository and installing its dependencies:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git clone https://github.com/aws-samples/containers-blog-maelstrom&lt;br&gt;
cd containers-blog-maelstrom/batch-processing-with-k8s/&lt;br&gt;
yarn install&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Bootstrap AWS Region&lt;/strong&gt;&lt;br&gt;
The first step to any AWS CDK deployment is bootstrapping the environment. cdk bootstrap is a tool in the AWS CDK command-line interface (CLI) responsible for preparing the environment (i.e., a combination of AWS account and Region) with resources required by AWS CDK to perform deployments into that environment. If you already use AWS CDK in a Region, then you don’t need to repeat the bootstrapping process.&lt;/p&gt;

&lt;p&gt;Execute the following commands to bootstrap the AWS environment:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cdk bootstrap aws://$ACCOUNT_ID/$AWS_REGION&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Deploy the stack&lt;/strong&gt;&lt;br&gt;
The AWS CDK code creates one stack with the name file-batch-stack, which creates the following AWS resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Amazon VPC and all related networking components (e.g., subnets)&lt;/li&gt;
&lt;li&gt;An Amazon EKS cluster&lt;/li&gt;
&lt;li&gt;An AWS Step Functions state machine with the states that orchestrate the event-driven batch workload&lt;/li&gt;
&lt;li&gt;An Amazon S3 bucket to store the input file and merged output file&lt;/li&gt;
&lt;li&gt;An Amazon EventBridge rule to trigger the AWS Step Functions state machine on write events to the Amazon S3 bucket&lt;/li&gt;
&lt;li&gt;An Amazon ElastiCache (Redis) cluster to store split file details&lt;/li&gt;
&lt;li&gt;An AWS Lambda function to create an array of split files&lt;/li&gt;
&lt;li&gt;An Amazon EFS file store for temporary split files&lt;/li&gt;
&lt;li&gt;An Amazon DynamoDB Orders table to store the output details of processed orders&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run cdk list to see the stack to be created.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ cdk list&lt;br&gt;
file-batch-stack&lt;/code&gt;&lt;br&gt;
Run the following command to start the deployment:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cdk deploy --require-approval never&lt;/code&gt;&lt;br&gt;
Please allow a few minutes for the deployment to complete. Once the deployment is successful, you will see the following output:&lt;/p&gt;

&lt;p&gt;✅ file-batch-stack&lt;br&gt;
Outputs:&lt;br&gt;
file-batch-stack.KubernetesFileBatchConstructInputBucketName610D8598 = file-batch-stack-kubernetesfilebatchconstructinpu-&lt;br&gt;
file-batch-stack.KubernetesFileBatchConstructMultithreadedstepfuctionF3358A99 = KubernetesFileBatchConstructfilebatchmultithreaded0B80AF5A-&lt;br&gt;
file-batch-stack.KubernetesFileBatchConstructfilebatchEFSFileSystemId9139F216 = fs-&lt;br&gt;
file-batch-stack.KubernetesFileBatchConstructfilebatcheksclusterClusterName146E1BCB = KubernetesFileBatchConstructfilebatchekscluster6B334C7D-&lt;br&gt;
file-batch-stack.KubernetesFileBatchConstructfilebatcheksclusterConfigCommand3063A155 = aws eks update-kubeconfig --name KubernetesFileBatchConstructfilebatchekscluster6B334C7D- --region us-east-2 --role-arn arn:aws:iam:::role/file-batch-stack-KubernetesFileBatchConstructfileb-&lt;br&gt;
file-batch-stack.KubernetesFileBatchConstructfilebatcheksclusterGetTokenCommandAD6928E0 = aws eks get-token --cluster-name KubernetesFileBatchConstructfilebatchekscluster6B334C7D- --region us-east-2 --role-arn arn:aws:iam:::role/file-batch-stack-KubernetesFileBatchConstructfileb-&lt;br&gt;
file-batch-stack.KubernetesFileBatchConstructfilebatcheksclusterMastersRoleArn52BC348E = arn:aws:iam:::role/file-batch-stack-KubernetesFileBatchConstructfileb-&lt;/p&gt;

&lt;p&gt;Stack ARN:&lt;br&gt;
arn:aws:cloudformation:us-east-2::stack/file-batch-stack/&lt;br&gt;
&lt;strong&gt;Start the workflow&lt;/strong&gt;&lt;br&gt;
To verify that the deployed solution works, upload the sample file test.csv under the payload folder to the input bucket. Run the following command from the root directory:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;S3_BUCKET_NAME=$(aws cloudformation describe-stacks \&lt;br&gt;
  --stack-name "file-batch-stack" \&lt;br&gt;
  --region $AWS_REGION \&lt;br&gt;
  --query 'Stacks[0].Outputs[?starts_with(OutputKey, `KubernetesFileBatchConstructInputBucketName`)].OutputValue' \&lt;br&gt;
  --output text)&lt;br&gt;
echo $S3_BUCKET_NAME&lt;br&gt;
aws s3api put-object \&lt;br&gt;
  --bucket $S3_BUCKET_NAME \&lt;br&gt;
  --key test.csv \&lt;br&gt;
  --body payload/test.csv&lt;/code&gt;&lt;br&gt;
The following image shows a line from the input file:&lt;/p&gt;

&lt;p&gt;Screenshot showing the format of data in the input file.&lt;/p&gt;

&lt;p&gt;When a new file is uploaded to the Amazon S3 bucket, the AWS Step Functions state machine is triggered by an Amazon EventBridge rule. Navigate to the AWS Management Console and select the state machine created by the AWS CDK (the CDK output includes its name).&lt;/p&gt;

&lt;p&gt;The AWS Step Functions execution proceeds as described in the Solution overview section, as shown in the following diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wq_voZ7m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5f22o51pg1o8j9ullh4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wq_voZ7m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5f22o51pg1o8j9ullh4.jpg" alt="Image description" width="433" height="661"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the workflow completes successfully, you can download the response file from the Amazon S3 bucket.&lt;/p&gt;

&lt;p&gt;Run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3 cp s3://$S3_BUCKET_NAME/test.csv_Output .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aQ6D11CM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zm9nr2eowan47b2g1ugr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aQ6D11CM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zm9nr2eowan47b2g1ugr.jpg" alt="Image description" width="700" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cleanup&lt;/strong&gt;&lt;br&gt;
You continue to incur costs until you delete the infrastructure created for this post. Run the following commands to clean up the resources:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3 rb s3://$S3_BUCKET_NAME --force&lt;br&gt;
aws dynamodb delete-table --table-name Order&lt;br&gt;
cdk destroy&lt;/code&gt;&lt;br&gt;
AWS CDK asks you:&lt;/p&gt;

&lt;p&gt;Are you sure you want to delete: file-batch-stack (y/n)?&lt;/p&gt;

&lt;p&gt;Enter y to delete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This post showed how to run event-driven workflows at scale using AWS Step Functions on Amazon EKS and AWS Lambda. We provided you with AWS CDK code to create the cloud infrastructure, Kubernetes resources, and the application within the same codebase. Whenever you upload a file to the Amazon S3 bucket, the event triggers a Kubernetes job.&lt;/p&gt;

&lt;p&gt;You can follow the details in this post to build your own serverless event-driven workflows that run jobs in Amazon EKS clusters.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>stepfunction</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Containers in the Cloud</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:49:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/containers-in-the-cloud-g8o</link>
      <guid>https://dev.to/aws-builders/containers-in-the-cloud-g8o</guid>
      <description>&lt;h4&gt;
  
  
  ECS - Elastic Container service
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;ECS is a container orchestration service&lt;/li&gt;
&lt;li&gt;ECS helps to run Docker containers on EC2 machines&lt;/li&gt;
&lt;li&gt;ECS is made of:

&lt;ul&gt;
&lt;li&gt;ECS EC2: running ECS tasks on user-provisioned EC2 instances&lt;/li&gt;
&lt;li&gt;Fargate: running ECS tasks on AWS-provisioned compute instances (serverless)&lt;/li&gt;
&lt;li&gt;EKS: running containers on AWS-managed Kubernetes&lt;/li&gt;
&lt;li&gt;ECR: a Docker container registry hosted on AWS&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;ECS and Docker are very popular for microservices&lt;/li&gt;
&lt;li&gt;IAM security and roles are at the task level&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Concepts
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;ECS cluster: set of EC2 instances&lt;/li&gt;
&lt;li&gt;ECS service: defines how application tasks run on the ECS cluster&lt;/li&gt;
&lt;li&gt;ECS tasks + definition: containers running to create the application&lt;/li&gt;
&lt;li&gt;ECS IAM roles: roles assigned to ECS tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ECS - ALB integration
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Application Load Balancer has a direct integration feature with ECS called dynamic port mapping&lt;/li&gt;
&lt;li&gt;This allows us to run multiple instances of the same application on the same EC2 machine&lt;/li&gt;
&lt;li&gt;Use cases:

&lt;ul&gt;
&lt;li&gt;Increase resiliency even if the application is running on one EC2&lt;/li&gt;
&lt;li&gt;Maximize utilization of CPU cores&lt;/li&gt;
&lt;li&gt;Ability to perform rolling updates without impacting application uptime&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ECS Setup and Config file
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Run an EC2 instance and install the ECS agent with an ECS config file, or use an ECS-ready Linux AMI (the config file still needs to be modified)&lt;/li&gt;
&lt;li&gt;ECS Config file is at &lt;code&gt;/etc/ecs/ecs.config&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Config settings:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ECS_CLUSTER&lt;/code&gt;: the cluster the EC2 instance registers with&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ECS_ENGINE_AUTH_DATA&lt;/code&gt;: authenticate to private registries&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ECS_AVAILABLE_LOGGING_DRIVERS&lt;/code&gt;: used for enabling CloudWatch logging&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ECS_ENABLE_TASK_IAM_ROLE&lt;/code&gt;: enable IAM roles for ECS tasks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
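A minimal <code>/etc/ecs/ecs.config</code> combining the settings above might look like the following sketch; the cluster name and logging-driver list are illustrative placeholders, not values from a real setup:

```shell
# Illustrative /etc/ecs/ecs.config — plain KEY=VALUE lines read by the ECS agent.
# "demo-cluster" and the driver list are placeholder examples.
ECS_CLUSTER=demo-cluster
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]
```

After editing this file, the ECS agent must be restarted on the instance for the new settings to take effect.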

&lt;h4&gt;
  
  
  ECS - IAM Task Roles
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The EC2 instance running the containers should have an IAM role allowing it to access the ECS service for the ECS agent&lt;/li&gt;
&lt;li&gt;Each task inherits EC2 permissions&lt;/li&gt;
&lt;li&gt;ECS IAM task role: role dedicated to each task separately&lt;/li&gt;
&lt;li&gt;To define a task role, we can use the &lt;code&gt;taskRoleArn&lt;/code&gt; parameter in the task definition&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Fargate
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;When launching an ECS cluster, we have to create our own EC2 instances, which means we are managing the underlying infrastructure&lt;/li&gt;
&lt;li&gt;With Fargate this is eliminated, since the service is serverless&lt;/li&gt;
&lt;li&gt;We have to provide task definitions and AWS will run the container for us&lt;/li&gt;
&lt;li&gt;To scale we just have to increase the task number&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ECR - Elastic Container Registry
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Store, manage, and deploy containers in AWS&lt;/li&gt;
&lt;li&gt;Fully integrated with IAM and ECS&lt;/li&gt;
&lt;li&gt;Data is sent over HTTPS and encrypted at rest&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Amazon EKS
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EKS = Elastic Kubernetes Service&lt;/li&gt;
&lt;li&gt;It is a way to launch managed Kubernetes clusters on AWS&lt;/li&gt;
&lt;li&gt;Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications&lt;/li&gt;
&lt;li&gt;It is an alternative to ECS, with a different API&lt;/li&gt;
&lt;li&gt;EKS supports EC2 if we want to deploy worker nodes or Fargate to deploy serverless containers&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>containerapps</category>
      <category>eks</category>
      <category>fargate</category>
    </item>
    <item>
      <title>AWS RDS - Relational Database Service</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:37:15 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-rds-relational-database-service-58le</link>
      <guid>https://dev.to/aws-builders/aws-rds-relational-database-service-58le</guid>
      <description>&lt;h4&gt;
  
  
  AWS RDS - Relational Database Service
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It is a managed database service for relational databases&lt;/li&gt;
&lt;li&gt;It allows us to create databases in the cloud that are managed by AWS&lt;/li&gt;
&lt;li&gt;RDS offerings provided by AWS:

&lt;ul&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;li&gt;MariaDB&lt;/li&gt;
&lt;li&gt;Oracle&lt;/li&gt;
&lt;li&gt;Microsoft SQL Server&lt;/li&gt;
&lt;li&gt;Aurora&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Advantages of AWS RDS over deploying a relational database on EC2:

&lt;ul&gt;
&lt;li&gt;RDS is a managed service, meaning:

&lt;ul&gt;
&lt;li&gt;Automated provisioning, OS patching&lt;/li&gt;
&lt;li&gt;Continuous backups and restore to specific timestamp (Point in Time Restore)&lt;/li&gt;
&lt;li&gt;Monitoring dashboards&lt;/li&gt;
&lt;li&gt;Read replicas&lt;/li&gt;
&lt;li&gt;Multi AZ setup&lt;/li&gt;
&lt;li&gt;Maintenance windows for upgrades&lt;/li&gt;
&lt;li&gt;Scaling capability (vertical and horizontal)&lt;/li&gt;
&lt;li&gt;Storage backed by EBS (GP2 or IO)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Disadvantages:

&lt;ul&gt;
&lt;li&gt;No SSH into the instance which hosts the database&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  RDS Backups
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Backups are automatically enabled in RDS&lt;/li&gt;
&lt;li&gt;AWS RDS provides automated backups:

&lt;ul&gt;
&lt;li&gt;Daily full backup of the database (during the maintenance window)&lt;/li&gt;
&lt;li&gt;Transaction logs are backed-up by RDS every 5 minutes which provides the ability to do point in time restores&lt;/li&gt;
&lt;li&gt;There is a 7 day retention for the backups which can be increased to 35 days&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;DB Snapshots:

&lt;ul&gt;
&lt;li&gt;These are backups manually triggered by the user&lt;/li&gt;
&lt;li&gt;Retention can be as long as the user wants&lt;/li&gt;
&lt;li&gt;Helpful for retaining the state of the database for a longer period of time&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  RDS Read Replicas
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Read replicas help to scale read operations&lt;/li&gt;
&lt;li&gt;We can create up to 5 read replicas&lt;/li&gt;
&lt;li&gt;These replicas can be within an AZ, cross-AZ, or in different regions&lt;/li&gt;
&lt;li&gt;The data between the main database and the read replicas is replicated &lt;strong&gt;asynchronously&lt;/strong&gt; =&amp;gt; reads are eventually consistent&lt;/li&gt;
&lt;li&gt;Read replicas can be promoted into their own database&lt;/li&gt;
&lt;li&gt;Use case for read replicas:

&lt;ul&gt;
&lt;li&gt;Production database is up and running taking on normal load&lt;/li&gt;
&lt;li&gt;There is a new feature for running analytics reporting, which may cause slowdowns and overload the database&lt;/li&gt;
&lt;li&gt;To fix this we can create read replicas for reporting&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Read replicas are used for SELECT operations (not INSERT, UPDATE, DELETE)&lt;/li&gt;
&lt;li&gt;Network cost for read replicas:

&lt;ul&gt;
&lt;li&gt;In AWS there is network cost if data goes from one AZ to another&lt;/li&gt;
&lt;li&gt;In case of cross AZ replication, additional costs may incur because of network traffic&lt;/li&gt;
&lt;li&gt;To reduce costs, we could have the read replicas in the same AZ&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  RDS Multi AZ (Disaster Recovery)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;RDS Multi AZ replication is done using &lt;strong&gt;synchronous&lt;/strong&gt; replication&lt;/li&gt;
&lt;li&gt;With a Multi AZ configuration we get one DNS name&lt;/li&gt;
&lt;li&gt;If the main database goes down, traffic is automatically re-routed to the failover database&lt;/li&gt;
&lt;li&gt;Multi AZ is not used for scaling&lt;/li&gt;
&lt;li&gt;The read replicas can be set up as Multi AZ for Disaster Recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  RDS Security
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Encryption
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;AWS RDS provides at-rest encryption: the possibility to encrypt the master and read replicas with AWS KMS - AES-256 encryption

&lt;ul&gt;
&lt;li&gt;Encryption has to be defined at the launch time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;If the master is not encrypted, the read replicas cannot be encrypted&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Transparent Data Encryption (TDE) is available for Oracle and SQL Server&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;In-flight encryption: uses SSL certificates to encrypt data from client to RDS in flight

&lt;ul&gt;
&lt;li&gt;An SSL trust certificate is required when connecting to the database&lt;/li&gt;
&lt;li&gt;To enforce SSL:

&lt;ul&gt;
&lt;li&gt;PostgreSQL: rds.force_ssl=1 in the AWS RDS Console (Parameter Groups)&lt;/li&gt;
&lt;li&gt;MySQL: &lt;code&gt;GRANT USAGE ON *.* To 'user'@'%' REQUIRE SSL;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Encrypting RDS backups:

&lt;ul&gt;
&lt;li&gt;Snapshots of un-encrypted RDS databases are un-encrypted&lt;/li&gt;
&lt;li&gt;Snapshots of encrypted RDS databases are encrypted&lt;/li&gt;
&lt;li&gt;We can copy an un-encrypted snapshot into an encrypted one&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Encrypt an un-encrypted RDS database:

&lt;ul&gt;
&lt;li&gt;Create a snapshot&lt;/li&gt;
&lt;li&gt;Copy the snapshot and enable encryption for the snapshot&lt;/li&gt;
&lt;li&gt;Restore the database from the encrypted snapshot&lt;/li&gt;
&lt;li&gt;Migrate application from the old database to the new one and delete the old database&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Network Security and IAM
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Network security:

&lt;ul&gt;
&lt;li&gt;RDS databases are usually deployed within a private subnet&lt;/li&gt;
&lt;li&gt;RDS security works by leveraging security groups (similar to EC2), they control who can communicate with the database instance&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Access management:

&lt;ul&gt;
&lt;li&gt;There are IAM policies which help control who can manage an AWS RDS database (through the RDS API)&lt;/li&gt;
&lt;li&gt;A traditional username/password can be used to log in to the database&lt;/li&gt;
&lt;li&gt;IAM-based authentication can be used to log in to MySQL and PostgreSQL&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;IAM authentication:

&lt;ul&gt;
&lt;li&gt;IAM database authentication works with MySQL and PostgreSQL&lt;/li&gt;
&lt;li&gt;We don't need a password to authenticate, just an authentication token obtained through IAM and RDS API calls&lt;/li&gt;
&lt;li&gt;The token has a lifetime of 15 minutes&lt;/li&gt;
&lt;li&gt;Benefits:

&lt;ul&gt;
&lt;li&gt;Network in/out must be encrypted using SSL&lt;/li&gt;
&lt;li&gt;IAM is used to centrally manage users instead of DB credentials&lt;/li&gt;
&lt;li&gt;We can manage IAM roles and EC2 instance profiles for easy integration&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Security Summary
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Encryption at rest:

&lt;ul&gt;
&lt;li&gt;It is done only when the database is created&lt;/li&gt;
&lt;li&gt;To encrypt an existing database, we have to create a snapshot, copy it as encrypted, and create an encrypted database from the snapshot&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Our responsibility:

&lt;ul&gt;
&lt;li&gt;Check the ports/IP/security groups inbound rules&lt;/li&gt;
&lt;li&gt;Take care of database user creation and permissions or manage them through IAM&lt;/li&gt;
&lt;li&gt;Create a database with or without public access&lt;/li&gt;
&lt;li&gt;Ensure parameter groups or DB is configured to only allow SSL connections&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;AWS responsibility:

&lt;ul&gt;
&lt;li&gt;DB patching&lt;/li&gt;
&lt;li&gt;Underlying OS patching and updates&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>rds</category>
      <category>database</category>
    </item>
    <item>
      <title>EFS - Elastic File System</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:36:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/efs-elastic-file-system-1j73</link>
      <guid>https://dev.to/aws-builders/efs-elastic-file-system-1j73</guid>
      <description>&lt;h4&gt;
  
  
  EFS - Elastic File System
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EFS is a managed NFS (network file system) that can be mounted on many EC2 instances&lt;/li&gt;
&lt;li&gt;EFS works with EC2 instances across multi AZs&lt;/li&gt;
&lt;li&gt;EFS is highly available, scalable, but also more expensive (3x GP2) than EBS&lt;/li&gt;
&lt;li&gt;EFS is pay per use&lt;/li&gt;
&lt;li&gt;Use cases: content management, web serving, data sharing, WordPress&lt;/li&gt;
&lt;li&gt;Uses NFSv4.1 protocol&lt;/li&gt;
&lt;li&gt;We can use security groups to control access to EFS volumes&lt;/li&gt;
&lt;li&gt;EFS is only compatible with Linux based AMIs (not Windows)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EFS Performance and Storage Classes
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EFS Scale

&lt;ul&gt;
&lt;li&gt;Thousands of concurrent NFS clients, 10+ GB/s throughput&lt;/li&gt;
&lt;li&gt;It can grow to petabyte scale NFS automatically&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Performance mode (can be set at EFS creation time)

&lt;ul&gt;
&lt;li&gt;General purpose (default): recommended for latency-sensitive use cases: web server, CMS, etc.&lt;/li&gt;
&lt;li&gt;Max I/O: higher latency and throughput, highly parallel; recommended for big data and media processing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Storage tiers: lifecycle management feature

&lt;ul&gt;
&lt;li&gt;Standard: for frequently accessed files&lt;/li&gt;
&lt;li&gt;Infrequent access (EFS-IA): lower storage price, but there is a cost to retrieve files&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EBS vs EFS
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EBS volumes&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Can be attached to only one instance at a time&lt;/li&gt;
&lt;li&gt;Are locked at the AZ level&lt;/li&gt;
&lt;li&gt;GP2: IO increases if the disk size increases&lt;/li&gt;
&lt;li&gt;IO1: can increase IO independently&lt;/li&gt;
&lt;li&gt;To migrate an EBS across AZ:

&lt;ul&gt;
&lt;li&gt;Take a snapshot&lt;/li&gt;
&lt;li&gt;Restore the volume from the snapshot&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Root EBS volumes get deleted by default when the EC2 instance is terminated (this behavior can be disabled)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EFS&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Can be mounted to multiple instances across AZs via EFS mount targets&lt;/li&gt;
&lt;li&gt;Available only for Linux instances&lt;/li&gt;
&lt;li&gt;EFS has a higher price point than EBS&lt;/li&gt;
&lt;li&gt;EFS is pay per second, we can leverage EFS-IA for cost saving&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>efs</category>
      <category>aws</category>
    </item>
    <item>
      <title>EBS - Elastic Block Storage</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:35:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/ebs-elastic-block-storage-20fa</link>
      <guid>https://dev.to/aws-builders/ebs-elastic-block-storage-20fa</guid>
      <description>&lt;h4&gt;
  
  
  EBS - Elastic Block Storage
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;An EC2 instance loses its root volume when it is manually terminated&lt;/li&gt;
&lt;li&gt;Unexpected terminations may happen&lt;/li&gt;
&lt;li&gt;Sometimes we need a way to store instance data somewhere&lt;/li&gt;
&lt;li&gt;An EBS (Elastic Block Store) Volume is a network drive which can be attached to an EC2 instance&lt;/li&gt;
&lt;li&gt;It allows the instances to persist data&lt;/li&gt;
&lt;li&gt;EBS is a network drive:

&lt;ul&gt;
&lt;li&gt;It uses the network to communicate with the instance, which can introduce latency&lt;/li&gt;
&lt;li&gt;It can be detached from an EC2 instance and attached to another&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;EBS volumes are locked to an AZ&lt;/li&gt;
&lt;li&gt;To move a volume across AZs, we first need to create a snapshot&lt;/li&gt;
&lt;li&gt;EBS volumes have a provisioned capacity (size in GB and IOPS)&lt;/li&gt;
&lt;li&gt;Billing is done for all provisioned capacity even if the capacity is not fully used&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EBS Volume Types
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EBS Volumes can have 4 types:

&lt;ul&gt;
&lt;li&gt;GP2 (SSD): general purpose SSD volume that balances price and performance&lt;/li&gt;
&lt;li&gt;IO1 (SSD): highest performance SSD volume for mission-critical low-latency or high-throughput workloads&lt;/li&gt;
&lt;li&gt;ST1 (HDD): low cost HDD volume designed for frequent access, throughput-intensive workloads&lt;/li&gt;
&lt;li&gt;SC1 (HDD): for less frequently accessed data&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;EBS Volumes are characterized in Size | Throughput | IOPS (I/O Operations per second)&lt;/li&gt;
&lt;li&gt;Only GP2 and IO1 can be used as boot volumes&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EBS Volume Types - Deep Dive
&lt;/h4&gt;

&lt;h5&gt;
  
  
  GP2
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Recommended for most workloads&lt;/li&gt;
&lt;li&gt;Can be system boot volume&lt;/li&gt;
&lt;li&gt;Can be used for virtual desktops, low-latency applications, development and test environments&lt;/li&gt;
&lt;li&gt;Size can range from 1GiB to 16TiB&lt;/li&gt;
&lt;li&gt;Small GP2 volumes can burst IOPS to 3000&lt;/li&gt;
&lt;li&gt;Max IOPS is 16000&lt;/li&gt;
&lt;li&gt;We get 3 IOPS per GiB, which means at 5,334 GiB we reach the max IOPS&lt;/li&gt;
&lt;/ul&gt;
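The GP2 scaling rule above (3 IOPS per GiB, up to the 16,000 IOPS ceiling) can be sketched as a small function. The 100 IOPS floor below is an assumption based on AWS's gp2 documentation, not something stated in these notes:

```shell
# gp2 baseline IOPS = min(max(3 * size_gib, 100), 16000)
# (the 100 IOPS floor is an assumption from AWS docs, not from the notes)
gp2_iops() {
  local size_gib=$1
  local iops=$(( size_gib * 3 ))
  if (( iops < 100 )); then iops=100; fi
  if (( iops > 16000 )); then iops=16000; fi
  echo "$iops"
}
gp2_iops 1000   # -> 3000
gp2_iops 5334   # -> 16000, the ~5,334 GiB point where IOPS tops out
```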

&lt;h5&gt;
  
  
  IO1
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Recommended for business critical applications which require sustained IOPS performance, or more than 16000 IOPS per volume&lt;/li&gt;
&lt;li&gt;Recommended for large database workloads&lt;/li&gt;
&lt;li&gt;Size can be between 4 GiB and 16 TiB&lt;/li&gt;
&lt;li&gt;The maximum ratio of provisioned IOPS per requested volume size is 50:1&lt;/li&gt;
&lt;li&gt;Max IOPS for IO1/IO2 volumes is 64,000 for instances built on the Nitro System and 32,000 for other instance types&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  ST1
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Recommended for streaming workloads&lt;/li&gt;
&lt;li&gt;It has fast throughput at low price&lt;/li&gt;
&lt;li&gt;Can not be a root volume&lt;/li&gt;
&lt;li&gt;Size can be between 500 GiB and 16 TiB&lt;/li&gt;
&lt;li&gt;Max IOPS is 500&lt;/li&gt;
&lt;li&gt;Max throughput 500 MiB/Sec&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  SC1
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Throughput oriented storage for large volumes of data which is infrequently accessed&lt;/li&gt;
&lt;li&gt;Can not be a boot volume&lt;/li&gt;
&lt;li&gt;Max IOPS is 250, max throughput 250MiB/sec&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Limits
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;SSD, General Purpose (gp2):

&lt;ul&gt;
&lt;li&gt;Volume size 1 GiB – 16 TiB&lt;/li&gt;
&lt;li&gt;Max IOPS/volume 16,000&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SSD, Provisioned IOPS (io1):

&lt;ul&gt;
&lt;li&gt;Volume size 4 GiB – 16 TiB&lt;/li&gt;
&lt;li&gt;Max IOPS/volume 64,000&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;HDD, Throughput Optimized (st1):

&lt;ul&gt;
&lt;li&gt;Volume size 500 GiB – 16 TiB&lt;/li&gt;
&lt;li&gt;Throughput is measured in MB/s and can burst up to 250 MB/s per TB, with a baseline of 40 MB/s per TB and a maximum of 500 MB/s per volume&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;HDD, Cold (sc1):

&lt;ul&gt;
&lt;li&gt;Volume size 500 GiB – 16 TiB&lt;/li&gt;
&lt;li&gt;Lowest cost storage; cannot be a boot volume&lt;/li&gt;
&lt;li&gt;Can burst up to 80 MB/s per TB, with a baseline of 12 MB/s per TB and a maximum of 250 MB/s per volume&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;HDD, Magnetic (standard): cheap, infrequently accessed storage; the lowest cost storage that can be a boot volume&lt;/li&gt;
&lt;/ul&gt;
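The HDD throughput model above (a per-TB rate capped at a per-volume maximum) can be sketched as one function; the example volume sizes are arbitrary, while the rates and caps come from the limits listed above:

```shell
# throughput (MB/s) = min(per_tb_rate * size_tb, per_volume_cap)
hdd_throughput() {  # args: <per_tb_rate_mbs> <per_volume_cap_mbs> <size_tb>
  local rate=$1 cap=$2 tb=$3
  local t=$(( rate * tb ))
  if (( t > cap )); then t=$cap; fi
  echo "$t"
}
hdd_throughput 40 500 4     # st1 baseline, 4 TB volume -> 160 MB/s
hdd_throughput 250 500 4    # st1 burst, 4 TB volume -> 500 MB/s (capped)
hdd_throughput 12 250 10    # sc1 baseline, 10 TB volume -> 120 MB/s
```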

&lt;h4&gt;
  
  
  EBS Snapshots
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Snapshots are incremental - only the changed blocks are backed up&lt;/li&gt;
&lt;li&gt;EBS backups use IO and we should not run them while the application is handling a lot of traffic&lt;/li&gt;
&lt;li&gt;Snapshots are stored in S3 (we are not able to see them)&lt;/li&gt;
&lt;li&gt;It is not necessary to detach the volume to do a snapshot, but it is recommended&lt;/li&gt;
&lt;li&gt;An account can have up to 100k snapshots&lt;/li&gt;
&lt;li&gt;We can make an image (AMI) out of a snapshot, snapshots can be copied across AZs&lt;/li&gt;
&lt;li&gt;EBS volumes restored from snapshots need to be pre-warmed (using &lt;code&gt;fio&lt;/code&gt; or &lt;code&gt;dd&lt;/code&gt; commands to read the entire volume)&lt;/li&gt;
&lt;li&gt;Snapshots can be automated using &lt;strong&gt;Amazon Data Lifecycle Manager&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EBS Migrations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EBS Volumes are locked to a specific AZ&lt;/li&gt;
&lt;li&gt;To migrate it to a different AZ (or region) we have to do the following:

&lt;ul&gt;
&lt;li&gt;Create a snapshot from the volume&lt;/li&gt;
&lt;li&gt;(optional) Copy the volume to a different region&lt;/li&gt;
&lt;li&gt;Create a volume from the snapshot in the AZ of choice&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EBS Encryption
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;When we create an encrypted EBS volume, we get the following:

&lt;ul&gt;
&lt;li&gt;Data at rest is encrypted inside the volume&lt;/li&gt;
&lt;li&gt;All the data in flight moving between the instance and the volume is encrypted&lt;/li&gt;
&lt;li&gt;All snapshots are encrypted&lt;/li&gt;
&lt;li&gt;All volumes created from the snapshots will be encrypted&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Encryption and decryption are handled transparently by EBS system&lt;/li&gt;
&lt;li&gt;Encryption may have a minimal impact on latency&lt;/li&gt;
&lt;li&gt;EBS Encryption leverages keys from KMS (encryption algorithm is AES-256)&lt;/li&gt;
&lt;li&gt;Copying an unencrypted snapshot allows encryption&lt;/li&gt;
&lt;li&gt;Encrypt an unencrypted EBS volume:

&lt;ol&gt;
&lt;li&gt;Create an EBS snapshot from the volume&lt;/li&gt;
&lt;li&gt;Copy the snapshot and enable encryption during the copy process&lt;/li&gt;
&lt;li&gt;Create a new EBS volume from the snapshot (the volume will be encrypted)&lt;/li&gt;
&lt;li&gt;Attach the encrypted volume to an instance&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EBS vs Instance Store
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Some instances do not come with a root EBS volume&lt;/li&gt;
&lt;li&gt;Instead, they come with an &lt;strong&gt;instance store&lt;/strong&gt; (ephemeral storage)&lt;/li&gt;
&lt;li&gt;An instance store is physically attached to the machine (EBS is a network drive)&lt;/li&gt;
&lt;li&gt;Pros of instance stores:

&lt;ul&gt;
&lt;li&gt;Better I/O performance&lt;/li&gt;
&lt;li&gt;Good for buffer, cache, scratch data, temporary content&lt;/li&gt;
&lt;li&gt;Data survives a reboot&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Cons of instance stores:

&lt;ul&gt;
&lt;li&gt;On stop or termination of the instance, the data from the instance store is lost&lt;/li&gt;
&lt;li&gt;An instance store can not be resized&lt;/li&gt;
&lt;li&gt;Backups of an instance store must be done manually by the user&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;An instance store is:

&lt;ul&gt;
&lt;li&gt;A physical disk from the physical server where the EC2 instance runs&lt;/li&gt;
&lt;li&gt;A very high IOPS disk&lt;/li&gt;
&lt;li&gt;A disk of up to 7.5 TiB, striped to reach 30 TiB&lt;/li&gt;
&lt;li&gt;A block storage (just like EBS)&lt;/li&gt;
&lt;li&gt;Can not be increased in size&lt;/li&gt;
&lt;li&gt;An ephemeral storage (risk of data loss if hardware fails)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EBS RAID Options
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EBS is already redundant storage (replicated within an AZ)&lt;/li&gt;
&lt;li&gt;If we want to increase IOPS or if we want to mirror an EBS volume, we can mount EBS volumes in parallel RAID settings&lt;/li&gt;
&lt;li&gt;RAID is possible as long as the OS supports it&lt;/li&gt;
&lt;li&gt;Some RAID options are:

&lt;ul&gt;
&lt;li&gt;RAID 0&lt;/li&gt;
&lt;li&gt;RAID 1&lt;/li&gt;
&lt;li&gt;RAID 5, RAID 6 are not recommended for EBS&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAID 0&lt;/strong&gt;: used for increased performance. We can combine two or more volumes and get the sum of their disk space and I/O

&lt;ul&gt;
&lt;li&gt;If one of the disks fails, all the logical data is lost&lt;/li&gt;
&lt;li&gt;Use cases:

&lt;ul&gt;
&lt;li&gt;Applications with a lot of IOPS but without the need for fault-tolerance&lt;/li&gt;
&lt;li&gt;A database with builtin replication&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAID 1&lt;/strong&gt;: used for increased fault-tolerance. Mirrors a volume onto another.

&lt;ul&gt;
&lt;li&gt;If one of the disks fails, the logical volume will still work&lt;/li&gt;
&lt;li&gt;We have to send the data to two EBS volumes at the same time&lt;/li&gt;
&lt;li&gt;Use cases:

&lt;ul&gt;
&lt;li&gt;Applications that need increased fault-tolerance&lt;/li&gt;
&lt;li&gt;Applications that need to keep running while a disk is being serviced&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
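The capacity and I/O arithmetic behind the two recommended RAID levels can be sketched in a few lines. This is a toy illustration, not anything EBS-specific; the volume sizes and IOPS figures are hypothetical:

```python
# Illustration of RAID 0 vs RAID 1 arithmetic for EBS volumes.
# Sizes in GiB, IOPS per volume; all numbers are hypothetical.

def raid0(volumes):
    """RAID 0 (striping): capacity and IOPS are the sum over all volumes."""
    return {
        "capacity": sum(size for size, iops in volumes),
        "iops": sum(iops for size, iops in volumes),
        "fault_tolerant": False,  # one disk failure loses the whole array
    }

def raid1(volumes):
    """RAID 1 (mirroring): usable capacity is that of the smallest volume;
    every write is sent to all mirrors."""
    return {
        "capacity": min(size for size, iops in volumes),
        "write_amplification": len(volumes),  # each write hits every mirror
        "fault_tolerant": True,  # survives the loss of all but one disk
    }

# Two hypothetical 500 GiB volumes provisioned at 3000 IOPS each:
volumes = [(500, 3000), (500, 3000)]
print(raid0(volumes))  # 1000 GiB, 6000 IOPS, no fault tolerance
print(raid1(volumes))  # 500 GiB usable, writes duplicated, fault tolerant
```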

</description>
      <category>aws</category>
      <category>ebs</category>
    </item>
    <item>
      <title>Auto Scaling Groups</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:33:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/auto-scaling-groups-4690</link>
      <guid>https://dev.to/aws-builders/auto-scaling-groups-4690</guid>
      <description>&lt;h4&gt;
  
  
  Auto Scaling Groups
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Applications may encounter different amounts of load depending on their usage span&lt;/li&gt;
&lt;li&gt;In the cloud we can create and get rid of resources (servers) quickly&lt;/li&gt;
&lt;li&gt;The goal of an Auto Scaling Group (ASG) is to:

&lt;ul&gt;
&lt;li&gt;Scale out (add more EC2 instances) to match the increased load&lt;/li&gt;
&lt;li&gt;Scale in (remove EC2 instances) to match a decreased load&lt;/li&gt;
&lt;li&gt;Ensure we have a minimum and a maximum number of machines running&lt;/li&gt;
&lt;li&gt;Automatically register new instances to a load balancer&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
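The scale-out/scale-in behaviour bounded by a minimum and maximum can be sketched as a simple clamp. This is a toy model of the arithmetic, not the actual Auto Scaling implementation:

```python
def scale(desired, delta, min_size, max_size):
    """Apply a scaling delta (positive = scale out, negative = scale in),
    keeping the desired capacity within the [min_size, max_size] bounds."""
    return max(min_size, min(max_size, desired + delta))

# With min=2 and max=10:
print(scale(4, 3, 2, 10))   # scale out by 3 -> 7
print(scale(3, -5, 2, 10))  # scale in by 5, clamped to the minimum -> 2
print(scale(9, 4, 2, 10))   # clamped to the maximum -> 10
```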

&lt;h4&gt;
  
  
  ASG Attributes
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Launch configuration - consists of:

&lt;ul&gt;
&lt;li&gt;AMI + Instance Type&lt;/li&gt;
&lt;li&gt;EC2 User Data&lt;/li&gt;
&lt;li&gt;EBS Volumes&lt;/li&gt;
&lt;li&gt;Security Groups&lt;/li&gt;
&lt;li&gt;SSH Key Pair&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Min size, max size, initial capacity&lt;/li&gt;
&lt;li&gt;Network + subnets information&lt;/li&gt;
&lt;li&gt;Load balancer information&lt;/li&gt;
&lt;li&gt;Scaling policies - what will trigger a scale out/scale in&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Auto Scaling Alarms
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It is possible to scale an ASG based on CloudWatch alarms&lt;/li&gt;
&lt;li&gt;An alarm monitors a metric (such as Average CPU)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Metrics are computed for the overall ASG instances&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Based on alarms we can create:

&lt;ul&gt;
&lt;li&gt;Scale-out policies&lt;/li&gt;
&lt;li&gt;Scale-in policies&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Auto Scaling New Rules
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It is possible to define "better" auto scaling rules managed directly by EC2 Auto Scaling, for example:

&lt;ul&gt;
&lt;li&gt;Target Average CPU Usage&lt;/li&gt;
&lt;li&gt;Number of requests on the ELB per instance&lt;/li&gt;
&lt;li&gt;Average Network In/Out&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;These rules are easier to set up and to reason about than the previous ones&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Auto Scaling Based on Custom Metrics
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We can scale based on a custom metric (ex: number of connected users to the application)&lt;/li&gt;
&lt;li&gt;In order to do this we have to:

&lt;ol&gt;
&lt;li&gt;Send a custom metric request to CloudWatch (PutMetricData API)&lt;/li&gt;
&lt;li&gt;Create a CloudWatch alarm to react to values of the metric&lt;/li&gt;
&lt;li&gt;Use the CloudWatch alarm as a scaling policy for ASG&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
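The three steps above can be sketched end to end. This is a toy in-memory simulation of the flow (real code would publish via CloudWatch's PutMetricData API and let an alarm drive the ASG policy); the metric, threshold and datapoint count are made up:

```python
# Toy simulation of scaling on a custom metric (e.g. connected users).

metric_store = []  # stands in for CloudWatch's metric storage

def put_metric(value):
    """Step 1: publish a custom metric data point."""
    metric_store.append(value)

def alarm_state(threshold, datapoints=3):
    """Step 2: the alarm fires when the last N datapoints breach the threshold."""
    recent = metric_store[-datapoints:]
    return len(recent) == datapoints and all(v > threshold for v in recent)

def scaling_decision(threshold):
    """Step 3: the alarm is used as the trigger for a scale-out policy."""
    return "scale-out" if alarm_state(threshold) else "no-action"

for users in [80, 120, 150, 170]:  # hypothetical connected-user counts
    put_metric(users)
print(scaling_decision(threshold=100))  # last 3 datapoints > 100 -> "scale-out"
```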

&lt;h4&gt;
  
  
  ASG Summary
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Scaling policies can be on CPU, Network, etc. and can even be based on custom metrics or based on a schedule&lt;/li&gt;
&lt;li&gt;ASGs can use launch configurations or launch templates (newer version)

&lt;ul&gt;
&lt;li&gt;Launch configurations allow specifying only one instance type&lt;/li&gt;
&lt;li&gt;Launch templates allow using a spot fleet of instances&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;To update an ASG, we must provide a new launch configuration/template. The underlying EC2 instances will be replaced over time&lt;/li&gt;
&lt;li&gt;IAM roles attached to an ASG will be assigned to the launched EC2 instances&lt;/li&gt;
&lt;li&gt;ASGs are free. We pay for the underlying resources being launched (EC2 instances, attached EBS volumes, etc.)&lt;/li&gt;
&lt;li&gt;Having instances under an ASG means that if they get terminated for any reason, the ASG will automatically create new ones as a replacement&lt;/li&gt;
&lt;li&gt;An ASG can terminate instances marked as unhealthy by a load balancer and replace them&lt;/li&gt;
&lt;li&gt;Health checks - we can have 2 types of health checks:

&lt;ul&gt;
&lt;li&gt;EC2 health checks - the instance is recreated if the EC2 instance fails to respond to health checks&lt;/li&gt;
&lt;li&gt;ELB health checks - instance is recreated if the ELB health checks fail, meaning that the application is down for whatever reason&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ASG Scaling Policies
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Target Tracking Scaling&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;The simplest and easiest to set up&lt;/li&gt;
&lt;li&gt;Example: we want the average ASG CPU to stay around 40%&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple/Step Scaling&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Example: 

&lt;ul&gt;
&lt;li&gt;When a CloudWatch alarm is triggered (example average CPU &amp;gt; 70%), then add 2 units&lt;/li&gt;
&lt;li&gt;When a CloudWatch alarm is triggered (example average CPU &amp;lt; 30%), then remove 1 unit&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduled Actions&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Can be used if we can anticipate scaling based on known usage patterns&lt;/li&gt;
&lt;li&gt;Example: increase the min capacity to 10 at 5 PM on Fridays&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
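The simple/step scaling example above translates directly into a small function; the thresholds and step sizes below are the ones from the example:

```python
def step_scaling(avg_cpu, capacity):
    """Apply the example step-scaling rules:
    average CPU > 70% -> add 2 units; average CPU < 30% -> remove 1 unit."""
    if avg_cpu > 70:
        return capacity + 2
    if avg_cpu < 30:
        return capacity - 1
    return capacity  # between the two thresholds: no change

print(step_scaling(85, 4))  # high CPU: 4 -> 6
print(step_scaling(20, 4))  # low CPU: 4 -> 3
print(step_scaling(50, 4))  # in range: stays 4
```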

&lt;h4&gt;
  
  
  Scaling Cool-downs
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The cool-down period helps to ensure that our ASG doesn't launch or terminate additional instances before the previous scaling activity takes effect&lt;/li&gt;
&lt;li&gt;In addition to the default cool-down for the ASG, we can create cool-downs that apply to a specific &lt;em&gt;simple scaling policy&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;A scaling-specific cool-down overrides the default cool-down period&lt;/li&gt;
&lt;li&gt;A common use case for scaling-specific cool-downs is when a scale-in policy terminates instances based on a criterion or metric. Because this policy terminates instances, the ASG needs less time to determine whether to terminate additional instances&lt;/li&gt;
&lt;li&gt;If the default cool-down period of 300 seconds is too long, we can reduce costs by applying a scaling-specific cool-down of 180 seconds for example&lt;/li&gt;
&lt;li&gt;If our application is scaling up and down multiple times each hour, we can modify the ASG cool-down timers and the CloudWatch alarm period that triggers the scale&lt;/li&gt;
&lt;/ul&gt;
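The suppression behaviour of a cool-down period can be sketched like this (timestamps in seconds; the 300 s default and the 180 s override are the values from the text):

```python
def allow_scaling(now, last_activity, cooldown=300):
    """A new scaling action is allowed only once the cool-down period
    since the previous scaling activity has fully elapsed."""
    return now - last_activity >= cooldown

# Default 300 s cool-down:
print(allow_scaling(now=1200, last_activity=1000))                # False: only 200 s passed
print(allow_scaling(now=1400, last_activity=1000))                # True: 400 s passed
# Scaling-specific 180 s cool-down overriding the default:
print(allow_scaling(now=1200, last_activity=1000, cooldown=180))  # True
```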

&lt;h4&gt;
  
  
  Suspend and Resume Scaling Processes
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We can suspend and then resume one or more of the scaling processes for our ASG. This can be useful when we want to investigate a configuration problem or other issue with our web application and then make changes to our application, without invoking the scaling processes.&lt;/li&gt;
&lt;li&gt;We can manually move an instance from an ASG and put it in the standby state&lt;/li&gt;
&lt;li&gt;Instances in the standby state are still managed by Auto Scaling, are charged as normal, and do not count towards the available EC2 instances for workload/application use. Auto Scaling does not perform health checks on instances in the standby state. The standby state can be used for performing updates/changes/troubleshooting etc. without health checks being performed or replacement instances being launched.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ASG for Solutions Architects
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ASG Default Termination Policy&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;Find the AZ which has the most instances&lt;/li&gt;
&lt;li&gt;If there are multiple instances to choose from, delete the one with the oldest launch configuration&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;Lifecycle Hooks

&lt;ul&gt;
&lt;li&gt;By default as soon as an instance is launched in an ASG, the instance goes in service&lt;/li&gt;
&lt;li&gt;ASGs provide the ability to perform extra steps before the instance goes in service&lt;/li&gt;
&lt;li&gt;Also, we have the ability to perform some actions before the instance is terminated&lt;/li&gt;
&lt;li&gt;Lifecycle hooks diagram: &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html"&gt;https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Launch Templates vs. Launch Configurations

&lt;ul&gt;
&lt;li&gt;Both allow specifying the AMI, the instance type, a key pair, security groups and the other parameters that we use to launch EC2 instances (tags, user-data, etc.)&lt;/li&gt;
&lt;li&gt;Launch Configurations are considered to be legacy:

&lt;ul&gt;
&lt;li&gt;They must be recreated every time a parameter changes&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Launch Templates:

&lt;ul&gt;
&lt;li&gt;They can have multiple versions&lt;/li&gt;
&lt;li&gt;They allow defining parameter subsets (partial configurations) for re-use and inheritance&lt;/li&gt;
&lt;li&gt;We can provision both On-Demand and Spot instances (or a mix of the two)&lt;/li&gt;
&lt;li&gt;We can use the T2 unlimited burst feature&lt;/li&gt;
&lt;li&gt;Recommended by AWS&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
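The default termination policy at the top of this section can be sketched as a small selection function. This is a simplified toy model (the real policy has further tie-breakers), and the instance records are hypothetical:

```python
def choose_instance_to_terminate(instances):
    """Simplified default ASG termination policy:
    1. pick the AZ with the most instances,
    2. within it, pick the instance with the oldest launch configuration."""
    az_counts = {}
    for inst in instances:
        az_counts[inst["az"]] = az_counts.get(inst["az"], 0) + 1
    busiest_az = max(az_counts, key=az_counts.get)
    candidates = [i for i in instances if i["az"] == busiest_az]
    # Lower version number = older launch configuration.
    return min(candidates, key=lambda i: i["launch_config_version"])

instances = [
    {"id": "i-a", "az": "us-east-1a", "launch_config_version": 3},
    {"id": "i-b", "az": "us-east-1a", "launch_config_version": 1},  # oldest in busiest AZ
    {"id": "i-c", "az": "us-east-1b", "launch_config_version": 2},
]
print(choose_instance_to_terminate(instances)["id"])  # i-b
```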

</description>
      <category>aws</category>
      <category>asg</category>
      <category>autoscalinggroups</category>
    </item>
    <item>
      <title>Elastic Load Balancers</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:29:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/elastic-load-balancers-36jg</link>
      <guid>https://dev.to/aws-builders/elastic-load-balancers-36jg</guid>
      <description>&lt;h4&gt;
  
  
  Scalability and High Availability
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Scalability means that a system can handle greater loads by adapting&lt;/li&gt;
&lt;li&gt;We can distinguish two types of scalability strategies:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vertical Scalability&lt;/strong&gt; (scale up/down)

&lt;ul&gt;
&lt;li&gt;Increase the size of the current instance (ex. migrate from a t2.micro instance to a t2.large one)&lt;/li&gt;
&lt;li&gt;Vertical scalability is common for non distributed systems, such as databases&lt;/li&gt;
&lt;li&gt;RDS, ElastiCache are services that can scale vertically&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Horizontal Scalability&lt;/strong&gt; (scale out/in)

&lt;ul&gt;
&lt;li&gt;Increase the number of instances on which the application runs&lt;/li&gt;
&lt;li&gt;Horizontal scaling implies having a distributed system&lt;/li&gt;
&lt;li&gt;It's easy to scale horizontally thanks to cloud offerings such as EC2&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;High Availability means running our application in at least 2 data centers (AZs)

&lt;ul&gt;
&lt;li&gt;The goal of high availability is to survive a data center loss&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;High Availability can be:

&lt;ul&gt;
&lt;li&gt;Passive&lt;/li&gt;
&lt;li&gt;Active&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Load balancers can scale but not instantaneously - contact AWS for a "warm-up"&lt;/li&gt;
&lt;li&gt;Troubleshooting:

&lt;ul&gt;
&lt;li&gt;4xx errors are client induced errors&lt;/li&gt;
&lt;li&gt;5xx errors are application induced errors (server side errors)&lt;/li&gt;
&lt;li&gt;Error 503 means that the load balancer is at capacity or no registered targets can be found&lt;/li&gt;
&lt;li&gt;If the load balancer can't connect to the application, it most likely means that the security group blocks the connection&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Monitoring:

&lt;ul&gt;
&lt;li&gt;ELB access logs will log all the access requests to the LB&lt;/li&gt;
&lt;li&gt;CloudWatch Metrics will give aggregate statistics (example: connections counts)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Load Balancing Basics
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Load balancers are servers that forward internet traffic to multiple other servers (most likely EC2 instances)&lt;/li&gt;
&lt;li&gt;Why use load balancers?

&lt;ul&gt;
&lt;li&gt;Spread load across multiple downstream instances&lt;/li&gt;
&lt;li&gt;Expose a single point of access (DNS) to the application&lt;/li&gt;
&lt;li&gt;Seamlessly handle failures of downstream instances (by using health checks)&lt;/li&gt;
&lt;li&gt;Do regular health checks to registered instances&lt;/li&gt;
&lt;li&gt;Provide SSL termination (HTTPS) for the website hosted on the downstream instances&lt;/li&gt;
&lt;li&gt;Enforce stickiness for cookies&lt;/li&gt;
&lt;li&gt;High availability across availability zones (load balancer can be spread across multiple AZs, not regions!!!)&lt;/li&gt;
&lt;li&gt;Cleanly separate public traffic from private traffic&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;An ELB (Elastic Load Balancer) is a &lt;strong&gt;managed load balancer&lt;/strong&gt; which means:

&lt;ul&gt;
&lt;li&gt;AWS guarantees that it will be working&lt;/li&gt;
&lt;li&gt;AWS takes care of upgrades, maintenance and high availability&lt;/li&gt;
&lt;li&gt;An ELB provides a few configuration options for us also&lt;/li&gt;
&lt;li&gt;It costs less to set up our own custom load balancer, but it will take a lot more effort to maintain in the long run&lt;/li&gt;
&lt;li&gt;An ELB is integrated with many AWS offerings/services, making it more flexible than a custom LB&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Health Checks
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;They enable an LB to know whether an instance to which traffic is forwarded is available to reply to requests&lt;/li&gt;
&lt;li&gt;The health check is done using a port and a route (usually /health)&lt;/li&gt;
&lt;li&gt;If the response is not 200, then the instance is considered unhealthy&lt;/li&gt;
&lt;/ul&gt;
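The health-check rule above can be sketched as follows. Real ELBs also apply configurable healthy/unhealthy thresholds over consecutive probes; the threshold of 2 below is an illustrative assumption:

```python
def probe_passes(status_code):
    """A probe passes only when the health-check route (e.g. GET /health
    on the configured port) returns HTTP 200."""
    return status_code == 200

def instance_state(recent_probes, unhealthy_threshold=2):
    """Mark the instance unhealthy after N consecutive failed probes."""
    failures = 0
    for code in recent_probes:
        failures = 0 if probe_passes(code) else failures + 1
        if failures >= unhealthy_threshold:
            return "unhealthy"
    return "healthy"

print(instance_state([200, 200, 200]))       # healthy
print(instance_state([200, 503, 503, 200]))  # unhealthy: 2 consecutive failures
```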

&lt;h4&gt;
  
  
  Types of Load Balancers on AWS
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS provides 4 types of load balancers:

&lt;ul&gt;
&lt;li&gt;Classic Load Balancer (v1 - old generation): supports HTTP, HTTPS and TCP&lt;/li&gt;
&lt;li&gt;Application Load Balancer (v2 - new generation): supports HTTP, HTTPS and WebSockets&lt;/li&gt;
&lt;li&gt;Network Load Balancer (v2 - new generation): supports TCP, TLS (secure TCP) and UDP&lt;/li&gt;
&lt;li&gt;Gateway Load Balancer (new generation - see VPC section of the notes)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;It is recommended to use the new versions&lt;/li&gt;
&lt;li&gt;We can setup &lt;strong&gt;internal&lt;/strong&gt; (private) and &lt;strong&gt;external&lt;/strong&gt; (public) load balancers on AWS&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Classic Load Balancers (CLB)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;They support 2 types of connections: TCP (layer 4) and HTTP(S) (layer 7)&lt;/li&gt;
&lt;li&gt;Health checks are either TCP or HTTP based&lt;/li&gt;
&lt;li&gt;CLBs provide a fixed hostname: XXX.region.elb.amazonaws.com&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Application Load Balancers (ALB)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;They are a layer 7 type load balancers (only HTTP or HTTPS)&lt;/li&gt;
&lt;li&gt;They allow load balancing to multiple HTTP applications across multiple machines (target groups). Also they allow to load balance to multiple applications on the same EC2 instance (useful in case of containers)&lt;/li&gt;
&lt;li&gt;They have support for HTTP/2 and WebSockets.&lt;/li&gt;
&lt;li&gt;They support redirects, example for HTTP to HTTPS&lt;/li&gt;
&lt;li&gt;They provide routing tables to different target groups:

&lt;ul&gt;
&lt;li&gt;Routing based on path in URL&lt;/li&gt;
&lt;li&gt;Routing based on the hostname&lt;/li&gt;
&lt;li&gt;Routing based on query strings and headers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;ALBs are a great fit for micro-services and container-based applications&lt;/li&gt;
&lt;li&gt;ALBs have port mapping features to redirect to dynamic ports in case of ECS&lt;/li&gt;
&lt;li&gt;Target groups can contain:

&lt;ul&gt;
&lt;li&gt;EC2 instances (can be managed by an Auto Scaling Group)&lt;/li&gt;
&lt;li&gt;ECS tasks (managed by ECS itself)&lt;/li&gt;
&lt;li&gt;Lambda Functions -  HTTP request is translated to a JSON event&lt;/li&gt;
&lt;li&gt;IP Addresses - must be private IP addresses&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;ALBs also provide a fixed hostname (same as CLBs): XXX.region.elb.amazonaws.com&lt;/li&gt;
&lt;li&gt;The application servers behind the LB can not see the IP of the client accessing them directly, but they can retrieve it from the &lt;strong&gt;X-Forwarded-For&lt;/strong&gt; header. The port can be fetched from &lt;strong&gt;X-Forwarded-Port&lt;/strong&gt; and the protocol from &lt;strong&gt;X-Forwarded-Proto&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
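Recovering the original client details from those forwarding headers is plain header parsing; the header names are the standard ones listed above, while the sample values are made up:

```python
def client_info(headers):
    """Extract the original client IP, port and protocol from the
    X-Forwarded-* headers added by the load balancer."""
    # X-Forwarded-For may contain a chain "client, proxy1, proxy2";
    # the left-most entry is the original client.
    xff = headers.get("X-Forwarded-For", "")
    client_ip = xff.split(",")[0].strip() if xff else None
    return {
        "ip": client_ip,
        "port": headers.get("X-Forwarded-Port"),
        "proto": headers.get("X-Forwarded-Proto"),
    }

headers = {
    "X-Forwarded-For": "203.0.113.7, 10.0.1.12",
    "X-Forwarded-Port": "443",
    "X-Forwarded-Proto": "https",
}
print(client_info(headers))  # ip=203.0.113.7, port=443, proto=https
```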

&lt;h4&gt;
  
  
  Network Load Balancers (NLB)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Network load balancers (layer 4) allow to:

&lt;ul&gt;
&lt;li&gt;Forward TCP and UDP traffic to the registered instances&lt;/li&gt;
&lt;li&gt;Handle millions of requests per second&lt;/li&gt;
&lt;li&gt;Lower latency: ~100 ms (vs ~400 ms for an ALB)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;NLBs have &lt;strong&gt;one static IP per AZ&lt;/strong&gt; and support Elastic IPs (can be used when whitelisting is necessary)&lt;/li&gt;
&lt;li&gt;Use case for NLBs: NLBs are used for extreme performance in case of TCP or UDP traffic (example: video games)&lt;/li&gt;
&lt;li&gt;Instances behind an NLB don't see traffic coming from the load balancer, they see traffic as if it were coming from the outside world =&amp;gt; no security group is attached to the LB =&amp;gt; the security group attached to the target EC2 instance should be changed to allow traffic from the outside (example: 0.0.0.0/0, on port 80)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Stickiness
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It is possible to implement stickiness in case of CLB and ALB load balancers&lt;/li&gt;
&lt;li&gt;Stickiness means that the traffic from the same client will be forwarded to the same target instance&lt;/li&gt;
&lt;li&gt;Stickiness works by adding a cookie to the request which has an expiration date for controlling the stickiness period&lt;/li&gt;
&lt;li&gt;Possible use case for stickiness: we have to make sure that the user does not lose their session data&lt;/li&gt;
&lt;li&gt;Enabling stickiness may bring imbalance to the load over the downstream target instances&lt;/li&gt;
&lt;/ul&gt;
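Cookie-based stickiness can be sketched as a toy router: the first request picks a target and issues a cookie, and later requests with a valid (non-expired) cookie land on the same target. The cookie naming and TTL handling here are illustrative, not the actual ELB cookie format:

```python
import random
import time

class StickyRouter:
    """Toy sticky load balancer."""

    def __init__(self, targets, ttl_seconds=60):
        self.targets = targets
        self.ttl = ttl_seconds
        self.sessions = {}  # cookie value -> (target, expiry timestamp)

    def route(self, cookie=None, now=None):
        now = time.time() if now is None else now
        if cookie in self.sessions:
            target, expiry = self.sessions[cookie]
            if now < expiry:
                return target, cookie  # sticky: same target as before
        # No valid cookie: balance normally and issue a new cookie.
        target = random.choice(self.targets)
        cookie = f"session-{len(self.sessions)}"
        self.sessions[cookie] = (target, now + self.ttl)
        return target, cookie

router = StickyRouter(["i-a", "i-b", "i-c"])
target, cookie = router.route(now=0)
# Within the TTL, the same cookie always lands on the same target:
print(router.route(cookie, now=10)[0] == target)  # True
# After expiry, the client is re-balanced and gets a fresh cookie:
print(router.route(cookie, now=120)[1] != cookie)  # True
```

Note the trade-off mentioned above: while a session cookie is valid, its traffic bypasses normal balancing, which is exactly how imbalance can creep in.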

&lt;h4&gt;
  
  
  Cross-Zone Load Balancing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;With Cross-Zone Load Balancing enabled each LB instance distributes traffic evenly across multiple AZs&lt;/li&gt;
&lt;li&gt;Otherwise, each LB node distributes requests evenly only in the AZ where it is registered&lt;/li&gt;
&lt;li&gt;Classic Load Balancer: 

&lt;ul&gt;
&lt;li&gt;Cross-zone load balancing is disabled by default&lt;/li&gt;
&lt;li&gt;No additional charges for cross zone load balancing if the feature is enabled&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Application Load Balancer: 

&lt;ul&gt;
&lt;li&gt;Cross-zone load balancing is always on, can not be disabled&lt;/li&gt;
&lt;li&gt;No charges applied for cross zone load balancing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Network Load Balancer:

&lt;ul&gt;
&lt;li&gt;Cross-zone load balancing is disabled by default&lt;/li&gt;
&lt;li&gt;Additional charges apply if the feature is enabled&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
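The difference the feature makes is easiest to see numerically. This toy calculation assumes one LB node per AZ, each receiving an equal share of client traffic; the instance counts are deliberately imbalanced:

```python
def traffic_share(az_instances, cross_zone):
    """Percentage of total traffic each instance receives.
    az_instances maps AZ name -> number of registered instances."""
    total_instances = sum(az_instances.values())
    node_share = 100 / len(az_instances)  # traffic per LB node (one per AZ)
    shares = {}
    for az, count in az_instances.items():
        for i in range(count):
            if cross_zone:
                # Every node spreads its traffic over all instances in all AZs.
                shares[f"{az}-{i}"] = 100 / total_instances
            else:
                # Each node only targets instances in its own AZ.
                shares[f"{az}-{i}"] = node_share / count
    return shares

# 2 instances in az-a, 8 in az-b:
print(traffic_share({"az-a": 2, "az-b": 8}, cross_zone=False))
# az-a instances get 25% each, az-b instances only 6.25% each
print(traffic_share({"az-a": 2, "az-b": 8}, cross_zone=True))
# with cross-zone, every instance gets an even 10%
```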

&lt;h4&gt;
  
  
  SSL/TLS Certificates
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;An SSL certificate allows traffic to be encrypted between the clients and the load balancers. This is called encryption in transit or in-flight encryption&lt;/li&gt;
&lt;li&gt;SSL - Secure Socket Layer&lt;/li&gt;
&lt;li&gt;TLS (newer version of SSL) - Transport Layer Security&lt;/li&gt;
&lt;li&gt;Nowadays TLS is mainly used, but we still commonly refer to it as SSL&lt;/li&gt;
&lt;li&gt;Public SSL certificates are issued by a Certificate Authority&lt;/li&gt;
&lt;li&gt;SSL certificates have an expiration date and they must be renewed&lt;/li&gt;
&lt;li&gt;SSL termination: client can talk with a LB using HTTPS but internal traffic can be routed to a target using HTTP&lt;/li&gt;
&lt;li&gt;Load balancer can load an X.509 certificate (which is a SSL/TLS server certificate)&lt;/li&gt;
&lt;li&gt;We can manage certificates in AWS using ACM (AWS Certificate Manager)&lt;/li&gt;
&lt;li&gt;HTTPS Listener:

&lt;ul&gt;
&lt;li&gt;We must specify a default certificate&lt;/li&gt;
&lt;li&gt;We can add an optional list of certificates to support multiple domains&lt;/li&gt;
&lt;li&gt;Clients can use SNI (Server Name Indication) to specify which hostname they want to reach&lt;/li&gt;
&lt;li&gt;Ability to specify a security policy to support older versions of SSL/TLS (for legacy clients)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  SNI - Server Name Indication
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;SNI solves the problem of being able to load multiple SSL certificates onto one web server&lt;/li&gt;
&lt;li&gt;It is a newer protocol which requires the client to indicate the hostname of the target server in the initial SSL handshake

&lt;ul&gt;
&lt;li&gt;In case of AWS this only works for ALB, NLB and CloudFront (no CLB!)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
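Certificate selection under SNI reduces to a lookup with a default fallback, matching the HTTPS listener behaviour described above (a default certificate plus an optional list for other domains). The hostnames and certificate names here are made up:

```python
def pick_certificate(sni_hostname, certificates, default_cert):
    """SNI: the client sends the target hostname in the TLS handshake,
    and the server/LB picks the matching certificate, falling back to
    the default certificate when no specific match exists."""
    return certificates.get(sni_hostname, default_cert)

certs = {
    "www.example.com": "cert-example",
    "api.example.org": "cert-api",
}
print(pick_certificate("api.example.org", certs, "cert-default"))  # cert-api
print(pick_certificate("unknown.test", certs, "cert-default"))     # cert-default
```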

&lt;h4&gt;
  
  
  ELB - Connection Draining
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Feature naming:

&lt;ul&gt;
&lt;li&gt;In case of a CLB, it is called Connection Draining&lt;/li&gt;
&lt;li&gt;If we have a target group (ALB, NLB), it is called Deregistration Delay&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Connection draining is the time to complete in-flight requests while the instance is de-registering or unhealthy. Basically it allows the instance to terminate whatever it was doing&lt;/li&gt;
&lt;li&gt;The LB will stop sending new requests to the target instance which is in progress of de-registering&lt;/li&gt;
&lt;li&gt;The time period of the connection draining can be set between 1 second and 3600 seconds&lt;/li&gt;
&lt;li&gt;It also can be disabled (set the period to 0 seconds)&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>AWS EC2</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Sat, 03 Sep 2022 09:27:47 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-ec2-1ch7</link>
      <guid>https://dev.to/aws-builders/aws-ec2-1ch7</guid>
      <description>&lt;ul&gt;
&lt;li&gt;EC2 mainly consists of the following capabilities:

&lt;ul&gt;
&lt;li&gt;Renting virtual machines in the cloud (EC2)&lt;/li&gt;
&lt;li&gt;Storing data on virtual drives (EBS)&lt;/li&gt;
&lt;li&gt;Distributing load across multiple machines (ELB)&lt;/li&gt;
&lt;li&gt;Scaling the services using an auto-scaling group (ASG)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Introduction to Security Groups (SG)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Security Groups are fundamental to network security in AWS&lt;/li&gt;
&lt;li&gt;They control how traffic is allowed into or out of EC2 machines&lt;/li&gt;
&lt;li&gt;Basically they are firewalls&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Security Groups Deep Dive
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Security groups regulate:

&lt;ul&gt;
&lt;li&gt;Access to ports&lt;/li&gt;
&lt;li&gt;Authorized IP ranges - IPv4 and IPv6&lt;/li&gt;
&lt;li&gt;Control of inbound and outbound network traffic&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Security groups can be attached to multiple instances&lt;/li&gt;
&lt;li&gt;They are locked down to a region/VPC combination&lt;/li&gt;
&lt;li&gt;They live outside of the EC2 instances - if traffic is blocked, the EC2 instance won't be able to see it&lt;/li&gt;
&lt;li&gt;&lt;em&gt;It is good to maintain one separate security group for SSH access&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;If the request for the application times out, it is most likely a security group issue&lt;/li&gt;
&lt;li&gt;If for the request the response is a "connection refused" error, then it means that it is an application error and the traffic went through the security group&lt;/li&gt;
&lt;li&gt;By default all inbound traffic is &lt;strong&gt;blocked&lt;/strong&gt; and all outbound traffic is &lt;strong&gt;authorized&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A security group can allow traffic from another security group. A security group can reference another security group, meaning that it is no need to reference the IP of the instance to which the security group is attached&lt;/li&gt;
&lt;/ul&gt;
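The default-deny inbound behaviour can be sketched as a toy rule matcher: a packet is allowed only if some rule explicitly matches it, and there are no deny rules, only allow rules. The rules below are illustrative:

```python
import ipaddress

def inbound_allowed(rules, source_ip, port):
    """Security groups are default-deny for inbound traffic: a packet is
    allowed only when some rule matches its source CIDR and port."""
    src = ipaddress.ip_address(source_ip)
    return any(
        src in ipaddress.ip_network(rule["cidr"]) and port == rule["port"]
        for rule in rules
    )

rules = [
    {"cidr": "0.0.0.0/0", "port": 80},       # HTTP open to the world
    {"cidr": "203.0.113.0/24", "port": 22},  # SSH only from one range
]
print(inbound_allowed(rules, "198.51.100.9", 80))  # True: HTTP rule matches
print(inbound_allowed(rules, "198.51.100.9", 22))  # False: default deny
print(inbound_allowed(rules, "203.0.113.5", 22))   # True: SSH range matches
```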

&lt;h4&gt;
  
  
  Elastic IP
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;When an EC2 instance is stopped and restarted, it may change its public IP address&lt;/li&gt;
&lt;li&gt;In case there is a need for a fixed IP for the instance, Elastic IP is the solution&lt;/li&gt;
&lt;li&gt;An Elastic IP is a public IP the user owns as long as the IP is not deleted by the owner&lt;/li&gt;
&lt;li&gt;With an Elastic IP address, we can mask the failure of an instance by rapidly remapping the address to another instance&lt;/li&gt;
&lt;li&gt;AWS provides a limited number of 5 Elastic IPs (soft limit)&lt;/li&gt;
&lt;li&gt;Overall it is recommended to avoid using Elastic IP, because:

&lt;ul&gt;
&lt;li&gt;They often reflect poor architectural decisions&lt;/li&gt;
&lt;li&gt;Instead, use a random public IP and register a DNS name to it&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EC2 User Data
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It is possible to bootstrap (run commands for setup) an EC2 instance using EC2 User data script&lt;/li&gt;
&lt;li&gt;The user data script is only run once at the first start of the instance&lt;/li&gt;
&lt;li&gt;EC2 user data is used to automate boot tasks such as:

&lt;ul&gt;
&lt;li&gt;Installing updates&lt;/li&gt;
&lt;li&gt;Installing software&lt;/li&gt;
&lt;li&gt;Downloading common files from the internet&lt;/li&gt;
&lt;li&gt;Any other start-up task&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;The EC2 user data scripts run with root user privileges&lt;/li&gt;
&lt;/ul&gt;
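User data is passed to the instance as a script; when calling the EC2 API directly, it must be base64-encoded. A minimal sketch of that round trip (the bootstrap script content itself is just an example):

```python
import base64

# An example bootstrap script: runs once, as root, at first boot.
user_data = """#!/bin/bash
yum update -y               # installing updates
yum install -y httpd        # installing software
systemctl start httpd       # any other start-up task
"""

# The EC2 RunInstances API expects user data base64-encoded:
encoded = base64.b64encode(user_data.encode()).decode()
# ...and it decodes back to the original script on the instance side:
assert base64.b64decode(encoded).decode() == user_data
print(encoded[:20])  # first characters of the encoded payload
```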

&lt;h4&gt;
  
  
  EC2 Instance Launch Types
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;On Demand Instances: short workload, predictable pricing&lt;/li&gt;
&lt;li&gt;Reserved: known amount of time (minimum 1 year). Types of reserved instances:

&lt;ul&gt;
&lt;li&gt;Reserved Instances: recommended long workloads&lt;/li&gt;
&lt;li&gt;Convertible Reserved Instances: recommended for long workloads with flexible instance types&lt;/li&gt;
&lt;li&gt;Scheduled Reserved Instances: instances reserved for a longer period used at a certain schedule &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Spot Instances: for short workloads, they are cheap, but there is a risk of losing the instance while running&lt;/li&gt;
&lt;li&gt;Dedicated Instances: no other customer will share the underlying hardware&lt;/li&gt;
&lt;li&gt;Dedicated Hosts: book an entire physical server, can control the placement of the instance&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EC2 On Demand
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pay for what we use, billing is done per second after the first minute&lt;/li&gt;
&lt;li&gt;Has the highest cost but it does not require upfront payment&lt;/li&gt;
&lt;li&gt;Recommended for short-term and uninterrupted workloads, when we can't predict how the application will behave&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EC2 Reserved Instances
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Up to 75% discount compared to On-demand&lt;/li&gt;
&lt;li&gt;Pay upfront for a given time, implies long term commitment&lt;/li&gt;
&lt;li&gt;Reserved period can be 1 or 3 years&lt;/li&gt;
&lt;li&gt;We can reserve a specific instance type&lt;/li&gt;
&lt;li&gt;Recommended for steady state usage applications (example: database)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convertible Reserved Instances&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The instance type can be changed&lt;/li&gt;
&lt;li&gt;Up to 54% discount&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduled Reserved Instances&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The instance can be launched within a time window&lt;/li&gt;
&lt;li&gt;It is recommended when is required for an instance to run at certain times of the day/week/month&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EC2 Spot Instances
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We can get up to 90% discount compared to on-demand instances&lt;/li&gt;
&lt;li&gt;It is recommended for workloads which are resilient to failure, since the instance can be stopped by AWS if our max price is less than the current spot price&lt;/li&gt;
&lt;li&gt;Not recommended for critical jobs or databases&lt;/li&gt;
&lt;li&gt;Great combination: reserved instances for baseline performance + on-demand and spot instances for peak times&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EC2 Dedicated Hosts
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Physical dedicated EC2 server&lt;/li&gt;
&lt;li&gt;Provides full control of the EC2 instance placement&lt;/li&gt;
&lt;li&gt;It provides visibility to the underlying sockets/physical cores of the hardware&lt;/li&gt;
&lt;li&gt;It requires a 3-year period reservation&lt;/li&gt;
&lt;li&gt;Useful for software that has a complicated licensing model or for companies with strong regulatory compliance needs&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EC2 Dedicated Instances
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Instances running on hardware that is dedicated to a single account&lt;/li&gt;
&lt;li&gt;Instances may share hardware with other instances from the same account&lt;/li&gt;
&lt;li&gt;No control over instance placement&lt;/li&gt;
&lt;li&gt;Gives per instance billing&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EC2 Spot Instances - Deep Dive
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;With a spot instance we can get a discount up to 90%&lt;/li&gt;
&lt;li&gt;We define a max spot price and get the instance if the current spot price &amp;lt; max spot price&lt;/li&gt;
&lt;li&gt;The hourly spot price varies based on offer and capacity&lt;/li&gt;
&lt;li&gt;If the current spot price goes over our max spot price, the instance is reclaimed: depending on our choice it is stopped or terminated, with a 2-minute grace period&lt;/li&gt;
&lt;li&gt;Spot Block: block a spot instance during a specified time frame (1 to 6 hours) without interruptions. In rare situations an instance may be reclaimed&lt;/li&gt;
&lt;li&gt;Spot request - with a spot request we define:

&lt;ul&gt;
&lt;li&gt;Maximum price&lt;/li&gt;
&lt;li&gt;Desired number of instances&lt;/li&gt;
&lt;li&gt;Launch specifications&lt;/li&gt;
&lt;li&gt;Request type:

&lt;ul&gt;
&lt;li&gt;One-time request: as soon as the spot request is fulfilled, the instances are launched and the request goes away&lt;/li&gt;
&lt;li&gt;Persistent request: we want the desired number of instances to remain valid as long as the spot request is active. If the spot instances are reclaimed, the spot request will try to relaunch them as soon as the price goes down&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Cancel a spot request: we can only cancel a spot request if it is in the open, active or disabled state (not failed, canceled or closed)&lt;/li&gt;
&lt;li&gt;Canceling a spot request does not terminate the instances it launched. To terminate a spot instance for good, first cancel the spot request and then terminate the associated instances; otherwise the spot request may relaunch them&lt;/li&gt;
&lt;/ul&gt;
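&lt;p&gt;The max-price rule above can be sketched as a toy model (plain Python, no AWS API; the names are made up for illustration):&lt;/p&gt;

```python
def spot_decision(current_spot_price, max_price, on_interruption="terminate"):
    """Toy model of the spot pricing rule: we keep the instance only while
    the current spot price stays at or below our max price."""
    if current_spot_price <= max_price:
        return "running"   # request fulfilled; we pay the current spot price
    # price went over our max: after a 2-minute grace period the instance
    # is stopped or terminated, depending on what we chose
    return on_interruption

print(spot_decision(0.032, 0.05))                          # prints "running"
print(spot_decision(0.081, 0.05, on_interruption="stop"))  # prints "stop"
```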

&lt;h4&gt;
  
  
  Spot Fleet
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Spot Fleet is a set of spot instances and optional on-demand instances&lt;/li&gt;
&lt;li&gt;The spot fleet will try to meet the target capacity with price constraints&lt;/li&gt;
&lt;li&gt;AWS will launch instances from launch pools; for each launch pool we have to define the instance type, OS and AZ&lt;/li&gt;
&lt;li&gt;We can define multiple launch pools, from which the most suitable one is chosen&lt;/li&gt;
&lt;li&gt;If the spot fleet reaches its target capacity or max cost, no more new instances are launched&lt;/li&gt;
&lt;li&gt;Strategies to allocate spot instances in a spot fleet:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;lowestPrice&lt;/strong&gt;: the instances will be launched from the pool with the lowest price&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;diversified&lt;/strong&gt;: launched instances will be distributed from all the defined pools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;capacityOptimized&lt;/strong&gt;: the instances will be launched from the pool with the optimal capacity for the requested number of instances&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
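&lt;p&gt;The allocation strategies can be illustrated with a toy sketch (the pool names, prices and capacities below are entirely hypothetical):&lt;/p&gt;

```python
# Hypothetical launch pools: (pool name, hourly spot price, spare capacity)
pools = [("c5.large/us-east-1a", 0.035, 120),
         ("m5.large/us-east-1b", 0.041, 300),
         ("r5.large/us-east-1c", 0.049, 80)]

def lowest_price(pools):
    """lowestPrice: always pick the cheapest pool."""
    return min(pools, key=lambda p: p[1])[0]

def capacity_optimized(pools):
    """capacityOptimized: pick the pool with the most spare capacity."""
    return max(pools, key=lambda p: p[2])[0]

print(lowest_price(pools))        # prints "c5.large/us-east-1a"
print(capacity_optimized(pools))  # prints "m5.large/us-east-1b"
```

&lt;p&gt;The diversified strategy would instead spread the launched instances across all three pools.&lt;/p&gt;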

&lt;h4&gt;
  
  
  EC2 Instance Types
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;R: applications that need a lot of RAM - in-memory caches&lt;/li&gt;
&lt;li&gt;C: applications that need good CPU - compute/databases&lt;/li&gt;
&lt;li&gt;M: balanced applications - general purpose / web apps&lt;/li&gt;
&lt;li&gt;I: applications that need good local I/O - databases&lt;/li&gt;
&lt;li&gt;G: applications that need a GPU - video rendering / ML&lt;/li&gt;
&lt;li&gt;T2/T3: burstable instances&lt;/li&gt;
&lt;li&gt;T2/T3 Unlimited: unlimited burst&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Burstable Instances (T2/T3)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Overall the performance of the instance is OK&lt;/li&gt;
&lt;li&gt;When the machine needs to process something unexpected (a spike in load), it can burst and the CPU becomes very performant&lt;/li&gt;
&lt;li&gt;While the machine bursts, it consumes "burst credits"&lt;/li&gt;
&lt;li&gt;If all the credits are spent, the CPU falls back to its weak baseline performance&lt;/li&gt;
&lt;li&gt;When the machine stops bursting, credits accumulate over time&lt;/li&gt;
&lt;li&gt;The credit usage / credit balance of a burstable instance can be seen in CloudWatch&lt;/li&gt;
&lt;li&gt;CPU credits: the bigger the instance, the faster credits are earned&lt;/li&gt;
&lt;li&gt;T2/T3 Unlimited: extra money is paid in case the burst credits are exhausted, so there won't be any performance loss&lt;/li&gt;
&lt;/ul&gt;
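&lt;p&gt;The earn/spend cycle can be sketched as a toy simulation (the rates and the baseline below are made-up numbers, not real T2/T3 parameters):&lt;/p&gt;

```python
def simulate_credits(balance, earn_rate, loads, baseline=0.2):
    """Toy CPU-credit model for a burstable (T2/T3) instance.
    Each step: load above the baseline spends credits (never below zero),
    load at or under the baseline earns them. Illustrative numbers only."""
    history = []
    for load in loads:
        if load > baseline:
            balance = max(0.0, balance - (load - baseline))  # burst: spend credits
        else:
            balance += earn_rate                             # quiet: accrue credits
        history.append(round(balance, 2))
    return history

# a quiet step, a two-step spike that drains the balance, then quiet again
print(simulate_credits(1.0, 0.1, [0.1, 0.9, 0.9, 0.1]))  # [1.1, 0.4, 0.0, 0.1]
```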

&lt;h4&gt;
  
  
  AMI
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS comes with lots of base images&lt;/li&gt;
&lt;li&gt;Images can be customized at runtime with EC2 User Data&lt;/li&gt;
&lt;li&gt;For more granular customization, AWS allows creating our own images - these are called AMIs&lt;/li&gt;
&lt;li&gt;Advantages of a custom AMI:

&lt;ul&gt;
&lt;li&gt;Pre-install packages&lt;/li&gt;
&lt;li&gt;Faster boot time (no need for the instance to execute the scripts from the user data)&lt;/li&gt;
&lt;li&gt;Machine configured with monitoring/enterprise software&lt;/li&gt;
&lt;li&gt;Security concerns - control over the machines in the network&lt;/li&gt;
&lt;li&gt;Control over maintenance&lt;/li&gt;
&lt;li&gt;Active Directory out of the box&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;An AMI is built for a specific region (NOT GLOBAL!)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Public AMI
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We can leverage AMIs from other people&lt;/li&gt;
&lt;li&gt;We can also pay for other people's AMIs by the hour, basically renting the AMI from the AWS Marketplace&lt;/li&gt;
&lt;li&gt;Warning: do not use AMI which is not trustworthy!&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  AMI Storage
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AMIs take up space and are stored in S3&lt;/li&gt;
&lt;li&gt;By default AMIs are private and locked to our account/region&lt;/li&gt;
&lt;li&gt;We can make our AMIs public and share them with other people or sell them on the Marketplace&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cross Account AMI Sharing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It is possible to share an AMI with another AWS account&lt;/li&gt;
&lt;li&gt;Sharing an AMI does not affect the ownership of the AMI&lt;/li&gt;
&lt;li&gt;If a shared AMI is copied, then the account that made the copy becomes the owner of the copy&lt;/li&gt;
&lt;li&gt;To copy an AMI that was shared from another account, the owner of the source AMI must grant read permissions for the storage that backs the AMI, either the associated EBS snapshot or an associated S3 bucket&lt;/li&gt;
&lt;li&gt;Limits:

&lt;ul&gt;
&lt;li&gt;An encrypted AMI can not be copied directly. Instead, if the underlying snapshot and encryption key were shared with us, we can copy the snapshot while re-encrypting it with a key of our own. The copied snapshot can then be registered as a new AMI&lt;/li&gt;
&lt;li&gt;We can't copy an AMI with an associated &lt;strong&gt;billingProduct&lt;/strong&gt; code that was shared with us from another account. This includes Windows AMIs and AMIs from the AWS Marketplace. To copy such a shared AMI, we have to launch an EC2 instance in our account using the shared AMI and then create a new AMI from that instance&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Placement Groups
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Sometimes we want to control how the EC2 instances are placed in the AWS infrastructure&lt;/li&gt;
&lt;li&gt;When we create a placement group, we can specify one of the following placement strategies:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster&lt;/strong&gt; - cluster instances into a low-latency group in a single AZ&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spread&lt;/strong&gt; - spread instances across underlying hardware (max 7 instances per group per AZ)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partition&lt;/strong&gt; - spread instances across many different partitions (which rely on different sets of racks) within an AZ. Scale to 100s of EC2 instances per group (Hadoop, Cassandra, Kafka)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Placement Groups - Cluster
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pros: Great network (10Gbps bandwidth between instances)&lt;/li&gt;
&lt;li&gt;Cons: if the rack fails, all instances fail at the same time&lt;/li&gt;
&lt;li&gt;Use cases:

&lt;ul&gt;
&lt;li&gt;Big data job that needs to complete fast&lt;/li&gt;
&lt;li&gt;Application that needs extremely low latency and high network throughput&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Placement Groups - Spread
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pros:

&lt;ul&gt;
&lt;li&gt;Can span across multiple AZs&lt;/li&gt;
&lt;li&gt;Reduces risk for simultaneous failure&lt;/li&gt;
&lt;li&gt;EC2 instances are on different hardware&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Cons:

&lt;ul&gt;
&lt;li&gt;Limited to 7 instances per AZ per placement group&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Use case:

&lt;ul&gt;
&lt;li&gt;Application that needs to maximize high availability&lt;/li&gt;
&lt;li&gt;Critical applications where each instance must be isolated from failure&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
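&lt;p&gt;The 7-instances-per-AZ limit for spread groups can be expressed as a tiny check (a toy sketch, not an AWS API; cluster and partition groups have different limits not modeled here):&lt;/p&gt;

```python
def can_add_instance(instances_in_az, strategy):
    """Toy capacity check: a spread placement group allows at most
    7 running instances per AZ; other strategies pass through here."""
    if strategy == "spread":
        return instances_in_az < 7
    return True

print(can_add_instance(6, "spread"))  # prints True
print(can_add_instance(7, "spread"))  # prints False
```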

&lt;h4&gt;
  
  
  Placement Groups - Partitions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pros:

&lt;ul&gt;
&lt;li&gt;Up to 7 partitions per AZ&lt;/li&gt;
&lt;li&gt;Can have hundreds of EC2 instances per AZ&lt;/li&gt;
&lt;li&gt;The instances in a partition do not share racks with the instances from other partitions&lt;/li&gt;
&lt;li&gt;A partition failure can affect many instances, but it won't affect other partitions&lt;/li&gt;
&lt;li&gt;Instances get access to the partition information as metadata&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Use cases: HDFS, HBase, Cassandra, Kafka&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Elastic Network Interfaces - ENI
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Logical component in a VPC that represents a virtual network card&lt;/li&gt;
&lt;li&gt;An ENI can have the following attributes:

&lt;ul&gt;
&lt;li&gt;Primary private IPv4 address, one or more secondary IPv4 addresses&lt;/li&gt;
&lt;li&gt;One Elastic IP (IPv4) per private IPv4&lt;/li&gt;
&lt;li&gt;One Public IPv4&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;ENIs can be created independently of EC2 instances&lt;/li&gt;
&lt;li&gt;We can attach them on the fly to an EC2 instance or move them from one instance to another (useful for failover)&lt;/li&gt;
&lt;li&gt;ENIs are bound to a specific availability zone&lt;/li&gt;
&lt;li&gt;ENIs can have security groups attached to them&lt;/li&gt;
&lt;li&gt;EC2 instances usually have a primary ENI (eth0). In case we attach a secondary ENI, an eth1 interface will be available. The primary ENI can not be detached.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  EC2 Hibernate
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We can stop or terminate EC2 instances:

&lt;ul&gt;
&lt;li&gt;If an instance is stopped: the data on the disk (EBS) is kept intact&lt;/li&gt;
&lt;li&gt;If an instance is terminated: the root EBS volume also gets destroyed&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;When an EC2 instance starts, the following happens:

&lt;ul&gt;
&lt;li&gt;First start: the OS boots and the EC2 User Data script is executed&lt;/li&gt;
&lt;li&gt;Following starts: the OS boots&lt;/li&gt;
&lt;li&gt;After the OS boot the applications start, cache gets warmed up, etc. which may take some time&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;EC2 Hibernate:

&lt;ul&gt;
&lt;li&gt;All the data from RAM is preserved on shut-down&lt;/li&gt;
&lt;li&gt;The instance boot is faster&lt;/li&gt;
&lt;li&gt;Under the hood: the RAM state is written to a file in the root EBS volume&lt;/li&gt;
&lt;li&gt;The root EBS volume must be encrypted&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Supported instance types for hibernate: C3, C4, C5, M3, M4, M5, R3, R4, R5&lt;/li&gt;
&lt;li&gt;Supported OS types: Amazon Linux 1 and 2, Windows&lt;/li&gt;
&lt;li&gt;Instance RAM size: must be less than 150 GB&lt;/li&gt;
&lt;li&gt;Bare metal instances do not support hibernate&lt;/li&gt;
&lt;li&gt;Root volume: must be an encrypted EBS volume (not instance store), large enough to hold the RAM dump&lt;/li&gt;
&lt;li&gt;Hibernate is available for on-demand and reserved instances&lt;/li&gt;
&lt;li&gt;An instance can not hibernate for more than 60 days&lt;/li&gt;
&lt;/ul&gt;
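&lt;p&gt;The hibernation prerequisites listed above can be collected into a small validation sketch (a toy check in plain Python, not an AWS API call; the parameter names are made up for illustration):&lt;/p&gt;

```python
def can_hibernate(ram_gb, root_volume, root_encrypted, root_size_gb, bare_metal):
    """Toy validation of the EC2 Hibernate prerequisites."""
    if bare_metal:
        return False                  # bare metal instances do not support hibernate
    if ram_gb >= 150:
        return False                  # RAM must be less than 150 GB
    if root_volume != "ebs" or not root_encrypted:
        return False                  # root must be an encrypted EBS volume
    return root_size_gb > ram_gb      # volume must be large enough for the RAM dump

print(can_hibernate(64, "ebs", True, 100, False))             # prints True
print(can_hibernate(64, "instance-store", True, 100, False))  # prints False
```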

&lt;h4&gt;
  
  
  EC2 for Solution Architects
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EC2 instances are billed by the second, t2.micro is free tier&lt;/li&gt;
&lt;li&gt;On Linux/Mac we can use SSH, on Windows Putty or SSH&lt;/li&gt;
&lt;li&gt;SSH is using port 22, the security group must allow our IP to be able to connect&lt;/li&gt;
&lt;li&gt;In case of a timeout, it is most likely a security group issue&lt;/li&gt;
&lt;li&gt;Permission for SSH key =&amp;gt; chmod 0400&lt;/li&gt;
&lt;li&gt;Security groups can reference other security groups instead of IP addresses&lt;/li&gt;
&lt;li&gt;EC2 instance can be customized at boot using EC2 User Data&lt;/li&gt;
&lt;li&gt;4 EC2 launch modes:

&lt;ul&gt;
&lt;li&gt;On-demand&lt;/li&gt;
&lt;li&gt;Reserved&lt;/li&gt;
&lt;li&gt;Spot&lt;/li&gt;
&lt;li&gt;Dedicated hosts&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;We can create AMIs to pre-install software&lt;/li&gt;
&lt;li&gt;An AMI can be copied across accounts and regions&lt;/li&gt;
&lt;li&gt;EC2 instances can be started in placement groups:

&lt;ul&gt;
&lt;li&gt;Cluster&lt;/li&gt;
&lt;li&gt;Spread&lt;/li&gt;
&lt;li&gt;Partition&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ec2</category>
      <category>aws</category>
      <category>awsec2</category>
    </item>
    <item>
      <title>Git basic</title>
      <dc:creator>Van Hoang Kha</dc:creator>
      <pubDate>Tue, 30 Aug 2022 08:56:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/git-basic-486j</link>
      <guid>https://dev.to/aws-builders/git-basic-486j</guid>
      <description>&lt;h1&gt;
  
  
  Git and Git Flow 
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sNvqLPDP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy653ag6mze5qljqm657.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sNvqLPDP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy653ag6mze5qljqm657.png" alt="Image description" width="880" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Git Cheat Sheet
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Index
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Set Up&lt;/li&gt;
&lt;li&gt;Configuration Files&lt;/li&gt;
&lt;li&gt;Create&lt;/li&gt;
&lt;li&gt;Local Changes&lt;/li&gt;
&lt;li&gt;Search&lt;/li&gt;
&lt;li&gt;Commit History&lt;/li&gt;
&lt;li&gt;Branches &amp;amp; Tags&lt;/li&gt;
&lt;li&gt;Update &amp;amp; Publish&lt;/li&gt;
&lt;li&gt;Merge &amp;amp; Rebase&lt;/li&gt;
&lt;li&gt;Undo&lt;/li&gt;
&lt;li&gt;Git Flow&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;
&lt;h5&gt;
  
  
  Show current configuration:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show repository configuration:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --local --list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show global configuration:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --global --list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show system configuration:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --system --list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Set a name that is identifiable for credit when reviewing version history:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --global user.name “[firstname lastname]”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Set an email address that will be associated with each history marker:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --global user.email “[valid-email]”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Set automatic command line coloring for Git for easy reviewing:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --global color.ui auto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Set global editor for commit
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git config --global core.editor vi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Configuration Files
&lt;/h2&gt;
&lt;h5&gt;
  
  
  Repository specific configuration file [--local]:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;repo&amp;gt;/.git/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  User-specific configuration file [--global]:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/.gitconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  System-wide configuration file [--system]:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/gitconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Create
&lt;/h2&gt;
&lt;h5&gt;
  
  
  Clone an existing repository:
&lt;/h5&gt;

&lt;p&gt;There are two ways:&lt;/p&gt;

&lt;p&gt;Via SSH&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone ssh://user@domain.com/repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Via HTTP&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone http://domain.com/user/repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Create a new local repository in the current directory:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Create a new local repository in a specific directory:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git init &amp;lt;directory&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  Local Changes
&lt;/h2&gt;
&lt;h5&gt;
  
  
  Changes in working directory:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Changes to tracked files:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git diff
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  See changes/difference of a specific file:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git diff &amp;lt;file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Add all current changes to the next commit:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Add some changes in &amp;lt;file&amp;gt; to the next commit:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add -p &amp;lt;file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Add only the mentioned files to the next commit:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add &amp;lt;filename1&amp;gt; &amp;lt;filename2&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Commit all local changes in tracked files:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git commit -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Commit previously staged changes:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Commit with message:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git commit -m 'message here'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Commit skipping the staging area and adding message:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git commit -am 'message here'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Commit to some previous date:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git commit --date="`date --date='n day ago'`" -am "&amp;lt;Commit Message Here&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Change last commit:&lt;br&gt;
&lt;/h5&gt;

&lt;p&gt;&lt;em&gt;Don't amend published commits!&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git commit -a --amend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Amend with last commit but use the previous commit log message
&lt;/h5&gt;

&lt;p&gt;&lt;em&gt;Don't amend published commits!&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git commit &lt;span class="nt"&gt;--amend&lt;/span&gt; &lt;span class="nt"&gt;--no-edit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Change committer date of last commit:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GIT_COMMITTER_DATE="date" git commit --amend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Change Author date of last commit:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git commit &lt;span class="nt"&gt;--amend&lt;/span&gt; &lt;span class="nt"&gt;--date&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"date"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Move uncommitted changes from current branch to some other branch:&lt;br&gt;
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash
$ git checkout branch2
$ git stash pop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Restore stashed changes back to current branch:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git stash apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Restore particular stash back to current branch:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;{stash_number}&lt;/em&gt; can be obtained from &lt;code&gt;git stash list&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git stash apply stash@&lt;span class="o"&gt;{&lt;/span&gt;stash_number&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Remove the last set of stashed changes:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git stash drop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  Search
&lt;/h2&gt;
&lt;h5&gt;
  
  
  A text search on all files in the directory:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git grep "Hello"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Text search in a specific version:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git grep "Hello" v2.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show commits that introduced a specific keyword
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log -S 'keyword'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show commits that introduced a specific keyword (using a regular expression)
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log -S 'keyword' --pickaxe-regex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Commit History
&lt;/h2&gt;
&lt;h5&gt;
  
  
  Show all commits, starting with newest (it'll show the hash, author information, date of commit and title of the commit):
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show all commits (it'll show just the commit hash and the commit message):
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log --oneline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show all commits of a specific user:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log --author="username"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show changes over time for a specific file:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log -p &amp;lt;file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Display commits that are present only in the remote branch (shown on the right side):
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log --oneline &amp;lt;origin/master&amp;gt;..&amp;lt;remote/master&amp;gt; --left-right
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Who changed, what and when in &amp;lt;file&amp;gt;:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git blame &amp;lt;file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show Reference log:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git reflog show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Delete Reference log:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git reflog delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Move / Rename
&lt;/h2&gt;
&lt;h5&gt;
  
  
  Rename a file:
&lt;/h5&gt;

&lt;p&gt;Rename Index.txt to Index.html&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git mv Index.txt Index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  Branches &amp;amp; Tags
&lt;/h2&gt;
&lt;h5&gt;
  
  
  List all local branches:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git branch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  List local/remote branches
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git branch -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  List all remote branches:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git branch -r
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Switch HEAD branch:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git checkout &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Checkout single file from different branch
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git checkout &amp;lt;branch&amp;gt; -- &amp;lt;filename&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Create and switch new branch:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git checkout -b &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Switch to the previous branch, without saying the name explicitly:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git checkout -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Create a new branch from an existing branch and switch to the new branch:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git checkout -b &amp;lt;new_branch&amp;gt; &amp;lt;existing_branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Checkout and create a new branch from existing commit
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git checkout &amp;lt;commit-hash&amp;gt; -b &amp;lt;new_branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Create a new branch based on your current HEAD:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git branch &amp;lt;new-branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Create a new tracking branch based on a remote branch:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git branch --track &amp;lt;new-branch&amp;gt; &amp;lt;remote-branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Delete a local branch:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git branch -d &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Rename current branch to new branch name
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git branch &lt;span class="nt"&gt;-m&lt;/span&gt; &amp;lt;new_branch_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Force delete a local branch:
&lt;/h5&gt;

&lt;p&gt;&lt;em&gt;You will lose unmerged changes!&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git branch -D &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Mark &lt;code&gt;HEAD&lt;/code&gt; with a tag:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git tag &amp;lt;tag-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Mark &lt;code&gt;HEAD&lt;/code&gt; with a tag and open the editor to include a message:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git tag -a &amp;lt;tag-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Mark &lt;code&gt;HEAD&lt;/code&gt; with a tag that includes a message:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git tag &amp;lt;tag-name&amp;gt; -am 'message here'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  List all tags:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  List all tags with their messages (tag message or commit message if tag has no message):
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git tag -n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
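&lt;p&gt;A short sketch of the difference (throwaway repository; the tag names &lt;code&gt;v0.1&lt;/code&gt;/&lt;code&gt;v0.2&lt;/code&gt; are illustrative): a lightweight tag carries no message of its own, so &lt;code&gt;git tag -n&lt;/code&gt; falls back to the commit message, while an annotated tag shows its tag message:&lt;/p&gt;

```shell
# Sketch: lightweight vs annotated tags as listed by `git tag -n`.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "first commit"
git tag v0.1                            # lightweight: no tag message
git tag -a v0.2 -m "annotated release"  # annotated: carries its own message
git tag -n                              # v0.1 shows "first commit",
                                        # v0.2 shows "annotated release"
```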





&lt;h2&gt;
  
  
  Update &amp;amp; Publish
&lt;/h2&gt;
&lt;h5&gt;
  
  
  List all current configured remotes:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Show information about a remote:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote show &amp;lt;remote&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Add a new remote repository, named &amp;lt;remote&amp;gt;:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote add &amp;lt;remote&amp;gt; &amp;lt;url&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Rename a remote repository, from &amp;lt;remote&amp;gt; to &amp;lt;new_remote&amp;gt;:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rename &amp;lt;remote&amp;gt; &amp;lt;new_remote&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Remove a remote:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote rm &amp;lt;remote&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;em&gt;Note: &lt;code&gt;git remote rm&lt;/code&gt; does not delete the remote repository from the server. It simply removes the remote and its references from your local repository.&lt;/em&gt;&lt;/p&gt;
&lt;h5&gt;
  
  
  Download all changes from &amp;lt;remote&amp;gt;, but don't integrate into HEAD:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git fetch &amp;lt;remote&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Download changes and directly merge/integrate into HEAD:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git remote pull &amp;lt;remote&amp;gt; &amp;lt;url&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Pull all changes from the remote's master branch into your local repository:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git pull origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Download changes and rebase your local commits on top of them, instead of merging:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git pull --rebase &amp;lt;remote&amp;gt; &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Publish local changes on a remote:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git push remote &amp;lt;remote&amp;gt; &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Delete a branch on the remote:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git push &amp;lt;remote&amp;gt; :&amp;lt;branch&amp;gt; (since Git v1.5.0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;OR&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git push &amp;lt;remote&amp;gt; --delete &amp;lt;branch&amp;gt; (since Git v1.7.0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
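&lt;p&gt;For illustration, the deletion can be exercised against a local bare repository standing in for a real remote (the paths and the branch name &lt;code&gt;obsolete&lt;/code&gt; are assumptions for the sketch):&lt;/p&gt;

```shell
# Sketch: delete a remote branch, using a local bare repo as the "remote".
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"         # stand-in for a hosted remote
git init -q "$work/clone"
cd "$work/clone"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit"
git remote add origin "$work/origin.git"
git push -q origin HEAD:refs/heads/obsolete   # publish a branch...
git push -q origin --delete obsolete          # ...then delete it remotely
git ls-remote --heads origin                  # prints nothing: branch is gone
```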



&lt;h5&gt;
  
  
  Publish your tags:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git push --tags
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h4&gt;
  
  
  Configure meld as the global merge tool:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; merge.tool meld
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Use your configured merge tool to solve conflicts:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git mergetool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Merge &amp;amp; Rebase
&lt;/h2&gt;
&lt;h5&gt;
  
  
  Merge branch into your current HEAD:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git merge &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  List merged branches
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git branch --merged
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Rebase your current HEAD onto &amp;lt;branch&amp;gt;:
&lt;/h5&gt;

&lt;p&gt;&lt;em&gt;Don't rebase published commits!&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rebase &amp;lt;branch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Abort a rebase:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rebase --abort
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Continue a rebase after resolving conflicts:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rebase --continue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Use your editor to manually resolve conflicts and (after resolving) mark the file as resolved, or remove it if the resolution deletes the file:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add &amp;lt;resolved-file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rm &amp;lt;resolved-file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Squashing commits:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rebase -i &amp;lt;commit-just-before-first&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now replace this,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pick &amp;lt;commit_id&amp;gt;
pick &amp;lt;commit_id2&amp;gt;
pick &amp;lt;commit_id3&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to this,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pick &amp;lt;commit_id&amp;gt;
squash &amp;lt;commit_id2&amp;gt;
squash &amp;lt;commit_id3&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
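&lt;p&gt;A non-interactive sketch of the same squash (throwaway repository; &lt;code&gt;sed&lt;/code&gt; stands in for the editor you would use interactively, and &lt;code&gt;-i&lt;/code&gt; assumes GNU sed):&lt;/p&gt;

```shell
# Sketch: squash the last two commits into one by rewriting the rebase
# todo list, turning line 2's 'pick' into 'squash'.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
for n in 1 2 3; do git commit -q --allow-empty -m "commit $n"; done
before=$(git rev-list --count HEAD)            # 3 commits
GIT_SEQUENCE_EDITOR="sed -i -e '2s/^pick/squash/'" \
  GIT_EDITOR=true git rebase -q -i HEAD~2      # squash commit 3 into commit 2
after=$(git rev-list --count HEAD)             # 2 commits
echo "$before -> $after"                       # prints: 3 -> 2
```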





&lt;h2&gt;
  
  
  Undo
&lt;/h2&gt;
&lt;h5&gt;
  
  
  Discard all local changes in your working directory:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git reset --hard HEAD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
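&lt;p&gt;A minimal sketch of the effect (throwaway repository; the file name is illustrative). Note that only tracked files are affected:&lt;/p&gt;

```shell
# Sketch: `git reset --hard HEAD` discards uncommitted changes to
# tracked files (untracked files are left alone).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo clean > file.txt
git add file.txt
git commit -qm "commit file"
echo dirty >> file.txt            # an uncommitted change we regret
git reset -q --hard HEAD          # throw it away
cat file.txt                      # prints: clean
```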

&lt;h5&gt;
  
  
  Get all the files out of the staging area (i.e. undo the last &lt;code&gt;git add&lt;/code&gt;):
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git reset HEAD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Discard local changes in a specific file:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git checkout HEAD &amp;lt;file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Revert a commit (by producing a new commit with contrary changes):
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git revert &amp;lt;commit&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
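&lt;p&gt;A sketch showing that revert is non-destructive: the original commit stays in history and a new "contrary" commit is added (throwaway repository; the file name is an illustrative assumption):&lt;/p&gt;

```shell
# Sketch: `git revert` undoes a commit by adding a new commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > file.txt
git add file.txt
git commit -qm "add file"
echo two >> file.txt
git commit -qam "append two"
git revert --no-edit HEAD         # undo "append two" with a new commit
cat file.txt                      # prints: one
git rev-list --count HEAD         # prints: 3 (add, append, revert)
```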

&lt;h5&gt;
  
  
  Reset your HEAD pointer to a previous commit and discard all changes since then:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git reset --hard &amp;lt;commit&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Reset your HEAD pointer to a remote branch's current state:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git reset --hard &amp;lt;remote/branch&amp;gt; e.g., upstream/master, origin/my-feature
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Reset your HEAD pointer to a previous commit and preserve all changes as unstaged changes:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git reset &amp;lt;commit&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Reset your HEAD pointer to a previous commit and preserve uncommitted local changes:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git reset --keep &amp;lt;commit&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Remove files that were accidentally committed before they were added to .gitignore
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git rm -r --cached .
$ git add .
$ git commit -m "remove xyz file"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
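&lt;p&gt;A sketch of the sequence (throwaway repository; the &lt;code&gt;.env&lt;/code&gt; file name is an illustrative assumption): &lt;code&gt;--cached&lt;/code&gt; removes the file from the index only, so it stays on disk but leaves version control:&lt;/p&gt;

```shell
# Sketch: untrack a file committed before it was added to .gitignore,
# keeping the file itself on disk.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo secret > .env                 # committed by accident
git add .env
git commit -qm "accidentally commit .env"
echo ".env" > .gitignore
git rm -q --cached .env            # remove from the index only
git add .gitignore
git commit -qm "stop tracking .env"
git ls-files                       # prints: .gitignore  (.env untracked)
test -f .env                       # still present on disk
```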




&lt;h2&gt;
  
  
  Git-Flow
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Index
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Setup&lt;/li&gt;
&lt;li&gt;Getting Started&lt;/li&gt;
&lt;li&gt;Features&lt;/li&gt;
&lt;li&gt;Make a Release&lt;/li&gt;
&lt;li&gt;Hotfixes&lt;/li&gt;
&lt;li&gt;Commands&lt;/li&gt;
&lt;/ul&gt;



&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;
&lt;h6&gt;
  
  
  You need a working git installation as a prerequisite. git-flow works on macOS, Linux, and Windows.
&lt;/h6&gt;
&lt;h5&gt;
  
  
  OSX Homebrew:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ brew install git-flow-avh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  OSX Macports:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ port install git-flow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Linux (Debian-based):
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt-get install git-flow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Windows (Cygwin):
&lt;/h5&gt;
&lt;h6&gt;
  
  
  You need wget and util-linux to install git-flow.
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-O&lt;/span&gt; - &lt;span class="nt"&gt;--no-check-certificate&lt;/span&gt; https://raw.githubusercontent.com/petervanderdoes/gitflow/develop/contrib/gitflow-installer.sh &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;state&amp;gt; | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;
&lt;h6&gt;
  
  
  Git flow needs to be initialized in order to customize your project setup. Start using git-flow by initializing it inside an existing git repository:
&lt;/h6&gt;
&lt;h5&gt;
  
  
  Initialize:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  You'll have to answer a few questions regarding the naming conventions for your branches. It's recommended to use the default values.
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git flow init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;OR&lt;/p&gt;
&lt;h6&gt;
  
  
  To use the defaults:
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git flow init &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;
&lt;h6&gt;
  
  
  Develop new features for upcoming releases. Feature branches typically exist in developer repositories only.
&lt;/h6&gt;
&lt;h5&gt;
  
  
  Start a new feature:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  This action creates a new feature branch based on 'develop' and switches to it.
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow feature start MYFEATURE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Finish up a feature:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  Finish the development of a feature. This action performs the following:
&lt;/h6&gt;
&lt;h6&gt;
  
  
  1) Merges MYFEATURE into 'develop'.
&lt;/h6&gt;
&lt;h6&gt;
  
  
  2) Removes the feature branch.
&lt;/h6&gt;
&lt;h6&gt;
  
  
  3) Switches back to the 'develop' branch.
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow feature finish MYFEATURE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Publish a feature:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  Are you developing a feature in collaboration? Publish a feature to the remote server so it can be used by other users.
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow feature publish MYFEATURE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Getting a published feature:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  Get a feature published by another user.
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow feature pull origin MYFEATURE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Tracking an origin feature:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  You can track a feature on origin by using
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow feature track MYFEATURE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h3&gt;
  
  
  Make a Release
&lt;/h3&gt;
&lt;h6&gt;
  
  
  Support preparation of a new production release. Allows for minor bug fixes and for preparing meta-data for the release.
&lt;/h6&gt;
&lt;h5&gt;
  
  
  Start a release:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  To start a release, use the git flow release command. It creates a release branch from the 'develop' branch. You can optionally supply a [BASE] commit SHA-1 hash to start the release from; the commit must be on the 'develop' branch.
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow release start RELEASE [BASE]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h6&gt;
  
  
  It's wise to publish the release branch after creating it, to allow release commits by other developers. Do it similarly to feature publishing, with the command:
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow release publish RELEASE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h6&gt;
  
  
  You can track a remote release with:
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;git flow release track RELEASE&lt;/code&gt;&lt;/p&gt;
&lt;h5&gt;
  
  
  Finish up a release:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  Finishing a release is one of the big steps in git branching. It performs several actions:
&lt;/h6&gt;
&lt;h6&gt;
  
  
  1) Merges the release branch back into 'master'
&lt;/h6&gt;
&lt;h6&gt;
  
  
  2) Tags the release with its name
&lt;/h6&gt;
&lt;h6&gt;
  
  
  3) Back-merges the release into 'develop'
&lt;/h6&gt;
&lt;h6&gt;
  
  
  4) Removes the release branch
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow release finish RELEASE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h6&gt;
  
  
  Don't forget to push your tags with:
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;git push --tags&lt;/code&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Hotfixes
&lt;/h3&gt;
&lt;h6&gt;
  
  
  Hotfixes arise from the necessity to act immediately upon an undesired state of a live production version. They may be branched off from the corresponding tag on the master branch that marks the production version.
&lt;/h6&gt;
&lt;h5&gt;
  
  
  Git flow hotfix start:
&lt;/h5&gt;
&lt;h6&gt;
  
  
  Like the other git flow commands, a hotfix is started with
&lt;/h6&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git flow hotfix start VERSION [BASENAME]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h6&gt;
  
  
  The VERSION argument marks the new hotfix release name. Optionally, you can specify a BASENAME to start from.
&lt;/h6&gt;

&lt;h5&gt;
  
  
  Finish a hotfix:
&lt;/h5&gt;

&lt;h6&gt;
  
  
  Finishing a hotfix merges it back into both 'develop' and 'master'. Additionally, the merge into 'master' is tagged with the hotfix version.
&lt;/h6&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git flow hotfix finish VERSION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;
  
  
  Commands
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--frNv4cDG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ptpbqoxj8i82seieb8li.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--frNv4cDG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ptpbqoxj8i82seieb8li.png" alt="Image description" width="620" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Git flow schema
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4iwsTflP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c14gwkctciyh8y2bvps1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4iwsTflP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c14gwkctciyh8y2bvps1.png" alt="Image description" width="880" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
