<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anudeep Rallapalli</title>
    <description>The latest articles on DEV Community by Anudeep Rallapalli (@mrgunneramz).</description>
    <link>https://dev.to/mrgunneramz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F571481%2F84099e85-70e4-44cb-af91-a47fcf809878.PNG</url>
      <title>DEV Community: Anudeep Rallapalli</title>
      <link>https://dev.to/mrgunneramz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mrgunneramz"/>
    <language>en</language>
    <item>
      <title>Monitoring Linux EC2 Instance using Prometheus and Grafana</title>
      <dc:creator>Anudeep Rallapalli</dc:creator>
      <pubDate>Sat, 10 Apr 2021 19:27:59 +0000</pubDate>
      <link>https://dev.to/mrgunneramz/monitoring-limnux-ec2-instance-using-prometheus-and-grafana-2jj7</link>
      <guid>https://dev.to/mrgunneramz/monitoring-limnux-ec2-instance-using-prometheus-and-grafana-2jj7</guid>
      <description>&lt;p&gt;Grafana has become the world’s most popular technology used to compose observability dashboards with everything from Prometheus &amp;amp; Graphite metrics, to logs and application data to power plants and beehives.&lt;br&gt;
• Customers use its open-source analytics and interactive visualization web application.&lt;br&gt;
• It provides charts, graphs, and alerts for the web when connected to supported data sources.&lt;br&gt;
Today, let’s look at building a Grafana dashboard for an Amazon Linux EC2 Instance using Prometheus as Data Source.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vkuwh2fq7nes98agyql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vkuwh2fq7nes98agyql.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus uses node_exporter to scrape metrics from the EC2 instance, which are then displayed in Grafana dashboards.&lt;br&gt;
• Let’s launch an EC2 Linux instance from the AWS Console. Make sure the security group allows the required traffic into the instance (opening an instance to all traffic is generally not advisable for production systems).&lt;br&gt;
• SSH into the EC2 instance.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6pw95pppms86ohsyufw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6pw95pppms86ohsyufw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Now, let's install node_exporter onto the EC2 Instance. &lt;br&gt;
• Run the following command to change the directory to /opt/&lt;br&gt;
    cd /opt/&lt;br&gt;
• Now, run the following command to download the node_exporter binary file:&lt;br&gt;
    sudo wget &lt;a href="https://github.com/prometheus/node_ex" rel="noopener noreferrer"&gt;https://github.com/prometheus/node_ex&lt;/a&gt;                                                                                        porter/releases/download/v1.1.2/node_exporter-1.1.2.linux-amd64.tar.gz&lt;br&gt;
• Run the following command to extract the downloaded file:&lt;br&gt;
    sudo tar xf node_exporter-1.1.2.linux-amd64.tar.gz&lt;br&gt;
• Rename the directory “node_exporter-1.1.2.linux-amd64” to “node_exporter” for easier access.&lt;br&gt;
Run the following command to change the name:&lt;br&gt;
    sudo mv node_exporter-1.1.2.linux-amd64 node_exporter&lt;br&gt;
• Let’s start the node_exporter agent on the server by running the following commands:&lt;br&gt;
    cd node_exporter&lt;br&gt;
    sudo ./node_exporter&lt;br&gt;
• Go back to the EC2 Management Console and copy the Public IP address of the EC2 Instance.&lt;br&gt;
• Open the instance’s public IP address in your browser on port 9100 (for example, http://PUBLIC_IP:9100).&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41ecg4fmzu4ts3i047yk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41ecg4fmzu4ts3i047yk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
If you click the "Metrics" link, you will see all the metrics displayed on the screen.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mty4cl6iailpt6l7261.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mty4cl6iailpt6l7261.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we need to launch two instances from the console. One for Prometheus and the other for Grafana.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9e23jsjqadein7boqfu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9e23jsjqadein7boqfu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• SSH into the Prometheus Instance.&lt;br&gt;
• We need to install Prometheus on this EC2 instance. Run the following commands to download it:&lt;br&gt;
cd /opt/&lt;br&gt;
sudo wget &lt;a href="https://github.com/prometheus/prometheus/releases/download/v2.26.0/prometheus-2.26.0.linux-amd64.tar.gz" rel="noopener noreferrer"&gt;https://github.com/prometheus/prometheus/releases/download/v2.26.0/prometheus-2.26.0.linux-amd64.tar.gz&lt;/a&gt;&lt;br&gt;
• Let’s extract the downloaded file by running the following command:&lt;br&gt;
sudo tar xf prometheus-2.26.0.linux-amd64.tar.gz&lt;br&gt;
• We will be renaming the file “prometheus-2.26.0.linux-amd64” to “prometheus” for easy access.&lt;br&gt;
sudo mv prometheus-2.26.0.linux-amd64 prometheus&lt;br&gt;
• Change the directory to prometheus:&lt;br&gt;
cd prometheus/&lt;br&gt;
• Running the command below will open the prometheus.yml file in a text editor:&lt;br&gt;
sudo vi prometheus.yml&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydfb2hchji9tlmzp0wf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydfb2hchji9tlmzp0wf8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• In the text editor, add the EC2 instance’s IP address, with port 9100, under “targets” in the “static_configs” section, and save the file.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foihpcan8x4mhcvtjop7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foihpcan8x4mhcvtjop7c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Enter the public IP address of the Prometheus server, on port 9090, in your browser.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fellay6d6tq63ftfxmc68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fellay6d6tq63ftfxmc68.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• You should be able to see the Prometheus Dashboard page.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4ntkzvclbi6c58i1797.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4ntkzvclbi6c58i1797.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Select “Targets” in the dashboard.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkvbtd04lxc9mtx00e2l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkvbtd04lxc9mtx00e2l.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s time to configure our final instance, for Grafana.&lt;br&gt;
• SSH into the Grafana instance which we already created.&lt;br&gt;
• Run the command below in the SSH session to create a repository file for Grafana; it will open a text editor:&lt;br&gt;
sudo vi /etc/yum.repos.d/grafana.repo&lt;br&gt;
• Paste the text below into the editor to configure the Grafana package repository. Save and close the editor.&lt;br&gt;
[grafana]&lt;br&gt;
name=grafana&lt;br&gt;
baseurl=&lt;a href="https://packages.grafana.com/oss/rpm" rel="noopener noreferrer"&gt;https://packages.grafana.com/oss/rpm&lt;/a&gt;&lt;br&gt;
repo_gpgcheck=1&lt;br&gt;
enabled=1&lt;br&gt;
gpgcheck=1&lt;br&gt;
gpgkey=&lt;a href="https://packages.grafana.com/gpg.key" rel="noopener noreferrer"&gt;https://packages.grafana.com/gpg.key&lt;/a&gt;&lt;br&gt;
sslverify=1&lt;br&gt;
sslcacert=/etc/pki/tls/certs/ca-bundle.crt&lt;br&gt;
• Time to install, start, and enable Grafana. Run the commands below one after the other:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakl119f1dy6ql62k261f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakl119f1dy6ql62k261f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
sudo yum install grafana&lt;br&gt;
sudo systemctl start grafana-server&lt;br&gt;
sudo systemctl enable grafana-server&lt;br&gt;
• Now, type the public IP address of the Grafana instance in the browser, on port 3000.&lt;br&gt;
• Enter ‘admin’ as both the username and the password.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttpc14jnjl2r1gwtx1k5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttpc14jnjl2r1gwtx1k5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• You will be prompted to change the password. Enter the new password and confirm it.&lt;br&gt;
• Now, add Prometheus as a data source, using the Prometheus instance’s private IP address on port 9090.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3m3e73v5ufzq1t3l9xq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3m3e73v5ufzq1t3l9xq1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Since Grafana and Prometheus are in the same network, we use the private IP address for Prometheus.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5acn0ojkkn4m105n9z67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5acn0ojkkn4m105n9z67.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Select the dashboard template “Node_exporter”.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpop59yc3gzbohxmpnna4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpop59yc3gzbohxmpnna4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Finally, the Grafana dashboard for the EC2 Linux instance, with Prometheus as the data source, is ready!&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96ls02ng3yc6otfg9nr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96ls02ng3yc6otfg9nr2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Importance of Security and Infrastructure Governance in AWS</title>
      <dc:creator>Anudeep Rallapalli</dc:creator>
      <pubDate>Sat, 10 Apr 2021 11:48:25 +0000</pubDate>
      <link>https://dev.to/mrgunneramz/importance-of-security-and-infrastructure-governance-in-aws-3mdl</link>
      <guid>https://dev.to/mrgunneramz/importance-of-security-and-infrastructure-governance-in-aws-3mdl</guid>
      <description>&lt;p&gt;Security is one of the biggest concerns in Cloud Computing. Every company whose application is running on either web or the mobile phone is always worried about the security of their infrastructure.&lt;br&gt;
1)  The leadership team at a large Online Store is worried that its employees might delete critical data by mistake.&lt;br&gt;
2)  The CEO of a Social Media company is concerned about the security of its website images stored in the Cloud.&lt;br&gt;
3)  The DevOps team of a top Online Gaming Company wants to provide a secure and easy way for its users to log in to its site, without worrying about leakage of login credentials.&lt;br&gt;
4)  The Site Reliability team at an Online Media company is concerned about the reliability of its servers whenever a news story on their website goes viral.&lt;br&gt;
5)  The leadership team at a large IT firm is concerned that their employees might have more privileges than required.&lt;/p&gt;

&lt;p&gt;These are a few of the many security questions that give all of us sleepless nights.&lt;br&gt;
It takes more than one document to answer every security question about the cloud.&lt;br&gt;
For now, let's look at the five scenarios above and try to resolve these concerns - one concern at a time!&lt;/p&gt;

&lt;p&gt;1) The leadership team at a large Online Store is worried that its employees might delete critical data by mistake.&lt;br&gt;
    Losing crucial data to human error is avoidable. We need to automate infrastructure governance. In AWS, we have Identity and Access Management (IAM). &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bjengSok--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ayy2c0d7dbtdh29bg682.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bjengSok--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ayy2c0d7dbtdh29bg682.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• It is a framework of policies and technologies for ensuring that the proper people in an enterprise have the appropriate access to technology resources.&lt;br&gt;
• It helps the root user securely control access to AWS resources. &lt;br&gt;
• You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.&lt;br&gt;
• To learn more about IAM, click on the following link: &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html&lt;/a&gt;&lt;/p&gt;
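&lt;p&gt;For the accidental-deletion concern above, an IAM policy can explicitly deny destructive actions. A minimal sketch (the bucket name critical-data-bucket is a placeholder):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccidentalDeletion",
      "Effect": "Deny",
      "Action": ["s3:DeleteObject", "s3:DeleteBucket"],
      "Resource": [
        "arn:aws:s3:::critical-data-bucket",
        "arn:aws:s3:::critical-data-bucket/*"
      ]
    }
  ]
}
```

Attached to a user or group, an explicit Deny like this overrides any Allow those identities may also have.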

&lt;p&gt;2) The CEO of a Social Media company is concerned about the security of its website images stored in the Cloud.&lt;br&gt;
    In AWS, S3 (Simple Storage Service) and EBS (Elastic Block Store) provide storage for the front-end and back-end files of an application. Encryption is possible for files stored in S3 and EBS. Let’s see how!&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h1z-M_eu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6novet5mx7r7sfcjs5yl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h1z-M_eu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6novet5mx7r7sfcjs5yl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
S3 (Simple Storage Service):&lt;br&gt;
• S3 is an object-based storage system which stores objects in buckets.&lt;br&gt;
• It provides unlimited object storage, where each object can be 0 B to 5 TB in size.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jHZG_ejr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r4yriaejwssbsd7cjbde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jHZG_ejr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r4yriaejwssbsd7cjbde.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Security and Encryption in S3:&lt;br&gt;
• By default, all newly created buckets are PRIVATE.&lt;br&gt;
• Access Control can be set by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Bucket Policies – bucket-level access control.&lt;/li&gt;
&lt;li&gt; Access Control Lists (ACLs) – object-level access control.&lt;/li&gt;
&lt;/ol&gt;
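&lt;p&gt;A bucket policy, for example, can grant read-only access to a single IAM role while everything else stays private. A sketch with placeholder account, role, and bucket names:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadToAppRole",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-reader"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```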

&lt;p&gt;• S3 buckets can be configured to create access logs.&lt;br&gt;
• It logs all requests made to S3 buckets.&lt;br&gt;
• These logs can be sent to another bucket or even to a bucket on another account.&lt;/p&gt;

&lt;p&gt;Encryption in Transit:&lt;br&gt;
• Traffic to and from S3 flows over the HTTPS protocol.&lt;br&gt;
• Any traffic flowing through HTTPS will be encrypted by SSL/TLS.&lt;/p&gt;

&lt;p&gt;Encryption at Rest:&lt;br&gt;
Encryption of data at rest can be done with three kinds of server-side keys, or on the client side:&lt;br&gt;
• S3 Managed Keys – SSE-S3&lt;br&gt;
• AWS KMS Managed Keys – SSE-KMS&lt;br&gt;
• Server-Side Encryption with customer provided keys – SSE-C&lt;br&gt;
• Client-Side Encryption – Files are encrypted at the client’s end before being sent to S3&lt;/p&gt;
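&lt;p&gt;Default encryption at rest can be set per bucket. For example, the configuration passed to the `aws s3api put-bucket-encryption` command selects the algorithm; a sketch using SSE-KMS (the KMS key ARN is a placeholder; for SSE-S3 the algorithm would be "AES256" with no key ID):&lt;/p&gt;

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/placeholder-key-id"
      }
    }
  ]
}
```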

&lt;p&gt;For more information about S3 and its security, see: &lt;a href="https://aws.amazon.com/s3/security/#:%7E:text=Amazon%20S3%20offers%20flexible%20security,Private%20Cloud%20(Amazon%20VPC)"&gt;https://aws.amazon.com/s3/security/#:~:text=Amazon%20S3%20offers%20flexible%20security,Private%20Cloud%20(Amazon%20VPC)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;EBS-Elastic Block Store:&lt;br&gt;
• It provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud.&lt;br&gt;
• Each EBS volume is automatically replicated within its AZ to protect you from component failure, offering high availability and durability.&lt;br&gt;
• Snapshots can be taken for each of the EBS store volumes. They exist on S3.&lt;br&gt;
• Snapshots are point-in-time copies of volumes. Think of a snapshot as a photograph of the disk.&lt;br&gt;
• Snapshots are incremental – only the blocks that have changed since the last snapshot are moved to S3.&lt;br&gt;
• AMIs can also be created from snapshots.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZiE0MyMS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46goioolttr4cyesdpuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZiE0MyMS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46goioolttr4cyesdpuh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Encryption of EBS Volumes:&lt;br&gt;
Encryption of EBS volumes can be done in two ways.&lt;br&gt;
• EBS volumes can be encrypted during its creation.&lt;br&gt;
• They can also be encrypted after creation by following the steps below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a snapshot of unencrypted root device.&lt;/li&gt;
&lt;li&gt; Create a copy of the snapshot and select the encrypt option.&lt;/li&gt;
&lt;li&gt; Create an AMI of the encrypted snapshot.&lt;/li&gt;
&lt;li&gt; Use AMI to launch new encrypted instances.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For more information about EBS encryption, see: &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html"&gt;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) The DevOps team of a top Online Gaming Company wants to provide a secure and easy way for its users to log in to its site, without worrying about leakage of login credentials.&lt;br&gt;
    Every customer wants their login credentials kept confidential. At the same time, they expect the login process to be simple and smooth.&lt;br&gt;
AWS provides Web Identity Federation to meet the above demand:&lt;br&gt;
• It allows users to authenticate with the web identity provider (Google, Facebook, Amazon).&lt;br&gt;
• The user authenticates first with the web ID provider and receives an authentication token, which is exchanged for temporary AWS credentials allowing them to assume the IAM role.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qr3bk85o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k7matyaz2rvg5wn38on1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qr3bk85o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k7matyaz2rvg5wn38on1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Cognito is an identity broker which handles interaction between your applications and the web ID provider.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It provides sign up, sign in &amp;amp; guest user access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Syncs user data for a seamless experience across your devices.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;• Cognito is the AWS-recommended approach for web ID federation, particularly for mobile apps.&lt;br&gt;
For more information about Cognito, see: &lt;a href="https://docs.aws.amazon.com/cognito/"&gt;https://docs.aws.amazon.com/cognito/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) The Site Reliability team at an Online Media company is concerned about the reliability of its servers whenever a news story on their website goes viral.&lt;br&gt;
    Automated 24/7 monitoring is one of the main advantages of cloud infrastructure. AWS provides CloudWatch and CloudTrail for monitoring the infrastructure.&lt;/p&gt;

&lt;p&gt;CloudWatch:&lt;br&gt;
• It is a monitoring service which monitors your AWS resources as well as applications running on AWS.&lt;br&gt;
• CloudWatch can monitor:&lt;/p&gt;

&lt;p&gt;1)  Compute services such as:&lt;br&gt;
a.  EC2 Instances&lt;br&gt;
b.  Auto Scaling Groups&lt;br&gt;
c.  Elastic Load Balancers&lt;br&gt;
d.  Route 53 Health Checks&lt;/p&gt;

&lt;p&gt;2)  Storage &amp;amp; Content Delivery:&lt;br&gt;
a.  EBS Volumes&lt;br&gt;
b.  Storage Gateway&lt;br&gt;
c.  Cloudfront&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wdXMY36b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/codw0cg8xmzi541iyrtt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wdXMY36b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/codw0cg8xmzi541iyrtt.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Cloudwatch monitoring in EC2:&lt;br&gt;
• Cloudwatch monitors the following metrics in EC2:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; CPU&lt;/li&gt;
&lt;li&gt; Network&lt;/li&gt;
&lt;li&gt; Disk&lt;/li&gt;
&lt;li&gt; Status Check&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;• RAM Utilization is a custom metric.&lt;br&gt;
• CloudWatch monitors EC2 at 5-minute intervals by default.&lt;br&gt;
• You can get 1-minute intervals by turning on detailed monitoring.&lt;br&gt;
• You can create CloudWatch alarms which triggers notifications.&lt;br&gt;
• You can retrieve data from EC2 or ELB instances even after their termination.&lt;br&gt;
• CloudWatch logs by default are stored indefinitely.&lt;br&gt;
• CloudWatch is not restricted to AWS resources; it can be used on premises as well.&lt;br&gt;
• The user just needs to install the SSM agent and the CloudWatch agent on the server.&lt;br&gt;
To learn more about CloudWatch, see: &lt;a href="https://docs.aws.amazon.com/cloudwatch/index.html"&gt;https://docs.aws.amazon.com/cloudwatch/index.html&lt;/a&gt;&lt;/p&gt;
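&lt;p&gt;The custom RAM metric mentioned above, for instance, can be collected by the CloudWatch agent via its config file. A minimal sketch, assuming the agent’s standard metric names for memory and disk usage:&lt;/p&gt;

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      },
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["/"]
      }
    }
  }
}
```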

&lt;p&gt;CloudTrail:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KEfYdwO0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9sl65xlc730lnyv7dl6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KEfYdwO0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9sl65xlc730lnyv7dl6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• It increases visibility into your user and resource activity by recording AWS management console actions and API calls.&lt;br&gt;
• You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred.&lt;br&gt;
For more about CloudTrail, see: &lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html"&gt;https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html&lt;/a&gt;&lt;br&gt;
CloudWatch is all about performance and CloudTrail is all about auditing.&lt;/p&gt;

&lt;p&gt;5) The leadership team at a large IT firm is concerned that their employees might have more privileges than required.&lt;br&gt;
    It is very important, especially in large companies, that employees always have the least privileges needed for AWS resources. The leadership team can use AWS Organizations for this in the AWS cloud infrastructure.&lt;/p&gt;

&lt;p&gt;AWS Organizations:&lt;br&gt;
• It centrally manages policies across multiple AWS accounts.&lt;br&gt;
• It controls access to AWS services within these accounts using Service Control Policies (SCP).&lt;br&gt;
• It also automates AWS account creation and management.&lt;br&gt;
• It consolidates billing across multiple AWS Accounts.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0cDWF0NO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tq671vxpfkqvae02b84z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0cDWF0NO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tq671vxpfkqvae02b84z.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Tagging and Resource Group:&lt;br&gt;
• Tagging every AWS resource is very important.&lt;br&gt;
• Resource Groups are a way of grouping tags.&lt;br&gt;
• You can use resource groups with AWS systems manager to automate tasks.&lt;/p&gt;
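&lt;p&gt;A Service Control Policy (SCP) sets a permission guardrail for every account it is attached to. A minimal sketch that, for example, prevents member accounts from leaving the organization:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeaveOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```

SCPs do not grant permissions themselves; they only bound what IAM policies inside the accounts can allow.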

&lt;p&gt;AWS Cost Explorer &amp;amp; Cost Allocation Tags:&lt;br&gt;
• Cost Explorer is a tool that enables you to view and analyze your costs and usage.&lt;br&gt;
• Use tags to tag your resources.&lt;br&gt;
• Configure tags for cost centers (e.g., by department, employee ID, etc.)&lt;br&gt;
• Activate cost allocation tags to track your costs by tag.&lt;br&gt;
For more information about AWS Organizations, see: &lt;a href="https://docs.aws.amazon.com/organizations/"&gt;https://docs.aws.amazon.com/organizations/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This covered security and a few of the many AWS services for security and infrastructure governance in AWS.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Build a Modern Web Application</title>
      <dc:creator>Anudeep Rallapalli</dc:creator>
      <pubDate>Sat, 03 Apr 2021 04:54:21 +0000</pubDate>
      <link>https://dev.to/mrgunneramz/build-a-modern-web-application-5ed9</link>
      <guid>https://dev.to/mrgunneramz/build-a-modern-web-application-5ed9</guid>
      <description>&lt;p&gt;Q) What is a Modern Application?&lt;br&gt;
A) •  Modern applications isolate business logic, optimize reuse and iteration, and remove overhead wherever possible.&lt;br&gt;
• In plain words, Modern apps are built using services that enable you to focus on writing code while automating infrastructure maintenance tasks.&lt;/p&gt;

&lt;p&gt;• Let's look at building one such modern web application using AWS services.&lt;br&gt;
Technologies used:&lt;br&gt;
• AWS Cloud9&lt;br&gt;
• Amazon Simple Storage Service (S3)&lt;br&gt;
• AWS Fargate&lt;br&gt;
• AWS CloudFormation&lt;br&gt;
• AWS Identity and Access Management (IAM)&lt;br&gt;
• Amazon Virtual Private Cloud (VPC)&lt;br&gt;
• Amazon Elastic Load Balancing&lt;br&gt;
• Amazon Elastic Container Service (ECS)&lt;br&gt;
• AWS Elastic Container Registry (ECR)&lt;br&gt;
• AWS CodeCommit&lt;br&gt;
• AWS CodePipeline&lt;br&gt;
• AWS CodeDeploy&lt;br&gt;
• AWS CodeBuild&lt;br&gt;
• Amazon DynamoDB&lt;br&gt;
• Amazon Cognito&lt;br&gt;
• Amazon API Gateway&lt;br&gt;
• AWS Kinesis Firehose&lt;br&gt;
• Amazon S3&lt;br&gt;
• AWS Lambda&lt;br&gt;
• AWS Serverless Application Model (AWS SAM)&lt;br&gt;
• AWS SAM Command Line Interface (SAM CLI)&lt;/p&gt;

&lt;p&gt;• We will build a sample website called Mythical Mysfits that enables users to adopt a fantasy creature (mysfit) as a pet. A working sample of this website is available at: &lt;a href="http://www.mythicalmysfits.com" rel="noopener noreferrer"&gt;www.mythicalmysfits.com&lt;/a&gt;&lt;br&gt;
• We will be walking through the steps to create an architected web application. &lt;br&gt;
• We will be hosting this web application on a front-end web server and connect it to a backend database. &lt;br&gt;
• We also will be setting up user authentication and will be able to collect and analyze user behavior.&lt;br&gt;
• The site also provides basic functionality, such as the ability to “like” your favorite mysfit and reserve your chosen mysfit for adoption. &lt;br&gt;
• It also allows you to gather insights about user behavior for future analysis.&lt;/p&gt;

&lt;p&gt;The below application architecture diagram provides a structural representation of the services that make up Mythical Mysfits and how these services interact with each other.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi6r9e8awuid2tl5xkh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi6r9e8awuid2tl5xkh5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Let's begin by hosting the static content (html, js, css, media content, etc.) of our Mythical Mysfit website on Amazon S3 (Simple Storage Service). &lt;br&gt;
S3 is a highly durable, highly available, and inexpensive object storage service that can serve stored objects directly via HTTP. This makes it wonderfully useful for serving static web content directly to web browsers for sites on the Internet.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxx8l1gar9hz0hewdho33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxx8l1gar9hz0hewdho33.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Sign into your AWS console and select the region of your choice.&lt;br&gt;
• Navigate to the Cloud9 page and create an environment with default settings.&lt;br&gt;
• We will be completing all the development tasks required within our own browser using the cloud-based IDE, AWS Cloud9.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nd31gvkyehtx900zhhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nd31gvkyehtx900zhhp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Use the below git command in the terminal to clone the necessary code to complete this tutorial:&lt;br&gt;
    git clone -b python &lt;a href="https://github.com/aws-samples/aws-modern-application-workshop.git" rel="noopener noreferrer"&gt;https://github.com/aws-samples/aws-modern-application-workshop.git&lt;/a&gt;&lt;br&gt;
Please note that the above repository is the property of AWS, which owns its copyright.&lt;br&gt;
Now, change directory to the newly cloned repository directory:&lt;br&gt;
    cd aws-modern-application-workshop&lt;/p&gt;

&lt;p&gt;We are all set to create the infrastructure components needed for hosting a static website in Amazon S3 via the AWS CLI.&lt;br&gt;
• Run the below command in the terminal to create the S3 bucket with the name of your choice.&lt;br&gt;
    aws s3 mb s3://REPLACE_ME_BUCKET_NAME&lt;br&gt;
• We will be setting some configuration options that enable the bucket to be used for static website hosting. &lt;br&gt;
• Run the below command in your terminal to make your S3 bucket ready for static website hosting:&lt;br&gt;
    aws s3 website s3://REPLACE_ME_BUCKET_NAME --index-document index.html&lt;br&gt;
• By default, public access to the S3 bucket is blocked for security reasons.&lt;br&gt;
• Since this bucket hosts a static website, we need to enable public access by changing the bucket policy.&lt;br&gt;
• Bucket Policy is a JSON file located at ~/environment/aws-modern-application-workshop/module-1/aws-cli/website-bucket-policy.json.&lt;br&gt;
• Open the file and make the necessary changes such as entering your bucket name.&lt;br&gt;
• Now, execute the following CLI command to add a public bucket policy to your website bucket:&lt;br&gt;
    aws s3api put-bucket-policy --bucket REPLACE_ME_BUCKET_NAME --policy file://~/environment/aws-modern-application-workshop/module-1/aws-cli/website-bucket-policy.json&lt;br&gt;
• Now that our new website bucket is configured appropriately, let's add the first iteration of the Mythical Mysfits homepage to the bucket by running the following S3 CLI command&lt;br&gt;
    aws s3 cp ~/environment/aws-modern-application-workshop/module-1/web/index.html s3://REPLACE_ME_BUCKET_NAME/index.html &lt;br&gt;
• Our S3 bucket is up and hosting.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmb93mr3t2m1rh5bk111.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmb93mr3t2m1rh5bk111.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Open your favorite browser and visit the below URL, substituting your values.&lt;br&gt;
    &lt;a href="http://REPLACE_ME_BUCKET_NAME.s3-website-REPLACE_ME_YOUR_REGION.amazonaws.com" rel="noopener noreferrer"&gt;http://REPLACE_ME_BUCKET_NAME.s3-website-REPLACE_ME_YOUR_REGION.amazonaws.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;• Congratulations, you have created the basic static Mythical Mysfits Website!&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flptkqnefxlskfdpei8jy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flptkqnefxlskfdpei8jy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Our Mythical Mysfits website needs to integrate with an application backend. So, we will create a new microservice hosted using AWS Fargate.&lt;br&gt;
• We need to create the core infrastructure environment that the service will use, including the networking infrastructure in Amazon VPC, and the AWS IAM Roles.&lt;br&gt;
• To accomplish this, we will be using AWS Cloudformation.&lt;br&gt;
• It is available in the location: /module-2/cfn/core.yml&lt;br&gt;
• Run the below command in the Cloud9 terminal (it might take up to 10 minutes for the stack to be created):&lt;br&gt;
    aws cloudformation create-stack --stack-name MythicalMysfitsCoreStack --capabilities CAPABILITY_NAMED_IAM --template-body file://~/environment/aws-modern-application-workshop/module-2/cfn/core.yml&lt;br&gt;
Navigate to the CloudFormation page in the console and wait for the status to change to "CREATE_COMPLETE".&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs43kmfppy9cjkgv8drgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs43kmfppy9cjkgv8drgh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
The outputs of this stack will be referenced frequently later. Hence, we will save them in a file named cloudformation-core-output.json by running the below command.&lt;br&gt;
    aws cloudformation describe-stacks --stack-name MythicalMysfitsCoreStack &amp;gt; ~/environment/cloudformation-core-output.json&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziir6j9wf204cyu8ibtv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziir6j9wf204cyu8ibtv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Now, we will be creating a Docker container image that contains all of the code and configuration required to run the Mythical Mysfits backend as a microservice API created with Flask. &lt;br&gt;
We will build the Docker container image within Cloud9 and then push it to the Amazon ECR, where it will be available to pull when we create our service using Fargate.&lt;/p&gt;
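Later steps repeatedly pull values (the VPC ID, subnet IDs, and so on) out of cloudformation-core-output.json. A small helper that flattens the describe-stacks output into a dict can save some scrolling. This assumes the standard `aws cloudformation describe-stacks` JSON shape (`Stacks[0].Outputs` as a list of `OutputKey`/`OutputValue` pairs); the exact output key names below are made up, since the real ones depend on core.yml:

```python
import json

def stack_outputs(describe_stacks_json):
    """Flatten `aws cloudformation describe-stacks` output
    into a {OutputKey: OutputValue} dict."""
    doc = json.loads(describe_stacks_json)
    return {o["OutputKey"]: o["OutputValue"]
            for o in doc["Stacks"][0]["Outputs"]}

# A trimmed-down sample of what describe-stacks returns:
sample = json.dumps({
    "Stacks": [{
        "StackName": "MythicalMysfitsCoreStack",
        "Outputs": [
            {"OutputKey": "VPCId", "OutputValue": "vpc-0abc123"},
            {"OutputKey": "PublicSubnetOne", "OutputValue": "subnet-0aaa"},
        ],
    }]
})
print(stack_outputs(sample)["VPCId"])  # vpc-0abc123
```

In practice you would read ~/environment/cloudformation-core-output.json from disk and use the resulting dict to fill in the REPLACE_ME placeholders that appear in later commands.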

&lt;p&gt;All of the code required to run our service backend is stored within the /module-2/app/ directory of the repository you've cloned into your Cloud9 IDE.&lt;br&gt;
Docker comes already installed on the Cloud9 IDE that you've created, so in order to build the Docker image locally, all we need to do is run the following two commands in the Cloud9 terminal:&lt;br&gt;&lt;br&gt;
    cd ~/environment/aws-modern-application-workshop/module-2/app&lt;br&gt;
    docker build . -t REPLACE_ME_AWS_ACCOUNT_ID.dkr.ecr.REPLACE_ME_REGION.amazonaws.com/mythicalmysfits/service:latest&lt;br&gt;
You can find the account ID from the below file:&lt;br&gt;
    /cloudformation-core-output.json&lt;br&gt;&lt;br&gt;
Once Docker has downloaded all the necessary dependency packages, copy the image tag from the output; we will be using it later.&lt;br&gt;
It will be in the format: 111111111111.dkr.ecr.us-east-1.amazonaws.com/mythicalmysfits/service:latest&lt;/p&gt;
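The image tag follows ECR's fixed naming convention, so rather than hand-editing the docker build command you can assemble the URI from its parts. A minimal sketch (the account ID and region shown are placeholders):

```python
def ecr_image_uri(account_id, region, repository, tag="latest"):
    """Build an ECR image URI in the
    <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag> format."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

print(ecr_image_uri("111111111111", "us-east-1", "mythicalmysfits/service"))
# 111111111111.dkr.ecr.us-east-1.amazonaws.com/mythicalmysfits/service:latest
```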

&lt;p&gt;Let's test our image locally within Cloud9 to make sure everything is operating as expected.&lt;br&gt;
Run the following command to deploy the container “locally” (which is actually within your Cloud9 IDE inside AWS):&lt;br&gt;
    docker run -p 8080:8080 REPLACE_ME_WITH_DOCKER_IMAGE_TAG&lt;br&gt;
Enter the image tag which you have saved earlier.&lt;br&gt;
We will see the below output if the container is up and running.&lt;br&gt;
     * Running on &lt;a href="http://0.0.0.0:8080/" rel="noopener noreferrer"&gt;http://0.0.0.0:8080/&lt;/a&gt; (Press CTRL+C to quit)&lt;br&gt;
To test our service with a local request, we're going to open up the built-in web browser within the Cloud9 IDE that can be used to preview applications that are running on the IDE instance.&lt;br&gt;
To open the preview web browser, select Preview &amp;gt; Preview Running Application in the Cloud9 menu bar:&lt;br&gt;
Append /mysfits to the end of the URI in the address bar of the preview browser and hit ENTER:&lt;br&gt;
If successful you will see a response from the service that returns the JSON document stored at &lt;code&gt;/aws-modern-application-workshop/module-2/app/service/mysfits-response.json&lt;/code&gt;&lt;br&gt;
With a successful test of our service locally, we're ready to create a container image repository in Amazon Elastic Container Registry (Amazon ECR) and push our image into it. &lt;br&gt;
In order to create the registry, run the following command:&lt;br&gt;
    aws ecr create-repository --repository-name mythicalmysfits/service&lt;br&gt;
In order to push container images into our new repository, we will need the authentication credentials for our Docker client to the repository.&lt;br&gt;
Run the following command, which will return a login command to retrieve credentials for our Docker client and then automatically execute it:&lt;br&gt;
    $(aws ecr get-login --no-include-email)&lt;br&gt;
'Login Succeeded' will be reported if the command is successful.&lt;br&gt;
Use the below command to have Docker push the image, and all the layers it depends on, to Amazon ECR:&lt;br&gt;
    docker push REPLACE_ME_WITH_DOCKER_IMAGE_TAG&lt;br&gt;
Now, run the following command to see your newly pushed Docker image stored inside the ECR repository:&lt;br&gt;
    aws ecr describe-images --repository-name mythicalmysfits/service&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9x75uwf1c7b22hrp14de.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9x75uwf1c7b22hrp14de.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Now, we have an image available in ECR that we can deploy to a service hosted on Amazon ECS using AWS Fargate, so that the website will be publicly available behind a Network Load Balancer.&lt;br&gt;
First, we will create a cluster in the Amazon ECS.&lt;br&gt;
To create a new cluster in ECS, run the following command:&lt;br&gt;
    aws ecs create-cluster --cluster-name MythicalMysfits-Cluster&lt;br&gt;
Next, we will create a new log group in AWS CloudWatch Logs for log collection and analysis.&lt;br&gt;
To create the new log group in CloudWatch logs, run the following command:&lt;br&gt;
    aws logs create-log-group --log-group-name mythicalmysfits-logs&lt;br&gt;
Now that we have a cluster created and a log group defined for our container logs, we're ready to register an ECS task definition.&lt;br&gt;
The input to the CLI command will be available in the below JSON File:&lt;br&gt;
    ~/environment/aws-modern-application-workshop/module-2/aws-cli/task-definition.json&lt;br&gt;
Replace the values from the image tag and from /cloudformation-core-output.json&lt;br&gt;
Once you have replaced the values in task-definition.json and saved it, execute the following command:&lt;br&gt;
    aws ecs register-task-definition --cli-input-json file://~/environment/aws-modern-application-workshop/module-2/aws-cli/task-definition.json&lt;br&gt;
Rather than directly exposing our service to the Internet, we will provision a Network Load Balancer (NLB) to sit in front of our service tier.&lt;br&gt;
To provision a new NLB, execute the following CLI command:&lt;br&gt;
    aws elbv2 create-load-balancer --name mysfits-nlb --scheme internet-facing --type network --subnets REPLACE_ME_PUBLIC_SUBNET_ONE REPLACE_ME_PUBLIC_SUBNET_TWO &amp;gt; ~/environment/nlb-output.json&lt;br&gt;
Retrieve the subnetIds from /cloudformation-core-output.json&lt;br&gt;
A new file will be created in your IDE called nlb-output.json. &lt;br&gt;
We will be using the outputs in the file in later steps.&lt;br&gt;
We need to create target groups for the NLB. Run the below command:&lt;br&gt;
    aws elbv2 create-target-group --name MythicalMysfits-TargetGroup --port 8080 --protocol TCP --target-type ip --vpc-id REPLACE_ME_VPC_ID --health-check-interval-seconds 10 --health-check-path / --health-check-protocol HTTP --healthy-threshold-count 3 --unhealthy-threshold-count 3 &amp;gt; ~/environment/target-group-output.json&lt;br&gt;
You can find the VPC_ID value in /cloudformation-core-output.json&lt;br&gt;
The output will be saved to target-group-output.json in your IDE. We will reference the output of this file in a subsequent step.&lt;br&gt;
Now, we need to create a Load Balancer Listener for the NLB. Run the following command:&lt;br&gt;
    aws elbv2 create-listener --default-actions TargetGroupArn=REPLACE_ME_NLB_TARGET_GROUP_ARN,Type=forward --load-balancer-arn REPLACE_ME_NLB_ARN --port 80 --protocol TCP&lt;br&gt;
You can find the TargetGroup and NLB ARN in the files we saved earlier while creating NLB and TargetGroup.&lt;br&gt;
We need to create an IAM role that grants the ECS service itself permissions to make ECS API requests within your account.&lt;br&gt;
To create the role, execute the following command in the terminal:&lt;br&gt;
    aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com&lt;br&gt;
If the above returns an error about the role existing already, you can ignore it, as it would indicate the role has automatically been created in your account in the past.&lt;br&gt;
With the NLB created and configured, and the ECS service granted appropriate permissions, we're ready to create the actual ECS service.&lt;br&gt;
Open ~/environment/aws-modern-application-workshop/module-2/aws-cli/service-definition.json in the IDE and replace the indicated values of REPLACE_ME.&lt;br&gt;
Save it, then run the following command to create the service:&lt;br&gt;
    aws ecs create-service --cli-input-json file://~/environment/aws-modern-application-workshop/module-2/aws-cli/service-definition.json&lt;br&gt;
Copy the DNS name you saved when creating the NLB and send a request to it using your favorite browser.&lt;br&gt;
The DNS name will be in the below format:&lt;br&gt;
    &lt;a href="http://mysfits-nlb-123456789-abc123456.elb.us-east-1.amazonaws.com/mysfits" rel="noopener noreferrer"&gt;http://mysfits-nlb-123456789-abc123456.elb.us-east-1.amazonaws.com/mysfits&lt;/a&gt;&lt;br&gt;
You will receive a response in JSON format.&lt;br&gt;
Next, we need to integrate our website with the new API backend instead of using the hard-coded data that we previously uploaded to S3.&lt;br&gt;
Open the below file and update the NLB URL.&lt;br&gt;
    /module-2/web/index.html&lt;br&gt;
Run the following command to upload this file to your S3-hosted website:&lt;br&gt;
    aws s3 cp ~/environment/aws-modern-application-workshop/module-2/web/index.html s3://INSERT-YOUR-BUCKET-NAME/index.html&lt;br&gt;
Make sure you update the bucket name in the command before executing it.&lt;br&gt;
Open the website using the S3 link. Now, it is retrieving JSON data from your Flask API running within a Docker container deployed to AWS Fargate.&lt;/p&gt;
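Several of the commands above redirect their JSON output to files (nlb-output.json, target-group-output.json), and later steps ask you to copy ARNs and the DNS name back out of them. Assuming the standard `aws elbv2 create-load-balancer` response shape (a `LoadBalancers` list whose entries carry `DNSName` and `LoadBalancerArn`), a small helper can pull both; the sample values below are made up:

```python
import json

def nlb_details(create_lb_json):
    """Extract (DNSName, LoadBalancerArn) from the JSON that
    `aws elbv2 create-load-balancer ... > nlb-output.json` produced."""
    lb = json.loads(create_lb_json)["LoadBalancers"][0]
    return lb["DNSName"], lb["LoadBalancerArn"]

sample = json.dumps({"LoadBalancers": [{
    "DNSName": "mysfits-nlb-123456789-abc123456.elb.us-east-1.amazonaws.com",
    "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/mysfits-nlb/abc",
}]})
dns, arn = nlb_details(sample)
print(f"http://{dns}/mysfits")  # the test URL used above
```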

&lt;p&gt;Now that we have a service up and running, consider the code changes we will make to our Flask service. It would be a bottleneck for your development speed if you had to go through all of the same steps above every time you wanted to deploy a new feature to your service. &lt;br&gt;
That's where we need CI/CD.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8v77smpluk9gnrly6hz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8v77smpluk9gnrly6hz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Run the following command to create another S3 bucket that will be used to store the temporary artifacts that are created in the middle of our CI/CD pipeline executions:&lt;br&gt;
    aws s3 mb s3://REPLACE_ME_CHOOSE_ARTIFACTS_BUCKET_NAME&lt;br&gt;
Unlike our website bucket that allowed access to anyone, only our CI/CD pipeline should have access to this bucket.&lt;br&gt;
Open the artifacts-bucket-policy.json file referenced in the command below and make the relevant changes using the values from /cloudformation-core-output.json.&lt;br&gt;
Once you've modified and saved this file, execute the following command:&lt;br&gt;
    aws s3api put-bucket-policy --bucket REPLACE_ME_ARTIFACTS_BUCKET_NAME --policy file://~/environment/aws-modern-application-workshop/module-2/aws-cli/artifacts-bucket-policy.json&lt;br&gt;
We need to create a CodeCommit repository to push and store the code. Run the following command:&lt;br&gt;
    aws codecommit create-repository --repository-name MythicalMysfitsService-Repository&lt;br&gt;
With a repository to store our code in, and an S3 bucket that will be used for our CI/CD artifacts, let's add to the CI/CD stack a way for service builds to occur using an AWS CodeBuild project.&lt;br&gt;
Replace the values in the below file and save it.&lt;br&gt;
    ~/environment/aws-modern-application-workshop/module-2/aws-cli/code-build-project.json&lt;br&gt;
Now, execute the following command to create a project:&lt;br&gt;
    aws codebuild create-project --cli-input-json file://~/environment/aws-modern-application-workshop/module-2/aws-cli/code-build-project.json&lt;br&gt;
We will use CodePipeline to continuously integrate our CodeCommit repository with our CodeBuild project so that builds will automatically occur whenever a code change is pushed to the repository.&lt;br&gt;
Open the following file, make the changes and save it.&lt;br&gt;
    ~/environment/aws-modern-application-workshop/module-2/aws-cli/code-pipeline.json&lt;br&gt;
Use the following command to create a pipeline in CodePipeline:&lt;br&gt;
    aws codepipeline create-pipeline --cli-input-json file://~/environment/aws-modern-application-workshop/module-2/aws-cli/code-pipeline.json&lt;br&gt;
We need to give CodeBuild permission to push container images into ECR.&lt;br&gt;
Locate the below file and make changes:&lt;br&gt;
    ~/environment/aws-modern-application-workshop/module-2/aws-cli/ecr-policy.json&lt;br&gt;
Save this file and then run the following command to create the policy:&lt;br&gt;
    aws ecr set-repository-policy --repository-name mythicalmysfits/service --policy-text file://~/environment/aws-modern-application-workshop/module-2/aws-cli/ecr-policy.json&lt;br&gt;
Now, we will move all of the website data into DynamoDB to make the website more extensible and scalable in the future.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6ye6b7x6h4gpqid7uzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6ye6b7x6h4gpqid7uzh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
We will be creating a DynamoDB table with a primary index and two secondary indexes.&lt;br&gt;
These indexes will support the filter option, which filters the characters on the website based on their profile.&lt;br&gt;&lt;br&gt;
Run the below command:&lt;br&gt;
     aws dynamodb create-table --cli-input-json file://~/environment/aws-modern-application-workshop/module-3/aws-cli/dynamodb-table.json&lt;br&gt;
A DynamoDB table is created. We can view the details of the table with the below command:&lt;br&gt;
    aws dynamodb describe-table --table-name MysfitsTable&lt;br&gt;
The following command will retrieve all of the items stored in the table (you'll see that this table is empty):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws dynamodb scan --table-name MysfitsTable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
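The populate-dynamodb.json file used in the next step follows the BatchWriteItem request format: a map from table name to a list of `PutRequest` entries, with each attribute wrapped in a DynamoDB type descriptor such as `S` for string. A sketch of that shape, using a made-up mysfit item (the real attribute names come from the repository's JSON file):

```python
import json

def put_requests(table_name, items):
    """Wrap plain string-attribute dicts into the BatchWriteItem
    RequestItems format (string attributes only, for brevity)."""
    return {
        table_name: [
            {"PutRequest": {"Item": {k: {"S": v} for k, v in item.items()}}}
            for item in items
        ]
    }

payload = put_requests("MysfitsTable", [
    {"MysfitId": "mysfit-1", "Name": "Grumpus", "GoodEvil": "Evil"},
])
print(json.dumps(payload, indent=2))
```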

&lt;p&gt;We will be using the DynamoDB BatchWriteItem API to batch insert a number of the website's items into the table:&lt;br&gt;
    aws dynamodb batch-write-item --request-items file://~/environment/aws-modern-application-workshop/module-3/aws-cli/populate-dynamodb.json&lt;br&gt;
Now, if you run the below command, you will see that items have been loaded into the table.&lt;br&gt;
    aws dynamodb scan --table-name MysfitsTable&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8t6eag677g61u7r25zd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8t6eag677g61u7r25zd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Now that our data is in the table, we need to modify our application code to read from this table instead of returning the static JSON file.&lt;br&gt;
To copy the new files into your CodeCommit repository directory, execute the following command in the terminal:&lt;br&gt;
    cp ~/environment/aws-modern-application-workshop/module-3/app/service/* ~/environment/MythicalMysfitsService-Repository/service/&lt;br&gt;
Now, we need to check in these code changes to CodeCommit using the git command line client.&lt;br&gt;
Run the following commands to check in the new code changes and kick off your CI/CD pipeline:&lt;br&gt;
    cd ~/environment/MythicalMysfitsService-Repository&lt;br&gt;
    git add .&lt;br&gt;
    git commit -m "Add new integration to DynamoDB."&lt;br&gt;
    git push&lt;br&gt;
Now, in just 5-10 minutes you'll see your code changes make it through your full CI/CD pipeline in CodePipeline and out to your Flask service deployed on AWS Fargate on Amazon ECS.&lt;br&gt;
Feel free to explore the AWS CodePipeline console to see the changes progress through your pipeline.&lt;br&gt;
Finally, we need to publish a new index.html page to our S3 bucket.&lt;br&gt;
You can find the file in the below location. Make the necessary changes and save the file.&lt;br&gt;
    ~/environment/aws-modern-application-workshop/module-3/web/index.html&lt;br&gt;
Run the following command to upload the new index.html file.&lt;br&gt;
    aws s3 cp --recursive ~/environment/aws-modern-application-workshop/module-3/web/ s3://your_bucket_name_here/&lt;br&gt;
If you revisit your website, you will find the filter option, which means that the data is loading from DynamoDB.&lt;br&gt;
Now, we will create a User Pool in Amazon Cognito to enable registration and authentication of website users.&lt;br&gt;
Then, to make sure that only registered users are authorized to make changes in the website, we will deploy a REST API with Amazon API Gateway to sit in front of our NLB.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8gj5r0ev4ozuniqi1zj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8gj5r0ev4ozuniqi1zj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Execute the following CLI command to create a user pool named MysfitsUserPool:&lt;br&gt;
    aws cognito-idp create-user-pool --pool-name MysfitsUserPool --auto-verified-attributes email&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5jx1wi7y5dxbibpfzk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5jx1wi7y5dxbibpfzk0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Copy the user pool unique ID from the response of the above command.&lt;br&gt;
Next, in order to integrate our frontend website with Cognito, we must create a new User Pool Client for this user pool.&lt;br&gt;
Run the following command (replacing the --user-pool-id value with the one you copied above):&lt;br&gt;
    aws cognito-idp create-user-pool-client --user-pool-id REPLACE_ME --client-name MysfitsUserPoolClient&lt;br&gt;
Now, let's turn our attention to creating a new RESTful API in front of our existing Flask service.&lt;br&gt;
We need to configure an API Gateway VPC Link that enables API Gateway APIs to directly integrate with backend web services that are privately hosted inside a VPC using the following CLI command:&lt;br&gt;
     aws apigateway create-vpc-link --name MysfitsApiVpcLink --target-arns REPLACE_ME_NLB_ARN &amp;gt; ~/environment/api-gateway-link-output.json&lt;br&gt;
It will take about 5-10 minutes for the VPC link to be created.&lt;br&gt;
A file called api-gateway-link-output.json will be created; it contains the id for the VPC Link being created.&lt;br&gt;
You can copy the id from this file and proceed on to the next step.&lt;br&gt;
The REST API is defined using &lt;strong&gt;Swagger&lt;/strong&gt;. This Swagger definition of the API is located at &lt;code&gt;~/environment/aws-modern-application-workshop/module-4/aws-cli/api-swagger.json&lt;/code&gt;.&lt;br&gt;
Open the above location and make the necessary changes for the AWS services.&lt;br&gt;
Once the edits have been made, save the file and execute the following AWS CLI command:&lt;br&gt;&lt;br&gt;
     aws apigateway import-rest-api --parameters endpointConfigurationTypes=REGIONAL --body file://~/environment/aws-modern-application-workshop/module-4/aws-cli/api-swagger.json --fail-on-warnings&lt;br&gt;
Copy the response this command returns and save the &lt;code&gt;id&lt;/code&gt; value for the next step:&lt;br&gt;
Now, our API has been created, but it's yet to be deployed anywhere.&lt;br&gt;
We will call our stage &lt;code&gt;prod&lt;/code&gt;. To create a deployment for the prod stage, execute the following CLI command:&lt;br&gt;
    aws apigateway create-deployment --rest-api-id REPLACE_ME_WITH_API_ID --stage-name prod&lt;br&gt;
Now, the API is available on the internet. Use the following link:&lt;br&gt;
    &lt;a href="https://REPLACE_ME_WITH_API_ID.execute-api.REPLACE_ME_WITH_REGION.amazonaws.com/prod/mysfits" rel="noopener noreferrer"&gt;https://REPLACE_ME_WITH_API_ID.execute-api.REPLACE_ME_WITH_REGION.amazonaws.com/prod/mysfits&lt;/a&gt;&lt;br&gt;
Now, we need to include updated Python code for the backend Flask web service to accommodate the new functionality of the website.&lt;br&gt;
Let's overwrite your existing codebase with these files and push them into the repository:&lt;br&gt;
     cd ~/environment/MythicalMysfitsService-Repository/&lt;br&gt;
     cp -r ~/environment/aws-modern-application-workshop/module-4/app/* .&lt;br&gt;
    git add .&lt;br&gt;
    git commit -m "Update service code backend to enable additional website features."&lt;br&gt;
    git push&lt;br&gt;
The service updates are being automatically pushed through your CI/CD pipeline.&lt;br&gt;
Open ~/environment/aws-modern-application-workshop/module-4/app/web/index.html, which contains the new version of the website's index.html file.&lt;br&gt;
Make the relevant changes in the additional two html files along with index.html and save the files.&lt;br&gt;
Run the below command:&lt;br&gt;
    aws s3 cp --recursive ~/environment/aws-modern-application-workshop/module-4/web/ s3://YOUR-S3-BUCKET/&lt;br&gt;
Refresh the Mythical Mysfits website in your browser to see the new functionality in action.&lt;/p&gt;
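As with the S3 website endpoint and the NLB DNS name earlier, the API Gateway invoke URL is predictable from the API id, region, and stage, so it can be assembled rather than copied from the console. A sketch (the API id shown is a placeholder):

```python
def invoke_url(api_id, region, stage="prod", resource="mysfits"):
    """Build an API Gateway invoke URL:
    https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>"""
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/{resource}"

print(invoke_url("abc123def4", "us-east-1"))
# https://abc123def4.execute-api.us-east-1.amazonaws.com/prod/mysfits
```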

&lt;p&gt;We will be creating a way to better understand how users are interacting with the website.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1aaytm43zm4n45sorl4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1aaytm43zm4n45sorl4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
We will create a new, decoupled service to receive user click events from the website. The full stack is defined in a provided CloudFormation template.&lt;br&gt;
Let's create a new CodeCommit repository where the streaming service code will live:&lt;br&gt;
    aws codecommit create-repository --repository-name MythicalMysfitsStreamingService-Repository&lt;br&gt;
Copy the value of 'cloneUrlHttp' from the above output.&lt;br&gt;
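Alternatively, rather than copying the URL by hand, you can pull it with a JMESPath query (a sketch using the standard get-repository call):

```shell
# Fetch the HTTPS clone URL for the repository just created.
CLONE_URL=$(aws codecommit get-repository \
  --repository-name MythicalMysfitsStreamingService-Repository \
  --query 'repositoryMetadata.cloneUrlHttp' --output text)
echo "$CLONE_URL"
```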
Next, let's clone that new and empty repository into our IDE:&lt;br&gt;
    cd ~/environment/&lt;br&gt;
    git clone {insert the copied cloneUrlHttp value from above}&lt;br&gt;
Next, move your working directory into this new repository:&lt;br&gt;
    cd ~/environment/MythicalMysfitsStreamingService-Repository/&lt;br&gt;
Let's copy the new application components into this new repository directory:&lt;br&gt;
    cp -r ~/environment/aws-modern-application-workshop/module-5/app/streaming/* .&lt;br&gt;
Finally, let's copy the CloudFormation template for this module as well:&lt;br&gt;
    cp ~/environment/aws-modern-application-workshop/module-5/cfn/* .&lt;br&gt;
Run the below command to install the requests package and its dependencies locally alongside your function code:&lt;br&gt;
    pip install requests -t .&lt;br&gt;
Open the streamProcessor.py file and replace the placeholder ApiEndpoint value with the actual endpoint.&lt;br&gt;
Now, let's commit our code changes to the new repository so that they're saved in CodeCommit:&lt;br&gt;
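If you prefer to script the edit, a sed substitution works. The sketch below runs against a scratch stand-in file, and the placeholder token REPLACE_ME_API_ENDPOINT is an assumption; check streamProcessor.py for the actual placeholder before running it against the real file:

```shell
# Demo on a scratch copy: substitute a hypothetical placeholder with the real endpoint.
workdir=$(mktemp -d)
cd "$workdir"
printf 'api_endpoint = "REPLACE_ME_API_ENDPOINT"\n' > streamProcessor.py  # stand-in for the real file
API_ENDPOINT="https://example.execute-api.us-east-1.amazonaws.com/prod/mysfits"
sed -i "s|REPLACE_ME_API_ENDPOINT|${API_ENDPOINT}|g" streamProcessor.py
grep -n "$API_ENDPOINT" streamProcessor.py  # verify the substitution took effect
```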
    git add .&lt;br&gt;
    git commit -m "New stream processing service."&lt;br&gt;
    git push&lt;br&gt;
Let's create an S3 bucket that the AWS SAM CLI will use to package our function code, upload it to S3, and produce the deployable CloudFormation template:&lt;br&gt;
    aws s3 mb s3://REPLACE_ME_BUCKET_NAME&lt;br&gt;
Now that our bucket is created, we are ready to use the SAM CLI to package and upload our code and transform the CloudFormation template.&lt;br&gt;
    sam package --template-file ./real-time-streaming.yml --output-template-file ./transformed-streaming.yml --s3-bucket replace-with-your-bucket-name&lt;br&gt;
Don't forget to replace the value above with your bucket name.&lt;br&gt;
Let's deploy the Stack Using AWS CloudFormation. Run the below command:&lt;br&gt;
    aws cloudformation deploy --template-file /home/ec2-user/environment/MythicalMysfitsStreamingService-Repository/cfn/transformed-streaming.yml --stack-name MythicalMysfitsStreamingStack --capabilities CAPABILITY_IAM&lt;br&gt;
Once stack creation completes, the full real-time processing microservice is in place.&lt;br&gt;
With the streaming stack up and running, we now need to publish a new version of our website frontend that includes the JavaScript that sends events to our service whenever a profile is clicked by a user.&lt;br&gt;
Below is the location of the new index.html file:&lt;br&gt;
    ~/environment/aws-modern-application-workshop/module-5/web/index.html&lt;br&gt;
Make the relevant changes in the above file, then run the following command to retrieve the new API Gateway endpoint for your stream-processing service from the streaming stack:&lt;br&gt;
    aws cloudformation describe-stacks --stack-name MythicalMysfitsStreamingStack&lt;br&gt;
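To extract just the endpoint value instead of reading the full JSON, a JMESPath query helps (the OutputKey name 'StreamingApiEndpoint' is an assumption; check your stack's actual outputs):

```shell
# Print only the streaming API endpoint from the stack outputs.
# 'StreamingApiEndpoint' is an assumed OutputKey; verify it against your stack.
aws cloudformation describe-stacks --stack-name MythicalMysfitsStreamingStack \
  --query "Stacks[0].Outputs[?OutputKey=='StreamingApiEndpoint'].OutputValue" \
  --output text
```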
Finally, replace the streamingApiEndpoint value within index.html and upload the updated file to your S3 bucket.&lt;br&gt;
Refresh your Mythical Mysfits website in the browser, and you have a site that records and publishes an event each time a user clicks on a profile.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn6tqo3rh7inklfdtwmu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn6tqo3rh7inklfdtwmu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
To view the records that have been processed, check the destination S3 bucket created as part of your MythicalMysfitsStreamingStack.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsfkoq6408c0dqn2ux2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsfkoq6408c0dqn2ux2u.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Be sure to delete all the resources you created so that billing for them does not continue longer than you intend.&lt;br&gt;
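A minimal cleanup sketch, assuming the stack and bucket names used above (adjust for any other modules' resources you created):

```shell
# Tear down the streaming stack; this deletes the resources it created.
aws cloudformation delete-stack --stack-name MythicalMysfitsStreamingStack
# S3 buckets must be emptied before deletion; --force removes all objects first.
aws s3 rb s3://REPLACE_ME_BUCKET_NAME --force
```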
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0fpds20obrcy5tz5zv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0fpds20obrcy5tz5zv9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>bash</category>
      <category>microservices</category>
    </item>
    <item>
      <title>How to Build a Real-Time Leaderboard with Amazon Aurora Serverless and Amazon ElastiCache?</title>
      <dc:creator>Anudeep Rallapalli</dc:creator>
      <pubDate>Tue, 02 Feb 2021 06:14:26 +0000</pubDate>
      <link>https://dev.to/mrgunneramz/how-to-build-a-real-time-leaderboard-with-amazon-aurora-serverless-and-amazon-elasticache-7eb</link>
      <guid>https://dev.to/mrgunneramz/how-to-build-a-real-time-leaderboard-with-amazon-aurora-serverless-and-amazon-elasticache-7eb</guid>
      <description>&lt;p&gt;Every Kid inside us loves to play Games. Especially the one's where we compete with our peers. We all loved to see our name on the top of that Leaderboard! Lets see how that Leaderboard is build using Amazon Aurora Serverless and Amazon ElastiCache.&lt;/p&gt;

&lt;p&gt;What is Amazon Aurora Serverless?&lt;br&gt;
• Amazon Aurora is a highly performant, cloud-native relational database service from AWS, available in MySQL-compatible and PostgreSQL-compatible editions. &lt;br&gt;
• The Serverless offering of the Aurora database provides on-demand automatic scaling capabilities as well as the Data API, a fast, secure method for accessing your database over HTTP.&lt;/p&gt;

&lt;p&gt;What is Amazon ElastiCache?&lt;br&gt;
• Amazon ElastiCache is a fully managed, in-memory data store service from AWS for use cases that require blazing-fast response times. &lt;br&gt;
• You can use Redis or Memcached with ElastiCache.&lt;/p&gt;

&lt;p&gt;Why use both Amazon Aurora Serverless and ElastiCache for a Game Application?&lt;br&gt;
• Amazon Aurora Serverless and Amazon ElastiCache are both commonly used individually in game applications. &lt;br&gt;
• Together, they provide a combination of top-tier speed of an in-memory cache with the reliability and flexibility of a relational database. &lt;br&gt;
• We will be using Amazon ElastiCache for high-volume, low-latency leaderboard checks for different games, and Amazon Aurora Serverless to store all historical data and provide redundancy for the leaderboard data.&lt;/p&gt;

&lt;p&gt;Technologies Used:&lt;br&gt;
• Amazon Aurora Serverless for data storage, including the Data API for HTTP-based database access from your Lambda function.&lt;br&gt;
• AWS Secrets Manager for storing your database credentials when using the Data API.&lt;br&gt;
• Amazon ElastiCache for data storage of global leaderboards, using the Redis engine and Sorted Sets to store your leaderboards.&lt;br&gt;
• Amazon Cognito for user registration and authentication.&lt;br&gt;
• AWS Lambda for compute.&lt;br&gt;
• Amazon API Gateway for HTTP-based access to your Lambda function.&lt;/p&gt;

&lt;p&gt;As part of the application, we need a leaderboard system where users can compare their total score against other players as well as view the leaderboard for specific levels in the game. Finally, a user may want to see all of the scores they’ve received in a particular level.&lt;/p&gt;

&lt;p&gt;Let's see how we use Amazon Aurora Serverless and Amazon ElastiCache to handle these access patterns.&lt;/p&gt;

&lt;p&gt;• Set up an AWS Account and Configure an AWS Cloud9 Instance.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbo0zkmnyjoa43radtes7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbo0zkmnyjoa43radtes7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Provision an Amazon Aurora Serverless database and save the database credentials to AWS Secrets Manager for use with the Data API.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3dp81mskgai4ydzfek9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3dp81mskgai4ydzfek9t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9ffl3aiehn3u53y8ucaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9ffl3aiehn3u53y8ucaq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Create an Entity-Relationship Diagram (ERD) to plan your data model.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbg0cvp3xccj2oik5cv8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbg0cvp3xccj2oik5cv8p.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Create a table that matches your ERD and load some data into the database.&lt;br&gt;
• Run sample queries on the Database to handle some of the use cases.&lt;br&gt;
• Launch an ElastiCache for Redis Cluster.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2cwbqbydcbamorqepuri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2cwbqbydcbamorqepuri.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Configure the Security Group on the Redis Instance. This will help us access the Instance from the Cloud9 development environment.&lt;br&gt;
• Test the configuration by connecting to the Redis Instance.&lt;br&gt;
• We will be using Redis Sorted Sets for fast lookups in ElastiCache.&lt;br&gt;
• Load your Sample Data into multiple Sorted Sets and read the top items from the Sorted Sets.&lt;br&gt;
• Create and Configure an Amazon Cognito User Pool &amp;amp; Client for the User Pool for User Registration, Login and Verification.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0a12qypguvu4k4ncwow0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0a12qypguvu4k4ncwow0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Deploy a Lambda Function and Configure REST API with API Gateway. Invoke an Endpoint to test the application.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1rdy68b6razj86zl4v5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1rdy68b6razj86zl4v5s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
• Test the Application:&lt;br&gt;
a.  First, you start with a Registration endpoint, where a new user signs up and creates their account.&lt;br&gt;
b.  Second, you use a Login endpoint where a user can use a client (such as a web application or a mobile app) to authenticate and receive an ID token.&lt;br&gt;
c.  Third, you use an AddUserScore endpoint to record a score for a user.&lt;br&gt;
d.  Fourth, you use the FetchUserScores endpoint to retrieve the top scores for a particular user.&lt;br&gt;
e.  Finally, you use the FetchTopScores endpoint to retrieve the global top scores for the current day and month as well as the top scores of all time.&lt;/p&gt;
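The sorted-set writes and reads behind the AddUserScore and FetchTopScores endpoints can be sketched with redis-cli (a sketch; the key and member names are illustrative, and YOUR_REDIS_ENDPOINT stands for your cluster's primary endpoint):

```shell
# Record scores and read the top of the leaderboard with Redis Sorted Sets.
# Key/member names are illustrative; replace YOUR_REDIS_ENDPOINT with your endpoint.
redis-cli -h YOUR_REDIS_ENDPOINT ZADD leaderboard:level-1 1500 player-42
redis-cli -h YOUR_REDIS_ENDPOINT ZADD leaderboard:level-1 2100 player-7
redis-cli -h YOUR_REDIS_ENDPOINT ZREVRANGE leaderboard:level-1 0 9 WITHSCORES
redis-cli -h YOUR_REDIS_ENDPOINT ZREVRANK leaderboard:level-1 player-42
```

ZREVRANGE returns members ordered by score from highest to lowest, which is exactly the shape a leaderboard page needs.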

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>database</category>
    </item>
  </channel>
</rss>
