<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Omar Omar</title>
    <description>The latest articles on DEV Community by Omar Omar (@omarcloud20).</description>
    <link>https://dev.to/omarcloud20</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F516388%2Fd8b3ac0d-bace-46f6-aea3-010dd492d0ab.jpeg</url>
      <title>DEV Community: Omar Omar</title>
      <link>https://dev.to/omarcloud20</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/omarcloud20"/>
    <language>en</language>
    <item>
      <title>AWS Cognito for Bedrock Storyteller running on Containerized Lambda Function with Web Adapter</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Sun, 31 Mar 2024 21:10:03 +0000</pubDate>
      <link>https://dev.to/omarcloud20/aws-cognito-for-bedrock-story-teller-running-on-lambda-function-3e7o</link>
      <guid>https://dev.to/omarcloud20/aws-cognito-for-bedrock-story-teller-running-on-lambda-function-3e7o</guid>
      <description>&lt;p&gt;This is a sample application that demonstrates how to use AWS Cognito to add login functionality to a Bedrock Storyteller Flask application running on AWS Lambda with Web Adapter. The main purpose of this tutorial is to exhibit how to use AWS Cognito to authenticate users and authorize them to access the application.&lt;/p&gt;

&lt;p&gt;Adding login and logout functionality makes your application more secure. AWS Cognito is a great service that provides authentication, authorization, and user management for your applications. It is easy to use and offers many features out of the box.&lt;/p&gt;

&lt;p&gt;Throughout this tutorial, you will learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new AWS Cognito User Pool&lt;/li&gt;
&lt;li&gt;Add a new user to the User Pool&lt;/li&gt;
&lt;li&gt;Confirm the user using a pre-signup Lambda trigger&lt;/li&gt;
&lt;li&gt;Update the Bedrock Storyteller Flask application to use AWS Cognito for authentication&lt;/li&gt;
&lt;li&gt;Deploy the application to a containerized AWS Lambda with Web Adapter&lt;/li&gt;
&lt;li&gt;Use AWS SAM to deploy the application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ultimate goal is to learn how to use AWS Cognito to authenticate users and authorize them to access the application. By the end of this tutorial, you will have a Bedrock Storyteller Flask application running on AWS Lambda with Web Adapter that uses AWS Cognito for authentication and authorization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI installed and configured&lt;/li&gt;
&lt;li&gt;Docker installed&lt;/li&gt;
&lt;li&gt;Python installed&lt;/li&gt;
&lt;li&gt;AWS SAM CLI installed&lt;/li&gt;
&lt;li&gt;Clone the &lt;a href="https://github.com/OmarCloud20/aws-cognito-bedrock-lambda"&gt;Bedrock Storyteller Flask application&lt;/a&gt; repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create a new AWS Cognito User Pool
&lt;/h2&gt;

&lt;p&gt;1- Open the AWS Management Console and navigate to the AWS Cognito service.&lt;br&gt;
2- Click on &lt;code&gt;Create user pool&lt;/code&gt;.&lt;br&gt;
3- Check &lt;code&gt;Email&lt;/code&gt; as a sign-in option and then click on &lt;code&gt;Next&lt;/code&gt;.&lt;br&gt;
4- Leave the &lt;code&gt;Password policy&lt;/code&gt; as default and select &lt;code&gt;No MFA&lt;/code&gt;.&lt;br&gt;
5- Uncheck the &lt;code&gt;Enable self-service account recovery&lt;/code&gt; and then click on &lt;code&gt;Next&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwovsmbm0oecqxzxe0kb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwovsmbm0oecqxzxe0kb.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6- Check the &lt;code&gt;Enable self-registration&lt;/code&gt;, select &lt;code&gt;Don't automatically send messages&lt;/code&gt;, leave the rest as default, and then click on &lt;code&gt;Next&lt;/code&gt;.&lt;br&gt;
7- Select &lt;code&gt;Send email with Cognito&lt;/code&gt; and then click on &lt;code&gt;Next&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyu5q1sai8ya9rjwwwsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyu5q1sai8ya9rjwwwsd.png" alt="Image description" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8- Enter a name for the user pool and &lt;code&gt;App client name&lt;/code&gt; and select &lt;code&gt;Generate a client secret&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnoavz1ouvmyccxue9beo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnoavz1ouvmyccxue9beo.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9- Under &lt;code&gt;Advanced app client settings&lt;/code&gt;, make sure you select &lt;code&gt;(ALLOW_REFRESH_TOKEN_AUTH), (ALLOW_ADMIN_USER_PASSWORD_AUTH), (ALLOW_USER_PASSWORD_AUTH)&lt;/code&gt; and then click on &lt;code&gt;Next&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;10- Finally, review the &lt;code&gt;Review and create&lt;/code&gt; page and then click on &lt;code&gt;Create user pool&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Capture the following information as you will need it later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User Pool ID&lt;/li&gt;
&lt;li&gt;App client ID&lt;/li&gt;
&lt;li&gt;App client secret (navigate to &lt;code&gt;App integration&lt;/code&gt; -&amp;gt; &lt;code&gt;App clients and analytics&lt;/code&gt; -&amp;gt; click on &lt;code&gt;App client name&lt;/code&gt; -&amp;gt; &lt;code&gt;Show client secret&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Step 2: Add a new user to the User Pool
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Creating a Lambda function to confirm new users
&lt;/h3&gt;

&lt;p&gt;1- Navigate to the AWS Lambda service.&lt;br&gt;
2- Click on &lt;code&gt;Create function&lt;/code&gt;.&lt;br&gt;
3- Enter a name for the function, select &lt;code&gt;Python 3.12&lt;/code&gt; as the runtime, and then click on &lt;code&gt;Create function&lt;/code&gt;.&lt;br&gt;
4- Copy the code from the &lt;code&gt;lambda_function.py&lt;/code&gt; file in the &lt;code&gt;assets&lt;/code&gt; directory and paste it into the Lambda function code editor. Then click on &lt;code&gt;Deploy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay1133hldfnhyelyjh9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay1133hldfnhyelyjh9t.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Adding the Lambda trigger to the User Pool
&lt;/h3&gt;

&lt;p&gt;1- Navigate back to the AWS Cognito service.&lt;br&gt;
2- Click on the newly created user pool.&lt;br&gt;
3- Select &lt;code&gt;User pool properties&lt;/code&gt; and click on &lt;code&gt;Add Lambda trigger&lt;/code&gt;.&lt;br&gt;
4- Select &lt;code&gt;Sign-up&lt;/code&gt; as the trigger type and &lt;code&gt;Pre sign-up trigger&lt;/code&gt; as the Lambda trigger type.&lt;br&gt;
5- Under &lt;code&gt;Lambda function&lt;/code&gt;, assign the lambda function that you created earlier and then click on &lt;code&gt;Add Lambda trigger&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mmlfe4cgokl6s26rl9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mmlfe4cgokl6s26rl9w.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiolvv2sth2snhj8g6kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiolvv2sth2snhj8g6kt.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsqfrtk2m8js172tsd80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsqfrtk2m8js172tsd80.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The purpose of the pre-signup Lambda trigger is to confirm the user before they are added to the user pool. Normally, users confirm their email address by clicking a link sent to them. In this tutorial, we confirm users automatically with the Lambda trigger because we are not sending any emails and we limit access to the application to the users we add manually. This also lets us use dummy email addresses for the users.&lt;/p&gt;
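The auto-confirmation described above can be sketched as a minimal pre sign-up handler. This is an illustrative version only; the `lambda_function.py` file in the repository's `assets` directory is the authoritative code:

```python
# Illustrative pre sign-up trigger: auto-confirm the new user and mark their
# email as verified so no confirmation message has to be sent.
def lambda_handler(event, context):
    event["response"]["autoConfirmUser"] = True
    # Only auto-verify the email attribute if the user signed up with one.
    if "email" in event["request"].get("userAttributes", {}):
        event["response"]["autoVerifyEmail"] = True
    return event
```

Cognito merges the `response` fields back into the sign-up flow, so the user lands in the pool already confirmed.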
&lt;h3&gt;
  
  
  Adding a new user to the User Pool
&lt;/h3&gt;

&lt;p&gt;We will add a new user to the user pool using the AWS CLI and the Lambda trigger will confirm the user automatically.&lt;/p&gt;

&lt;p&gt;1- Compute a secret hash value for the user using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"[username][app client ID]"&lt;/span&gt; | openssl dgst &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-hmac&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;app client secret] &lt;span class="nt"&gt;-binary&lt;/span&gt; | openssl enc &lt;span class="nt"&gt;-base64&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Actual example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"omar@omar.com4njih1dqgurs5r529l0mgl4j21"&lt;/span&gt; | openssl dgst &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-hmac&lt;/span&gt; r6v7qk0ui0aunhkc4jk8eh1pb4g8nl4715fg4l0aejfithdmg5r &lt;span class="nt"&gt;-binary&lt;/span&gt; | openssl enc &lt;span class="nt"&gt;-base64&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Replace &lt;code&gt;[username]&lt;/code&gt; with the email of the user you want to add.&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;[app client ID]&lt;/code&gt; with the app client ID of the user pool.&lt;/li&gt;
&lt;li&gt;Replace &lt;code&gt;[app client secret]&lt;/code&gt; with the app client secret of the user pool.&lt;/li&gt;
&lt;li&gt;Save the output of the command as the secret hash value.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To learn more about how to compute the secret hash value, refer to the &lt;a href="https://docs.aws.amazon.com/cognito/latest/developerguide/signing-up-users-in-your-app.html#cognito-user-pools-computing-secret-hash"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My username is &lt;code&gt;omar@omar.com&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My app client ID is &lt;code&gt;4njih1dqgurs5r529l0mgl4j21&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My app client secret is &lt;code&gt;r6v7qk0ui0aunhkc4jk8eh1pb4g8nl4715fg4l0aejfithdmg5r&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
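If openssl is not available, the same secret hash can be computed with Python's standard library. This helper is an illustrative alternative to the shell command above, not part of the repository:

```python
import base64
import hashlib
import hmac


def cognito_secret_hash(username, client_id, client_secret):
    """Base64-encoded HMAC-SHA256 of (username + client_id), keyed with the
    app client secret -- the value Cognito expects as the SecretHash."""
    digest = hmac.new(
        client_secret.encode("utf-8"),
        (username + client_id).encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return base64.b64encode(digest).decode("utf-8")


# Same example inputs as the openssl command above:
secret_hash = cognito_secret_hash(
    "omar@omar.com",
    "4njih1dqgurs5r529l0mgl4j21",
    "r6v7qk0ui0aunhkc4jk8eh1pb4g8nl4715fg4l0aejfithdmg5r",
)
print(secret_hash)
```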

&lt;p&gt;2- Add the user to the user pool using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cognito-idp sign-up &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--client-id&lt;/span&gt; &lt;span class="s1"&gt;'APP_CLIENT_ID'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--secret-hash&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;SECRET_HASH&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="s1"&gt;'USERNAME'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--password&lt;/span&gt; &lt;span class="s1"&gt;'PASSWORD'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--user-attributes&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;email,Value&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;USERNAME&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="s1"&gt;'REGION'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--profile&lt;/span&gt; &lt;span class="s1"&gt;'PROFILE'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Actual example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cognito-idp sign-up &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--client-id&lt;/span&gt; 4njih1dqgurs5r529l0mgl4j21 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--secret-hash&lt;/span&gt; o1364n3vGN3BBLJNJCb858KjGDMdx+Jyt85KIFVJwXc&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--username&lt;/span&gt; omar@omar.com &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--password&lt;/span&gt; Test_Pass123 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--user-attributes&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;email,Value&lt;span class="o"&gt;=&lt;/span&gt;omar@omar.com &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--profile&lt;/span&gt; default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace &lt;code&gt;PROFILE&lt;/code&gt; with the name of your AWS profile, if it's not the default profile, and replace the remaining placeholders with your own values.&lt;/li&gt;
&lt;li&gt;Once you run the command, you can navigate to the AWS Cognito service and check the &lt;code&gt;Users&lt;/code&gt; tab to see the newly added user. Also, if you navigate to the Monitor section of the Lambda function, you will see that the Lambda function was triggered and the user was confirmed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx94b3mo5sa1yid6ih7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx94b3mo5sa1yid6ih7.png" alt="Image description" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Update the Bedrock Storyteller Flask application to use AWS Cognito for authentication
&lt;/h2&gt;

&lt;p&gt;1- Open the Bedrock Storyteller Flask application in your favorite code editor.&lt;br&gt;
2- Update the values of the following variables in the &lt;code&gt;cognito_config.py&lt;/code&gt; file in the &lt;code&gt;app&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;COGNITO_REGION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;COGNITO_USER_POOL_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;us-east-1_Q0l9hNEw1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;COGNITO_CLIENT_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2p87n43ntcdv8f4ml5jh3cd8ji&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;COGNITO_CLIENT_SECRET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;vivq45062h98669dgv3rthnhppi6da0virnrlapn02b7cs4r54l&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3- Notice on line 16 of the &lt;code&gt;app.py&lt;/code&gt; file in the &lt;code&gt;app&lt;/code&gt; directory that we set &lt;code&gt;app.secret_key&lt;/code&gt;, which is used to sign the session cookie. You can generate a secret key using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'import secrets; print(secrets.token_hex())'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Refer to the &lt;a href="https://flask.palletsprojects.com/en/2.2.x/config/#SECRET_KEY"&gt;Flask documentation&lt;/a&gt; to learn how to compute the secret key value.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 4: Deploy the application to a containerized AWS Lambda with Web Adapter using AWS SAM
&lt;/h2&gt;

&lt;p&gt;1- Using your terminal, navigate to the root directory of the Bedrock Storyteller Flask application.&lt;/p&gt;

&lt;h3&gt;
  
  
  The directory structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;├── README.md
├── assets
├── imgs
├── app
│   ├── Dockerfile
│   ├── __init__.py
│   ├── app.py
│   ├── cognito_config.py
│   ├── requirements.txt
│   ├── static
│   └── templates
│       ├── about.html
│       ├── index.html
│       └── login.html
└── template.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2- Build the application using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sam build &lt;span class="nt"&gt;--use-container&lt;/span&gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;NAME_OF_YOUR_AWS_PROFILE&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace &lt;code&gt;NAME_OF_YOUR_AWS_PROFILE&lt;/code&gt; with the name of your AWS profile, if it's not the default profile.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3- Deploy the application using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sam deploy &lt;span class="nt"&gt;--guided&lt;/span&gt; &lt;span class="nt"&gt;--profile&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;NAME_OF_YOUR_AWS_PROFILE&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow the prompts to deploy the application:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Stack Name &lt;span class="o"&gt;[&lt;/span&gt;sam-app]: Bedrock Storyteller
AWS Region &lt;span class="o"&gt;[&lt;/span&gt;us-east-1]: us-east-1
Confirm changes before deploy &lt;span class="o"&gt;[&lt;/span&gt;y/N]: y
Allow SAM CLI IAM role creation &lt;span class="o"&gt;[&lt;/span&gt;Y/n]: y
Disable rollback on stack creation failures &lt;span class="o"&gt;[&lt;/span&gt;y/N]: n
FlaskFunction Function URL has no authentication. Is this okay? &lt;span class="o"&gt;[&lt;/span&gt;y/N]: y
Save arguments to configuration file &lt;span class="o"&gt;[&lt;/span&gt;Y/n]: y
SAM configuration file &lt;span class="o"&gt;[&lt;/span&gt;samconfig.toml]: (press Enter)
SAM configuration environment &lt;span class="o"&gt;[&lt;/span&gt;default]: (press Enter)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4- Once the deployment is complete, you will see the output with the &lt;code&gt;FlaskFunction&lt;/code&gt; URL. Copy the URL and open it in your browser.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you learned how to use AWS Cognito to authenticate users and authorize them to access a Bedrock Storyteller Flask application running on AWS Lambda with Web Adapter. You created a new AWS Cognito User Pool, added a new user to the User Pool, confirmed the user using a pre-signup Lambda trigger, updated the Bedrock Storyteller Flask application to use AWS Cognito for authentication, and deployed the application to a containerized AWS Lambda with Web Adapter using AWS SAM.&lt;/p&gt;

&lt;p&gt;The next step is to explore adding AWS Cognito sign-in and sign-out functionality to the application. I hope you have enjoyed this tutorial and found it helpful. Feel free to reach out if you have any questions or feedback.&lt;/p&gt;




&lt;h2&gt;
  
  
  Future Work
&lt;/h2&gt;

&lt;p&gt;[ ] Enhance data chunking of the Bedrock Storyteller Flask application.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cognito/"&gt;AWS Cognito&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/awslabs/aws-lambda-web-adapter/tree/main/examples/fastapi-response-streaming"&gt;fastapi-response-streaming&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>bedrock</category>
      <category>lambda</category>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>Alexa to Run Systems Manager Documents</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Sun, 25 Dec 2022 19:30:04 +0000</pubDate>
      <link>https://dev.to/aws-builders/alexa-to-run-systems-manager-documents-4akc</link>
      <guid>https://dev.to/aws-builders/alexa-to-run-systems-manager-documents-4akc</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;Alexa is Amazon's cloud-based voice service that powers hundreds of millions of devices. It also enables developers to build &lt;a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit" rel="noopener noreferrer"&gt;skills&lt;/a&gt;, which are like applications for Alexa. The Alexa skill is a cloud-based solution that provides the logic and functionality to perform certain tasks using voice commands. The skill is hosted on AWS Lambda and is built using &lt;a href="https://developer.amazon.com/en-US/docs/alexa/sdk/alexa-skills-kit-sdks.html" rel="noopener noreferrer"&gt;Alexa Skills Kit&lt;/a&gt; SDK (ASK) framework. The communication between &lt;a href="https://developer.amazon.com/en-US/docs/alexa/alexa-voice-service/get-started-with-alexa-voice-service.html" rel="noopener noreferrer"&gt;Alexa service&lt;/a&gt; and the Lambda function hosting the Alexa skill is &lt;a href="https://developer.amazon.com/en-US/docs/alexa/custom-skills/host-a-custom-skill-as-an-aws-lambda-function.html#:~:text=continuously%20run%20servers.-,Alexa%20encrypts,-its%20communications%20with" rel="noopener noreferrer"&gt;encrypted&lt;/a&gt; and the access permissions to the Lambda function are protected by AWS Identity and Access Management (IAM) policies. Therefore, we can be confident that Alexa skills are secure. &lt;/p&gt;

&lt;p&gt;This step-by-step tutorial walks you through the process of developing an Alexa cloud-based solution. We will build our Alexa skill using the Alexa Skills Kit (ASK) SDK for Python, which will allow us to run &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-ssm-docs.html" rel="noopener noreferrer"&gt;AWS Systems Manager Documents&lt;/a&gt; (SSM documents) using &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html" rel="noopener noreferrer"&gt;Run Command&lt;/a&gt;, a capability of Systems Manager. The SSM documents run on AWS Systems Manager &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/managed_instances.html" rel="noopener noreferrer"&gt;managed nodes&lt;/a&gt; (SSM managed EC2 instances) via &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/run-remote-documents.html" rel="noopener noreferrer"&gt;AWS-RunDocument&lt;/a&gt;. Using the AWS-RunDocument SSM document with Run Command lets us run any SSM document on our managed nodes without modifying our backend source code (Python), and it is a great way to standardize the process of running SSM documents on our managed nodes. SSM documents define the actions to perform on the managed instances and can be used for a variety of tasks such as patching the OS, stopping EC2 instances, installing software, updating software, and configuring operating systems. Basically, the sky is the limit when it comes to what SSM documents can do.&lt;/p&gt;

&lt;p&gt;It is worth mentioning that this serverless solution is built using AWS services and components with minimal associated costs. The solution also follows best practices and the principle of least privilege. Access to the AWS services and components is protected by AWS Identity and Access Management (IAM) policies, so we can be confident that our solution is secure.&lt;/p&gt;

&lt;p&gt;The ultimate goal of the tutorial is to simplify the process of building this cloud-based solution: you will learn how to create an Alexa skill as well as several AWS services and components, such as a DynamoDB table, an IAM service role, an SNS topic, Lambda functions, a Secrets Manager secret, and SSM documents. It is also a great opportunity to learn about the logic and architecture behind the solution.&lt;/p&gt;

&lt;p&gt;As an AWS Community Builder, this is part of my continuous effort to share knowledge and experience with the community. The value this solution adds is the ability to run SSM commands on our managed instances without logging into the AWS console or using the AWS CLI. We can perform tasks remotely and securely from anywhere using Alexa-enabled devices such as an Echo device or the Alexa mobile app. Running commands and tasks remotely on our managed instances with Alexa is a great way to save time and automate the process of running SSM documents.&lt;br&gt;
I intend to keep the tutorial as simple as possible and will not go into detail unless necessary. I will pass the baton to you to take the solution to the next level. I will also intentionally use the terms SSM documents and SSM commands interchangeably. Although the solution could be built using IaC, that would take away from the learning experience.&lt;/p&gt;

&lt;p&gt;Remember, the goal of the tutorial is to learn and to understand the logic and architecture of the solution. It's about the journey, not the destination, so let's buckle up and get started!&lt;/p&gt;


&lt;h2&gt;
  
  
  Alexa Cloud Solution: Logic and Architecture:
&lt;/h2&gt;

&lt;p&gt;The solution consists of several AWS services and components as shown on the diagram below. The logic flow might be a bit complex at first, but it becomes clearer as we go through the tutorial. &lt;/p&gt;

&lt;p&gt;1- The user wakes up the Alexa skill by saying, &lt;code&gt;Alexa, open {name of the skill}&lt;/code&gt; on Alexa enabled devices such as an Echo device or Alexa mobile app. Our skill is called &lt;code&gt;Command Control&lt;/code&gt;.&lt;br&gt;
2- The user initiates a voice command to run an SSM command for a specific instance tag by saying, &lt;code&gt;{name of the command} {name of the tag}&lt;/code&gt;, for example, &lt;code&gt;patch dev&lt;/code&gt;. Of course, we can change the utterances to match our needs, but for the purpose of this tutorial we will keep it simple.&lt;br&gt;
3- The Alexa service sends a JSON body request to invoke the Lambda function hosting the Alexa skill (AlexaSkill function). The AlexaSkill Lambda function validates the request and extracts the command and tag from the request.&lt;br&gt;
4- The AlexaSkill function sends a payload of the command and tag to the Master Lambda function. The Master Lambda function is invoked &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-sync.html" rel="noopener noreferrer"&gt;synchronously&lt;/a&gt;.&lt;br&gt;
5- Based on the command and tag received, the Master Lambda function queries a DynamoDB table to obtain the SSM document name and any SSM document parameters that correspond to the command.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: not all SSM documents require parameters. It's our responsibility to make sure that the SSM document name and any SSM document parameters are valid. The parameters in the DynamoDB table should be in JSON format.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;6- The Master Lambda function validates the response from the DynamoDB table. If the response is not valid, it returns an error message to the AlexaSkill function. Handled errors include cases where the command, the SSM document name, the SSM document parameters, or even the DynamoDB table name are invalid or don't exist.&lt;br&gt;
7- The Master Lambda function then checks whether the specified tag exists on any &lt;strong&gt;running&lt;/strong&gt; EC2 instances in the region (an instance is ignored if it is not in the running state). If that condition is met, the Master Lambda function checks whether those instances are SSM managed nodes. If they are, it sends the &lt;code&gt;AWS-RunDocument&lt;/code&gt; attributes to Run Command to run the SSM document on the EC2 instances. Otherwise, it returns an error message to the AlexaSkill function, for example when the specified tag does not exist on any running, SSM managed EC2 instances.&lt;/p&gt;

&lt;p&gt;8- Run Command runs the SSM document on the EC2 instances, filtering by the specified tag. Once Run Command has successfully started executing the SSM document on the EC2 instances, a &lt;code&gt;Pending&lt;/code&gt; status is returned to the Master Lambda function.&lt;br&gt;
9- The Master Lambda function sends the &lt;code&gt;Pending&lt;/code&gt; status, the command, the tag, and the number of instances the SSM document is running on back to the AlexaSkill function, which sends the response to the user.&lt;br&gt;
10- Run Command also sends notifications to the SNS topic to notify the user once the SSM document has been run on the EC2 instances, and again when the status is updated to &lt;code&gt;InProgress&lt;/code&gt;, &lt;code&gt;Failed&lt;/code&gt;, or &lt;code&gt;Success&lt;/code&gt;.&lt;/p&gt;
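&lt;p&gt;Steps 7 through 10 revolve around a single Boto3 call. The sketch below is illustrative only, not the tutorial's actual Master Lambda source; the tag key matches the constant &lt;code&gt;Alexa&lt;/code&gt; key, and the document name, parameters, topic ARN, and role ARN are placeholders:&lt;/p&gt;

```python
def build_targets(tag_value, tag_key='Alexa'):
    """Build the Run Command target filter for the constant 'Alexa' tag key."""
    return [{'Key': f'tag:{tag_key}', 'Values': [tag_value]}]


def run_document_on_tagged_instances(document_name, parameters, tag_value,
                                     topic_arn, service_role_arn):
    """Sketch of the hand-off to Run Command; all ARNs are placeholders."""
    import boto3  # imported lazily so the sketch reads without boto3 installed

    ssm = boto3.client('ssm')
    response = ssm.send_command(
        Targets=build_targets(tag_value),
        DocumentName=document_name,
        Parameters=parameters,
        # Run Command publishes status changes (InProgress, Success, Failed)
        # to the SNS topic by assuming the Systems Manager service role.
        NotificationConfig={
            'NotificationArn': topic_arn,
            'NotificationEvents': ['All'],
            'NotificationType': 'Command',
        },
        ServiceRoleArn=service_role_arn,
    )
    # Initially 'Pending'; this is what flows back to the AlexaSkill function.
    return response['Command']['Status']
```

&lt;p&gt;Filtering by &lt;code&gt;Targets&lt;/code&gt; rather than instance IDs is what lets the same command fan out to however many instances currently carry the tag.&lt;/p&gt;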

&lt;blockquote&gt;
&lt;p&gt;Note: if you prefer to receive notifications via email, SMS, or any other method supported by SNS, you can subscribe to the SNS topic. It's also worth noting that Run Command records the status of each SSM document execution in its command history, so we can always go to the Run Command history to see the status of the SSM document.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Important Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Master Lambda function only runs SSM documents on running and SSM managed nodes. Therefore, if the EC2 instance is not running or not SSM managed, the Master Lambda function will ignore it.&lt;/li&gt;
&lt;li&gt;To create SSM managed nodes, we need to install the SSM agent on the EC2 instance. The SSM agent is installed by default on Amazon Linux 2 EC2 instances. However, if we are using other Linux distributions, we need to install the SSM agent manually. Then, we need to attach an instance profile IAM role with the &lt;code&gt;AmazonSSMManagedInstanceCore&lt;/code&gt; policy to the EC2 instance. Please refer to the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up-ec2.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for more details on how to set up SSM managed nodes.&lt;/li&gt;
&lt;li&gt;We also need to tag the EC2 instances with a key-value pair to filter the EC2 instances. I have designated &lt;code&gt;Alexa&lt;/code&gt; to be the constant tag key, and we need to specify the tag value in the voice command to match the tag value we have assigned to the EC2 instances. Therefore, if the instance does not have &lt;code&gt;Alexa&lt;/code&gt; as the tag key, the Master Lambda function ignores the instance. If we need to use a different tag key, we should modify &lt;code&gt;tag_key = 'Alexa'&lt;/code&gt; in the Master Lambda function Python source code.&lt;/li&gt;
&lt;li&gt;As a best practice, I have configured extensive code-level logging to cover many error scenarios, which should help with troubleshooting and debugging. If an error is raised, the Master Lambda function sends an error message to the AlexaSkill function and eventually back to the user. The error messages are also logged in the Master Lambda function's CloudWatch Logs log group.&lt;/li&gt;
&lt;/ul&gt;
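&lt;p&gt;The "running and SSM managed" filtering described above can be sketched as an intersection of two queries. This is a minimal illustration under that assumption, not the Master Lambda's actual source:&lt;/p&gt;

```python
def intersect_managed(tagged_instance_ids, managed_instance_ids):
    """Keep only tagged, running instances that are also SSM managed nodes."""
    return sorted(set(tagged_instance_ids) & set(managed_instance_ids))


def find_running_tagged_managed(tag_value):
    """Sketch: IDs of running EC2 instances tagged Alexa=<tag_value> that are
    also registered with Systems Manager."""
    import boto3  # imported lazily so the sketch reads without boto3 installed
    ec2 = boto3.client('ec2')
    ssm = boto3.client('ssm')

    # Running instances carrying the Alexa tag.
    reservations = ec2.describe_instances(Filters=[
        {'Name': 'tag:Alexa', 'Values': [tag_value]},
        {'Name': 'instance-state-name', 'Values': ['running']},
    ])['Reservations']
    tagged = [i['InstanceId'] for r in reservations for i in r['Instances']]

    # Instances registered with Systems Manager (i.e. SSM managed nodes).
    info = ssm.describe_instance_information()['InstanceInformationList']
    managed = [m['InstanceId'] for m in info]

    return intersect_managed(tagged, managed)
```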


&lt;h3&gt;
  
  
  Alexa Cloud-based Solution Architecture Diagram
&lt;/h3&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu5nc03vg6at1h40slmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu5nc03vg6at1h40slmm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Tutorial Sections:
&lt;/h2&gt;

&lt;p&gt;The solution is broken down into several sections to make it easier to follow. The sections are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating an Alexa skill hosted on AWS Lambda function (AlexaSkill function) using the Alexa Skill Kit (ASK) SDK for Python and Boto3 SDK for Python&lt;/li&gt;
&lt;li&gt;Configuring the skill with Alexa service on Alexa Developer Console&lt;/li&gt;
&lt;li&gt;Creating an SSM document to stop running EC2 instances. We will also use the AWS SSM document &lt;code&gt;AWS-RunPatchBaseline&lt;/code&gt; to patch Amazon Linux 2 EC2 instances. This covers two scenarios: running AWS managed SSM documents and customer created SSM documents&lt;/li&gt;
&lt;li&gt;Creating an SNS topic to publish notifications from Systems Manager - Run Command&lt;/li&gt;
&lt;li&gt;Creating an IAM service role for Systems Manager to send notifications to the SNS topic&lt;/li&gt;
&lt;li&gt;Provisioning an on-demand DynamoDB table to store the commands, SSM document names and any SSM document parameters that are used by the Master Lambda function&lt;/li&gt;
&lt;li&gt;Creating a Lambda function (MasterLambda) to send SSM commands to Systems Manager - Run Command&lt;/li&gt;
&lt;li&gt;Creating a Lambda function (SlackLambda) to send notifications to Slack, or you may use any other notification method of your choice (optional)&lt;/li&gt;
&lt;li&gt;Testing Alexa skill and Run Command&lt;/li&gt;
&lt;li&gt;Party time 🎉🎉🎉&lt;/li&gt;
&lt;/ol&gt;


&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;An &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;An &lt;a href="https://developer.amazon.com/" rel="noopener noreferrer"&gt;Amazon Alexa Developer account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Python 3 (&amp;gt;= 3.6)&lt;/li&gt;
&lt;li&gt;pip&lt;/li&gt;
&lt;li&gt;virtualenv or venv&lt;/li&gt;
&lt;li&gt;Alexa enabled device or Alexa app on your mobile device&lt;/li&gt;
&lt;/ol&gt;


&lt;h2&gt;
  
  
  1: Creating Alexa skill hosted on AWS Lambda function (AlexaSkill)
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Step 1: Setting up the ASK SDK in a virtual environment on Linux or macOS
&lt;/h4&gt;

&lt;p&gt;1- Create a virtual environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;virtualenv command_control
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Note: &lt;code&gt;command_control&lt;/code&gt; is the name of the virtual environment. You can use any name you want. Also, for Windows, you may refer to the &lt;a href="https://developer.amazon.com/en-US/docs/alexa/alexa-skills-kit-sdk-for-python/set-up-the-sdk.html#set-up-sdk-in-virtual-environment:~:text=Option%201%3A%20Set%20up%20the%20SDK%20in%20a%20virtual%20environment" rel="noopener noreferrer"&gt;Alexa documentation&lt;/a&gt; for more details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;2- Activate the virtual environment:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source command_control/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;3- Install the ASK SDK:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install ask-sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Note: The ASK SDK is installed in the virtual environment. You can use the &lt;code&gt;deactivate&lt;/code&gt; command to exit the virtual environment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  Step 2: Add skill source code
&lt;/h4&gt;

&lt;p&gt;Create a Python file named &lt;code&gt;lambda_function.py&lt;/code&gt; in the &lt;code&gt;command_control&lt;/code&gt; root directory and copy the Python code from this &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents/blob/main/files/lambda_function.py" rel="noopener noreferrer"&gt;Lambda&lt;/a&gt; function.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: it's important to name the file &lt;code&gt;lambda_function.py&lt;/code&gt; because Lambda's default handler setting (&lt;code&gt;lambda_function.lambda_handler&lt;/code&gt;) looks for a module with this name when the function is invoked.&lt;/p&gt;
&lt;/blockquote&gt;
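&lt;p&gt;The AlexaSkill function's core job is pulling the command (the intent) and the tag (a slot value) out of the JSON request Alexa sends. The sketch below illustrates that extraction on a trimmed-down request; the intent and slot names here are hypothetical, not necessarily those defined in the repository's interaction model:&lt;/p&gt;

```python
def extract_command_and_tag(event):
    """Pull the intent name and a hypothetical 'tag' slot value out of an
    Alexa IntentRequest. Names are illustrative only."""
    request = event['request']
    if request['type'] != 'IntentRequest':
        return None, None
    intent = request['intent']
    slots = intent.get('slots', {})
    tag = slots.get('tag', {}).get('value')
    return intent['name'], tag


# A trimmed-down example of the request shape the Alexa service sends:
sample_event = {
    'request': {
        'type': 'IntentRequest',
        'intent': {
            'name': 'RunCommandIntent',
            'slots': {'tag': {'name': 'tag', 'value': 'webservers'}},
        },
    }
}
```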
&lt;h4&gt;
  
  
  Step 3: Packaging and Creating Lambda Function (AlexaSkill function)
&lt;/h4&gt;

&lt;p&gt;1- From the &lt;code&gt;command_control&lt;/code&gt; root directory, copy the lambda_function.py into &lt;code&gt;lib/python3.8/site-packages/&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd command_control
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp lambda_function.py lib/python3.8/site-packages/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;2- Navigate to the &lt;code&gt;site-packages&lt;/code&gt; directory (the &lt;code&gt;python3.8&lt;/code&gt; path segment depends on your local Python version) to create a zip file of the dependencies:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd lib/python3.8/site-packages
zip -r lambda-package.zip .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Note: There are multiple ways to create a zip file for the package. You can use any method you prefer or refer to &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;3- Navigate to AWS Lambda console and create a new function with the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Function name: &lt;code&gt;AlexaSkill&lt;/code&gt; or any name you prefer&lt;/li&gt;
&lt;li&gt;Runtime: Python 3.9 (or the version matching the &lt;code&gt;lib/python3.x&lt;/code&gt; directory you packaged from)&lt;/li&gt;
&lt;li&gt;Architecture: x86_64&lt;/li&gt;
&lt;li&gt;Role: Create a new role with basic Lambda permissions&lt;/li&gt;
&lt;li&gt;Click Create function&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: we will come back to add all necessary permissions to the lambda IAM role later.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5vh6hiklfnf9os20toq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5vh6hiklfnf9os20toq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Upload the &lt;code&gt;lambda-package.zip&lt;/code&gt; file to the Lambda function.&lt;/p&gt;

&lt;p&gt;5- Copy the ARN of the Lambda function. We will need it later.&lt;/p&gt;


&lt;h2&gt;
  
  
  2: Configuring skill with Alexa service via the Alexa Developer Console
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Step 1: Configuring Alexa service
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;a href="https://developer.amazon.com/alexa/console/ask" rel="noopener noreferrer"&gt;Alexa Developer Console&lt;/a&gt; and login to create a skill.&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Create Skill&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnuoy0tz7lq3yjtxngbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnuoy0tz7lq3yjtxngbk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Name the skill &lt;code&gt;Command Control&lt;/code&gt; and click &lt;code&gt;Next&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkie0r1idft235jterw3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkie0r1idft235jterw3k.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the &lt;code&gt;Experience, Model, Hosting service&lt;/code&gt; page, select options as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Other&lt;/code&gt; for &lt;code&gt;Choose a type of experience&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Custom&lt;/code&gt; for &lt;code&gt;Choose a model&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Provision your own&lt;/code&gt; for &lt;code&gt;Hosting service&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, click on the &lt;code&gt;Next&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd7u0nntjdyf0svkwecv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd7u0nntjdyf0svkwecv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On &lt;code&gt;Templates&lt;/code&gt; page, select the &lt;code&gt;Start from scratch&lt;/code&gt; option and click on the &lt;code&gt;Next&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;On &lt;code&gt;Review&lt;/code&gt; page, click on the &lt;code&gt;Create Skill&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  Step 2: Configuring skill with Alexa Service
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;code&gt;CUSTOM&lt;/code&gt; section on the left hand side menu, click on &lt;code&gt;Interaction Model&lt;/code&gt;. Then, click on &lt;code&gt;JSON Editor&lt;/code&gt; tab. Copy the JSON text from this &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents/blob/main/files/interaction-model.json" rel="noopener noreferrer"&gt;interaction model&lt;/a&gt; file and paste it into the JSON editor. Then, click on the &lt;code&gt;Save Model&lt;/code&gt; button followed by the &lt;code&gt;Build Model&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the JSON text is the interaction model for the skill. It's a quicker way to define the intents and utterances for the skill instead of doing it manually. To learn more about the interaction model, refer to &lt;a href="https://developer.amazon.com/en-US/docs/alexa/custom-skills/create-the-interaction-model-for-your-skill.html" rel="noopener noreferrer"&gt;Alexa documentation&lt;/a&gt;. Also, ensure that you have not received any errors while building the model.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxfkwickclashk3ioguo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxfkwickclashk3ioguo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;code&gt;CUSTOM&lt;/code&gt; section on the left hand side menu, click on &lt;code&gt;Endpoint&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;code&gt;AWS Lambda ARN&lt;/code&gt; option, which is selected by default. Then, copy &lt;code&gt;Your Skill ID&lt;/code&gt; from the &lt;code&gt;Endpoint&lt;/code&gt; page. The Skill ID is the unique identifier for the skill. It allows Alexa to identify the skill and to invoke the Lambda function securely. We will need it later.&lt;/li&gt;
&lt;li&gt;We will come back to this page to add our AlexaSkill Lambda function ARN.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbddq106fwhrxhtnline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbddq106fwhrxhtnline.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 3: Configuring Lambda function hosting Alexa skill with Alexa service
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate back to the AWS Lambda console and select the &lt;code&gt;AlexaSkill&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;Add an Alexa trigger to the Lambda function as shown below and paste the &lt;code&gt;Your Skill ID&lt;/code&gt; from the &lt;code&gt;Endpoint&lt;/code&gt; page. Then, click on the &lt;code&gt;Add&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Head back to the Alexa Developer Console and add the AlexaSkill Lambda function ARN to the &lt;code&gt;Endpoint&lt;/code&gt; page. Then, click on the &lt;code&gt;Save Endpoints&lt;/code&gt; button. Refer to Step 2, bullet 4. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijdhwywujhax1ha59hn2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijdhwywujhax1ha59hn2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskeotlxp0r8jvbbw3gva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskeotlxp0r8jvbbw3gva.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under Configuration, select the &lt;code&gt;Permissions&lt;/code&gt; tab and click on the &lt;code&gt;Role name&lt;/code&gt; link to open the IAM role in a new tab.&lt;/li&gt;
&lt;li&gt;Create an inline policy to allow the Lambda function to invoke other Lambda functions. Define the policy as shown below:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Service: Lambda&lt;/li&gt;
&lt;li&gt;Actions: InvokeFunction&lt;/li&gt;
&lt;li&gt;Resources: &lt;code&gt;arn:aws:lambda:us-east-1:123456789012:function:*&lt;/code&gt; (replace the account number and region with your own)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or, you can use the following JSON to create the policy:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeLambda",
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Replace the account number and region with your own&lt;/li&gt;
&lt;/ul&gt;
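&lt;p&gt;This policy is what lets the AlexaSkill function invoke the Master Lambda function synchronously. A minimal sketch of that call follows; the function name and payload field names are placeholders, not values taken from the repository source:&lt;/p&gt;

```python
import json


def build_payload(command, tag):
    """JSON payload the AlexaSkill function forwards to the Master Lambda.
    Field names are illustrative only."""
    return json.dumps({'command': command, 'tag': tag})


def invoke_master(command, tag, function_name='MasterLambda'):
    """Sketch of a synchronous Lambda-to-Lambda invocation."""
    import boto3  # imported lazily so the sketch reads without boto3 installed
    lam = boto3.client('lambda')
    # InvocationType='RequestResponse' is a synchronous invocation: the
    # AlexaSkill function waits here for the Master Lambda's result.
    response = lam.invoke(
        FunctionName=function_name,
        InvocationType='RequestResponse',
        Payload=build_payload(command, tag),
    )
    return json.loads(response['Payload'].read())
```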

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4zq7a1og6tqnelql1fn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4zq7a1og6tqnelql1fn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;code&gt;Review policy&lt;/code&gt; button, give the policy a name and click on the &lt;code&gt;Create policy&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: you may create a customer managed policy and attach it to the role instead of creating an inline policy. Please, refer to &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  3: Creating SSM document to stop EC2 instances
&lt;/h2&gt;

&lt;p&gt;As previously mentioned, we will create our own SSM document to stop EC2 instances, and we will also use an Amazon managed or built-in SSM document to patch Amazon Linux 2 EC2 instances. By following this strategy, we would cover two case scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customer created SSM document&lt;/li&gt;
&lt;li&gt;Amazon managed SSM document&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reasoning behind this is that we may have our own SSM documents to perform certain tasks. Also, we may need to use Amazon managed or built-in SSM documents to perform other tasks. &lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Customer created SSM document:
&lt;/h4&gt;

&lt;p&gt;The steps below create an SSM document that stops Amazon Linux 2 EC2 instances:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS Systems Manager console and select &lt;code&gt;Documents&lt;/code&gt; from the left hand side menu.&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Create document&lt;/code&gt; button and select &lt;code&gt;Command or Session&lt;/code&gt; from the drop-down menu.&lt;/li&gt;
&lt;li&gt;On the &lt;code&gt;Create document&lt;/code&gt; page, enter the following information:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Name: &lt;code&gt;StopEC2Instances&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Document type: Command document&lt;/li&gt;
&lt;li&gt;Content format: YAML&lt;/li&gt;
&lt;li&gt;Content: Copy the YAML text from this &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents/blob/main/files/StopEC2Instances.yaml" rel="noopener noreferrer"&gt;file&lt;/a&gt;, paste it into the &lt;code&gt;Content&lt;/code&gt; field, and click on the &lt;code&gt;Create document&lt;/code&gt; button.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the SSM document works on Amazon Linux 2 instances. If you are using a different operating system, you may need to modify the SSM document to work with your operating system.&lt;/p&gt;
&lt;/blockquote&gt;
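&lt;p&gt;The linked file contains the actual document. Purely for orientation, a minimal command document of this kind looks roughly like the fragment below (the step name and shell command are illustrative, and whether a shutdown stops or terminates the instance depends on its instance-initiated shutdown behavior):&lt;/p&gt;

```yaml
schemaVersion: '2.2'
description: Stop the instance the document runs on (illustrative sketch only)
mainSteps:
  - action: aws:runShellScript
    name: stopInstance
    inputs:
      runCommand:
        - sudo shutdown -h now
```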

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtd9h2n9vv4scs44yxjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtd9h2n9vv4scs44yxjj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2: Amazon managed SSM document:
&lt;/h4&gt;

&lt;p&gt;This step is informational only. The Amazon managed document called &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-about-aws-runpatchbaseline.html" rel="noopener noreferrer"&gt;&lt;code&gt;AWS-RunPatchBaseline&lt;/code&gt;&lt;/a&gt; is used to patch EC2 instances. It works on all supported operating systems (Windows, Linux, and macOS). The document requires parameters to be passed to it. The following are some of the parameters that can be passed to the document; not all of them are required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operation: The operation to perform. Valid values: Scan, Install&lt;/li&gt;
&lt;li&gt;RebootOption: The reboot option for the instances. Valid values: RebootIfNeeded, NoReboot&lt;/li&gt;
&lt;li&gt;Target selection: The instances to patch. Valid values: InstanceIds, Tags&lt;/li&gt;
&lt;li&gt;Timeout(seconds): The maximum time (in seconds) that the command can run before it is considered to have failed. The default value is 3600 seconds.&lt;/li&gt;
&lt;li&gt;Rate control: The maximum number of instances that are allowed to run the command at the same time. You can specify a number of instances, such as 10, or a percentage of instances, such as 10%. The default value is 50.&lt;/li&gt;
&lt;li&gt;Error threshold: The maximum number of errors allowed before the system stops sending the command to additional targets. &lt;/li&gt;
&lt;li&gt;Output S3 bucket: The S3 bucket where the command execution details are stored.&lt;/li&gt;
&lt;li&gt;SNS topic: The SNS topic where notifications are sent when the command status changes.&lt;/li&gt;
&lt;li&gt;IAM role: The IAM role that allows Systems Manager to send notifications.&lt;/li&gt;
&lt;li&gt;Event notifications: The event notifications that trigger notifications about command status changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following are the parameters that we will pass to the AWS-RunPatchBaseline document via RunDocument:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document name: AWS-RunPatchBaseline&lt;/li&gt;
&lt;li&gt;Document version: $LATEST&lt;/li&gt;
&lt;li&gt;Operation: Install&lt;/li&gt;
&lt;li&gt;RebootOption: RebootIfNeeded&lt;/li&gt;
&lt;li&gt;Target selection: Tags&lt;/li&gt;
&lt;li&gt;Timeout(seconds): 3600&lt;/li&gt;
&lt;li&gt;Rate control: 50&lt;/li&gt;
&lt;li&gt;Error threshold: 0&lt;/li&gt;
&lt;li&gt;SNS topic ARN: SSMCommandNotifications&lt;/li&gt;
&lt;li&gt;IAM role ARN: Systems Manager IAM role&lt;/li&gt;
&lt;li&gt;Event notifications: All&lt;/li&gt;
&lt;li&gt;Event notification Type: Command&lt;/li&gt;
&lt;li&gt;Comment: Alexa - AWS-RunPatchBaseline&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: we will configure the minimum required parameters to run AWS-RunPatchBaseline document. For more information about these parameters, refer to &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-about-aws-runpatchbaseline.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
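&lt;p&gt;Tying this back to the DynamoDB table: a command entry along the lines below would carry the document name and its JSON-formatted parameters, as the earlier note requires. The attribute names are illustrative, not necessarily those used in the tutorial's actual items:&lt;/p&gt;

```python
import json

# Illustrative item for the SSMCommands table. 'Command' is the partition
# key; the parameter values mirror the AWS-RunPatchBaseline list above.
# send_command expects each parameter value as a list of strings.
patch_item = {
    'Command': 'patch',
    'DocumentName': 'AWS-RunPatchBaseline',
    # Parameters are stored as a JSON string, per the earlier note.
    'Parameters': json.dumps({
        'Operation': ['Install'],
        'RebootOption': ['RebootIfNeeded'],
    }),
}


def load_parameters(item):
    """Decode the JSON-formatted parameters, ready for ssm.send_command."""
    return json.loads(item['Parameters'])
```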


&lt;h2&gt;
  
  
  4: Creating SNS topic to receive notifications from Systems Manager - Run Command
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS SNS console and click on the &lt;code&gt;Create topic&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Enter the following information:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Type: Standard&lt;/li&gt;
&lt;li&gt;Topic name: &lt;code&gt;SSMCommandNotifications&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Display name: &lt;code&gt;SSMCommandNotifications&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Create topic&lt;/code&gt; button.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Capture the &lt;code&gt;Topic ARN&lt;/code&gt; from the &lt;code&gt;Topic details&lt;/code&gt; page. We will use this ARN later as an environment variable for the Lambda function.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: this SNS topic receives notifications from Systems Manager - Run Command.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  5: Creating IAM role to allow Systems Manager to send notifications to SNS
&lt;/h2&gt;

&lt;p&gt;This service role is assumed by Systems Manager to publish notifications to the SNS topic when the SSM command status changes. &lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Creating the IAM role:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS IAM console and click on the &lt;code&gt;Create role&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;AWS service&lt;/code&gt; from the &lt;code&gt;Select type of trusted entity&lt;/code&gt; drop-down menu. Then, select &lt;code&gt;Systems Manager&lt;/code&gt; from the &lt;code&gt;Choose a use case for other AWS services&lt;/code&gt; drop-down menu and select &lt;code&gt;Systems Manager&lt;/code&gt; again. Click on the &lt;code&gt;Next: Permissions&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dywqqy4jfgokp2d1xzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dywqqy4jfgokp2d1xzq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;code&gt;Next&lt;/code&gt; button on the &lt;code&gt;Attach permissions&lt;/code&gt; page. We will attach an inline policy to the role in the next step.&lt;/li&gt;
&lt;li&gt;On the &lt;code&gt;Name, review and create&lt;/code&gt; page, give the role a name and click on the &lt;code&gt;Create role&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  Step 2: Attaching inline policy to the IAM role:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS IAM console and select &lt;code&gt;Roles&lt;/code&gt; from the left hand side menu.&lt;/li&gt;
&lt;li&gt;Search for the role you created in the previous step and click on the role name.&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Add permissions&lt;/code&gt; button and select &lt;code&gt;Create inline policy&lt;/code&gt; from the drop-down menu.&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;JSON&lt;/code&gt; tab and paste the following JSON body into the &lt;code&gt;Policy document&lt;/code&gt; field:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VisualEditor0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sns:Publish"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:sns:us-east-1:123456789012:SSMCommandNotifications"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Note: replace the region, account ID, and SNS topic name with your own values. Or, you can use &lt;code&gt;"arn:aws:sns:*:*:*"&lt;/code&gt; to allow the IAM role to send notifications to all SNS topics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;code&gt;Review policy&lt;/code&gt; button and give the policy a name. Then, click on the &lt;code&gt;Create policy&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Capture the IAM role ARN. We will use this ARN later as an environment variable for the Lambda function.&lt;/li&gt;
&lt;/ol&gt;


&lt;h2&gt;
  
  
  6: Provisioning on-demand DynamoDB table
&lt;/h2&gt;

&lt;p&gt;The DynamoDB table stores the commands, SSM document names, and any SSM document parameters used by the Master Lambda function.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Creating DynamoDB table:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS DynamoDB console and click on the &lt;code&gt;Create table&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Enter the following information:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Table name: &lt;code&gt;SSMCommands&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Primary key: &lt;code&gt;Command&lt;/code&gt; (String)&lt;/li&gt;
&lt;li&gt;Table settings: &lt;code&gt;Customize settings&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Table class: &lt;code&gt;DynamoDB Standard&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Read/write capacity settings: &lt;code&gt;On-demand&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Create&lt;/code&gt; button.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Capture the table name. We will add the name as an environment variable to the Lambda function.&lt;/li&gt;
&lt;/ol&gt;
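&lt;p&gt;For reference, here is a minimal Python sketch of how the Master Lambda function might look up an item in this table. The &lt;code&gt;build_query_params&lt;/code&gt; helper is my own illustration, not the tutorial's exact code; the table name and partition key match the steps above:&lt;/p&gt;

```python
def build_query_params(command):
    # "Command" is the table's partition key, as created above.
    # This returns arguments in the low-level DynamoDB Query format.
    return {
        "TableName": "SSMCommands",
        "KeyConditionExpression": "Command = :c",
        "ExpressionAttributeValues": {":c": {"S": command}},
    }

# The Master Lambda would then run something like:
# items = boto3.client("dynamodb").query(**build_query_params("shutdown"))["Items"]
```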

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsuyuy7zudommazm44pod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsuyuy7zudommazm44pod.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2: Creating DynamoDB table items:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS DynamoDB console and select the &lt;code&gt;SSMCommands&lt;/code&gt; table.&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Actions&lt;/code&gt; button and select &lt;code&gt;Create item&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For the &lt;code&gt;Command&lt;/code&gt; value field, enter &lt;code&gt;shutdown&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Alexa service does not support the &lt;code&gt;stop&lt;/code&gt; command. It is a reserved word. Therefore, we will use the &lt;code&gt;shutdown&lt;/code&gt; command to stop the EC2 instances. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;code&gt;Add new attribute&lt;/code&gt; button and select &lt;code&gt;String&lt;/code&gt; from the dropdown menu.&lt;/li&gt;
&lt;li&gt;For the &lt;code&gt;Attribute name&lt;/code&gt; field, enter &lt;code&gt;DocumentName&lt;/code&gt; and for the &lt;code&gt;Value&lt;/code&gt; field, enter &lt;code&gt;StopEC2Instances&lt;/code&gt;. Click on the &lt;code&gt;Create item&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Repeat steps 2-5 and create a new item with the following information:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Command: &lt;code&gt;patch&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;DocumentName: &lt;code&gt;AWS-RunPatchBaseline&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;code&gt;Add new attribute&lt;/code&gt; button and select &lt;code&gt;String&lt;/code&gt; from the dropdown menu. For the &lt;code&gt;Attribute name&lt;/code&gt; field, enter &lt;code&gt;Parameters&lt;/code&gt; and for the &lt;code&gt;Value&lt;/code&gt; field, enter the following JSON:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   "Operation":[
      "Install"
   ],
   "RebootOption":[
      "RebootIfNeeded"
   ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Click on the &lt;code&gt;Create item&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;
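&lt;p&gt;If you prefer to seed the table programmatically instead of clicking through the console, the two items above can be expressed in DynamoDB's attribute-value format. This is a hedged sketch; the &lt;code&gt;ssm_command_items&lt;/code&gt; helper is illustrative, but the item contents mirror the console steps exactly:&lt;/p&gt;

```python
import json

def ssm_command_items():
    # The "patch" item stores its SSM document parameters as a JSON string,
    # matching the Parameters attribute entered in the console above.
    patch_parameters = {
        "Operation": ["Install"],
        "RebootOption": ["RebootIfNeeded"],
    }
    return [
        {"Command": {"S": "shutdown"},
         "DocumentName": {"S": "StopEC2Instances"}},
        {"Command": {"S": "patch"},
         "DocumentName": {"S": "AWS-RunPatchBaseline"},
         "Parameters": {"S": json.dumps(patch_parameters)}},
    ]

# Each item could then be written with:
# boto3.client("dynamodb").put_item(TableName="SSMCommands", Item=item)
```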

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9b17tnijx2vt1hdg4zs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9b17tnijx2vt1hdg4zs.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0emrfrvjbmv97k31h7ow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0emrfrvjbmv97k31h7ow.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  7: Creating Master Lambda function (MasterLambda) to send SSM commands to Run Command
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Step 1: Creating Lambda function:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS Lambda console and create a new function with the following settings:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Function name: &lt;code&gt;MasterLambda&lt;/code&gt; or any name you prefer&lt;/li&gt;
&lt;li&gt;Runtime: Python 3.9&lt;/li&gt;
&lt;li&gt;Architecture: x86_64&lt;/li&gt;
&lt;li&gt;Role: Create a new role with basic Lambda permissions&lt;/li&gt;
&lt;li&gt;Click Create function&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: we will revisit the Lambda function's IAM role to add all necessary permissions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Copy the Python source code for the &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents/blob/main/files/MasterLambda.py" rel="noopener noreferrer"&gt; MasterLambda&lt;/a&gt; function and paste it into the &lt;code&gt;Code source&lt;/code&gt; on the Lambda console. Then, click on the &lt;code&gt;Deploy&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Under &lt;code&gt;Configuration&lt;/code&gt; tab, select &lt;code&gt;Permissions&lt;/code&gt; and click on &lt;code&gt;Role name&lt;/code&gt; link to open the IAM role on a new tab.&lt;/li&gt;
&lt;li&gt;Create inline policy and copy the inline policy from this &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents/blob/main/files/inline_policy.json" rel="noopener noreferrer"&gt;JSON file&lt;/a&gt; and paste it into the JSON editor. Replace the account number with your own. Then, click on the &lt;code&gt;Review policy&lt;/code&gt; button. Give the policy a name and click on the &lt;code&gt;Create policy&lt;/code&gt; button. The added policy will allow the lambda function to perform the following actions:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Service: SSM - Actions: SendCommand, ListCommands, DescribeInstanceInformation&lt;/li&gt;
&lt;li&gt;Service: ec2 - Actions: DescribeInstances&lt;/li&gt;
&lt;li&gt;Service: SNS - Actions: Publish&lt;/li&gt;
&lt;li&gt;Service: DynamoDB - Actions: Query&lt;/li&gt;
&lt;li&gt;Service: IAM - Actions: PassRole&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: following AWS best practices and security principles, we are using the least privilege principle to grant the lambda function only the permissions it needs to run and to communicate other AWS services successfully. For more information, refer to &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;code&gt;Configuration&lt;/code&gt; tab, select &lt;code&gt;General configuration&lt;/code&gt; and click on &lt;code&gt;Edit&lt;/code&gt; button. Change the timeout to 30 seconds and click on the &lt;code&gt;Save&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: I have tested a 30-second timeout with 128 MB of memory, and they are sufficient for this solution.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;code&gt;Configuration&lt;/code&gt; tab, select &lt;code&gt;Environment variables&lt;/code&gt; and click on &lt;code&gt;Edit&lt;/code&gt; button. Add the following environment variables:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;DynamoDB_Table_Name&lt;/code&gt;: &lt;code&gt;SSMCommands&lt;/code&gt; (replace the DynamoDB table name from the previous step)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SNS_Topic_ARN&lt;/code&gt;: &lt;code&gt;arn:aws:sns:us-east-1:123456789012:SSMCommandNotifications&lt;/code&gt; (replace the SNS topic ARN from the previous step)&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;SSM_Role_ARN&lt;/code&gt;: &lt;code&gt;arn:aws:iam::123456789012:role/SSMCommandRole&lt;/code&gt; (replace the IAM role ARN from the previous step)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Master Lambda function uses these environment variables to access the DynamoDB table and to pass the SNS topic and the IAM role on to Systems Manager.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;code&gt;Save&lt;/code&gt; button.
&lt;/li&gt;
&lt;/ol&gt;
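&lt;p&gt;Inside the Master Lambda function, these three variables can be read with &lt;code&gt;os.environ&lt;/code&gt;. A minimal sketch (the variable names match the ones configured above; the &lt;code&gt;load_config&lt;/code&gt; helper is my own):&lt;/p&gt;

```python
import os

def load_config():
    # Raises KeyError if any variable is missing, which surfaces
    # a misconfigured Lambda early instead of failing mid-request.
    return {
        "table_name": os.environ["DynamoDB_Table_Name"],
        "sns_topic_arn": os.environ["SNS_Topic_ARN"],
        "ssm_role_arn": os.environ["SSM_Role_ARN"],
    }
```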

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z0l2v22hl843qduf3h5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z0l2v22hl843qduf3h5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2: Configuring communication between MasterLambda function and AlexaSkill Lambda function:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS Lambda console and select the &lt;code&gt;AlexaSkill&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;Under &lt;code&gt;Configuration&lt;/code&gt; tab, select &lt;code&gt;Environment variables&lt;/code&gt; and click on &lt;code&gt;Edit&lt;/code&gt; button. Add the following environment variables:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;MasterLambdaARN&lt;/code&gt;: &lt;code&gt;arn:aws:lambda:us-east-1:123456789012:function:MasterLambda&lt;/code&gt; (replace the ARN with your own value or copy the Master Lambda function ARN from the previous step)&lt;/li&gt;
&lt;/ul&gt;
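&lt;p&gt;With that environment variable in place, the AlexaSkill function can invoke the MasterLambda function through the Lambda API. This is a hedged sketch only; the payload shape is illustrative rather than the tutorial's exact schema:&lt;/p&gt;

```python
import json

def build_invoke_args(master_lambda_arn, command, tag_value):
    # RequestResponse waits for MasterLambda's result so the
    # Alexa skill can speak the command status back to the user.
    return {
        "FunctionName": master_lambda_arn,
        "InvocationType": "RequestResponse",
        "Payload": json.dumps({"command": command, "tag": tag_value}),
    }

# In the skill handler:
# response = boto3.client("lambda").invoke(
#     **build_invoke_args(os.environ["MasterLambdaARN"], "shutdown", "testing"))
```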


&lt;h2&gt;
  
  
  8: Creating Lambda function (SlackLambda) to send notifications to Slack (Optional)
&lt;/h2&gt;
&lt;h4&gt;
  
  
  Step 1: Creating Secrets Manager secret to store Slack webhook URL:
&lt;/h4&gt;

&lt;p&gt;The AWS Secrets Manager is used to store the Slack webhook URL. The webhook URL is a unique URL that is used to send messages to a specific Slack channel. For more information about Slack webhooks, refer to &lt;a href="https://api.slack.com/messaging/webhooks" rel="noopener noreferrer"&gt;Slack documentation&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS Secrets Manager console and click on the &lt;code&gt;Store a new secret&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Enter the following information:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Secret type: &lt;code&gt;Other type of secret&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Key/Value pairs: select &lt;code&gt;Plaintext&lt;/code&gt; and remove everything from the block.&lt;/li&gt;
&lt;li&gt;Paste the full Slack webhook URL and click on the &lt;code&gt;Next&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Secret name: &lt;code&gt;SlackWebhookURL&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Description: &lt;code&gt;Slack webhook URL&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Next&lt;/code&gt; button and &lt;code&gt;Next&lt;/code&gt; again. Then, click on the &lt;code&gt;Store&lt;/code&gt; button.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: we are using AWS Secrets Manager to store the Slack webhook URL. For more information, refer to &lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h9ovw7wwatqpz65syvn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h9ovw7wwatqpz65syvn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2: Creating Slack lambda function:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to AWS Lambda console and create a new function with the following settings:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Function name: &lt;code&gt;SlackLambda&lt;/code&gt; or any name you prefer&lt;/li&gt;
&lt;li&gt;Runtime: Python 3.7&lt;/li&gt;
&lt;li&gt;Architecture: x86_64&lt;/li&gt;
&lt;li&gt;Role: Create a new role with basic Lambda permissions&lt;/li&gt;
&lt;li&gt;Click Create function&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: I chose Python 3.7 as the runtime for this function because the &lt;code&gt;requests&lt;/code&gt; library is included in the AWS Lambda execution environment for Python 3.7 and below, which means we don't have to create a deployment package. The &lt;code&gt;requests&lt;/code&gt; library is not included in Python 3.8 and above. For more information, refer to this &lt;a href="https://aws.amazon.com/blogs/compute/upcoming-changes-to-the-python-sdk-in-aws-lambda/" rel="noopener noreferrer"&gt;AWS blog post&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Copy the Python code for the &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents/blob/main/files/SlackLambda.py" rel="noopener noreferrer"&gt;SlackLambda&lt;/a&gt; function and paste it into the &lt;code&gt;Code source&lt;/code&gt; on the Lambda console. Then, click on the &lt;code&gt;Deploy&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Under &lt;code&gt;Configuration&lt;/code&gt; tab, select &lt;code&gt;Permissions&lt;/code&gt; and click on &lt;code&gt;Role name&lt;/code&gt; link to open the IAM role on a new tab.&lt;/li&gt;
&lt;li&gt;Create inline policy and copy the inline policy from this &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents/blob/main/files/SecretsManager.json" rel="noopener noreferrer"&gt;JSON&lt;/a&gt; and paste it into the JSON editor. Then, click on the &lt;code&gt;Review policy&lt;/code&gt; button. Give the policy a name and click on the &lt;code&gt;Create policy&lt;/code&gt; button. The added policy is to allow the Lambda function to get the Slack webhook URL from Secrets Manager by performing the following action:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Service: secretsmanager - Actions: GetSecretValue&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;code&gt;Configuration&lt;/code&gt; tab, select &lt;code&gt;General configuration&lt;/code&gt; and click on &lt;code&gt;Edit&lt;/code&gt; button. Change the timeout to &lt;code&gt;30&lt;/code&gt; seconds and click on the &lt;code&gt;Save&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;On the &lt;code&gt;SlackLambda.py&lt;/code&gt; file, make sure to update the &lt;code&gt;SLACK_CHANNEL&lt;/code&gt; variable with the Slack channel name. Then, click on the &lt;code&gt;Deploy&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;SLACK_CHANNEL&lt;/code&gt;: the Slack channel name&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;If you are using a different name for the Secrets Manager secret, make sure to update the &lt;code&gt;SLACK_HOOK_URL&lt;/code&gt; variable with the name of the secret for the Slack URL. Then, click on the &lt;code&gt;Deploy&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;SLACK_HOOK_URL&lt;/code&gt; = boto3.client('secretsmanager').get_secret_value(SecretId='SlackWebhookURL')['SecretString']&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replace &lt;code&gt;SecretId&lt;/code&gt; with your secret name.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
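&lt;p&gt;To make the flow concrete, here is a minimal sketch of how SlackLambda might turn an incoming SNS event into a Slack webhook payload. The channel name, helper name, and message shape are my own illustrations; only the SNS event structure is standard:&lt;/p&gt;

```python
SLACK_CHANNEL = "#aws-notifications"  # illustrative; update to your channel name

def build_slack_payload(sns_event):
    # SNS delivers the Run Command status notification in
    # Records[0].Sns.Message; forward it as the Slack message text.
    message = sns_event["Records"][0]["Sns"]["Message"]
    return {"channel": SLACK_CHANNEL, "text": message}

# The payload is then POSTed to the webhook URL fetched from Secrets
# Manager, e.g. requests.post(SLACK_HOOK_URL, data=json.dumps(payload))
```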
&lt;h4&gt;
  
  
  Step 3: Subscribing SlackLambda function to the SNS topic:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;From the AWS Lambda console, select the &lt;code&gt;SlackLambda&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;Under &lt;code&gt;Function overview&lt;/code&gt;, click on the &lt;code&gt;Add trigger&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;SNS&lt;/code&gt; and then select the &lt;code&gt;SSMCommandNotifications&lt;/code&gt; topic. Then, click on the &lt;code&gt;Add&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the &lt;code&gt;SSMCommandNotifications&lt;/code&gt; topic is the SNS topic that we created in the previous section. If you navigate to the SNS console, you will see that the &lt;code&gt;SlackLambda&lt;/code&gt; function is subscribed to the &lt;code&gt;SSMCommandNotifications&lt;/code&gt; topic.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  9: Testing Alexa skill solution
&lt;/h2&gt;

&lt;p&gt;Now that we have configured the Alexa skill solution, we are ready to test it. We will spin up two Amazon Linux 2 EC2 instances and then run the &lt;code&gt;shutdown&lt;/code&gt; command in the Alexa Developer Console simulator to stop the first instance. Then, we will run the &lt;code&gt;patch&lt;/code&gt; command to patch the second instance. First, however, we need to tag the instances with the following key-value pairs. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance&lt;/th&gt;
&lt;th&gt;Tag Key&lt;/th&gt;
&lt;th&gt;Tag Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Instance 1&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Alexa&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;testing&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Instance 2&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Alexa&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;patching&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the tags are case-sensitive. The &lt;code&gt;A&lt;/code&gt; in Alexa has to be capitalized, and &lt;code&gt;testing&lt;/code&gt; and &lt;code&gt;patching&lt;/code&gt; have to be lower case. These are the commands and tags that we have defined in the Alexa service. If you want to use different commands and tags, you will need to update the Alexa service manually or update the Interaction Model JSON file. &lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  Step 1: Tagging EC2 instances:
&lt;/h4&gt;

&lt;p&gt;During the provisioning of the two EC2 instances, add the above tags to the instances accordingly. Alternatively, you can add the tags to the instances after provisioning them. However, it might take a few minutes for Systems Manager to detect the tags.&lt;/p&gt;

&lt;p&gt;How to add tags to the EC2 instances after provisioning them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the EC2 console and select the first instance.&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Tags&lt;/code&gt; tab and click on the &lt;code&gt;Add/Edit tags&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Add the following tags:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Alexa&lt;/td&gt;
&lt;td&gt;testing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Repeat the same steps for the second instance and add the following tags:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Alexa&lt;/td&gt;
&lt;td&gt;patching&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
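&lt;p&gt;The same tagging can be done programmatically with the EC2 API instead of the console. A minimal sketch, assuming placeholder instance IDs (the &lt;code&gt;build_tag_args&lt;/code gt; helper is my own):&lt;/p&gt;

```python
def build_tag_args(instance_id, tag_value):
    # Tag keys and values are case-sensitive, as noted above:
    # the key must be "Alexa", the value "testing" or "patching".
    return {
        "Resources": [instance_id],
        "Tags": [{"Key": "Alexa", "Value": tag_value}],
    }

# boto3.client("ec2").create_tags(**build_tag_args("i-0123456789abcdef0", "testing"))
```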

&lt;blockquote&gt;
&lt;p&gt;Note: the EC2 instances should be managed by Systems Manager. To confirm that, navigate to the Systems Manager console and under &lt;code&gt;Node Management&lt;/code&gt;, select &lt;code&gt;Fleet Manager&lt;/code&gt;. You should see the two instances listed there.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  Step 2: Testing Alexa skill:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Alexa Developer Console and click on the name of the skill, &lt;code&gt;Command Control&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Click on the &lt;code&gt;Test&lt;/code&gt; tab and toggle the &lt;code&gt;Skill testing is enabled in&lt;/code&gt; to &lt;code&gt;Development&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkv6bdpqa3pq0j5h1e09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkv6bdpqa3pq0j5h1e09.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next to the microphone icon, type &lt;code&gt;open command control&lt;/code&gt; and click on the &lt;code&gt;Enter&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Type &lt;code&gt;shutdown&lt;/code&gt; and &lt;code&gt;testing&lt;/code&gt; as shown below and click on the &lt;code&gt;Enter&lt;/code&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw4sn7mqnw7s4d954t9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw4sn7mqnw7s4d954t9a.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you have not tagged the EC2 instance with the key-value pair &lt;code&gt;Alexa&lt;/code&gt; and &lt;code&gt;testing&lt;/code&gt;, if the instance is not running, or if the instance is not managed by Systems Manager, you will get the following error message:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I couldn't find any running instances tagged with testing. You can run a different command or say cancel to exit.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;If you have tagged the EC2 instance with the key-value pair &lt;code&gt;Alexa&lt;/code&gt; and &lt;code&gt;testing&lt;/code&gt;, and the instance is running and managed by Systems Manager, you will get the following message:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I have sent command shutdown to 1 instance tagged with testing and its current status is Pending. You will receive a Slack notification when the command starts and completes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
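&lt;p&gt;Behind that response, the Master Lambda function targets instances by tag and sends the SSM command with SNS notifications attached. Here is a hedged sketch of how the &lt;code&gt;send_command&lt;/code&gt; arguments might be assembled; the helper is illustrative, but the tag key, topic, and role mirror the resources created earlier:&lt;/p&gt;

```python
def build_send_command_args(document_name, tag_value,
                            sns_topic_arn, ssm_role_arn, parameters=None):
    args = {
        "DocumentName": document_name,
        # Target every instance tagged Alexa=<tag_value>.
        "Targets": [{"Key": "tag:Alexa", "Values": [tag_value]}],
        # The IAM role (from section 5) that may publish to the SNS topic.
        "ServiceRoleArn": ssm_role_arn,
        "NotificationConfig": {
            "NotificationArn": sns_topic_arn,
            "NotificationEvents": ["InProgress", "Success", "Failed"],
            "NotificationType": "Command",
        },
    }
    if parameters:
        args["Parameters"] = parameters
    return args

# boto3.client("ssm").send_command(**build_send_command_args(
#     "StopEC2Instances", "testing", SNS_TOPIC_ARN, SSM_ROLE_ARN))
```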




&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I had not tagged the EC2 instance with the key-value pair &lt;code&gt;Alexa&lt;/code&gt; and &lt;code&gt;testing&lt;/code&gt;, so Alexa told me there was no running instance tagged with &lt;code&gt;testing&lt;/code&gt;. &lt;/li&gt;
&lt;li&gt;Then, I tagged the EC2 instance with the key-value pair &lt;code&gt;Alexa&lt;/code&gt; and &lt;code&gt;testing&lt;/code&gt; and waited a few minutes for Systems Manager to detect the tags. &lt;/li&gt;
&lt;li&gt;Then, I ran the &lt;code&gt;shutdown&lt;/code&gt; command and Alexa confirmed that the command had been sent to the instance. By design, the instance takes one minute to shut down, as defined in our &lt;code&gt;StopEC2Instances&lt;/code&gt; SSM document. &lt;/li&gt;
&lt;li&gt;I also received Slack notifications: the command status was &lt;code&gt;InProgress&lt;/code&gt; and then &lt;code&gt;Success&lt;/code&gt; when the command completed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzycaonrf5iry23s6pt7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzycaonrf5iry23s6pt7l.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now, let's run the &lt;code&gt;patch&lt;/code&gt; command to patch the second instance. Type &lt;code&gt;patch&lt;/code&gt; and &lt;code&gt;patching&lt;/code&gt; and press Enter. Since we spun up the instance recently, there are few or no patches to install, so the patching completes quickly; AWS does a great job of keeping Amazon Linux 2 up to date. We can follow the command status and history in the Systems Manager console, and we should receive Slack notifications when the command starts and completes, provided we have configured the SlackLambda function and the Slack webhook URL correctly.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jl5jiai5prbnifvydo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jl5jiai5prbnifvydo3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refer to the GitHub repo to see the video.&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/OmarCloud20" rel="noopener noreferrer"&gt;
        OmarCloud20
      &lt;/a&gt; / &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents" rel="noopener noreferrer"&gt;
        alexa-runs-ssm-documents
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Alexa to Run Systems Manager Documents&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/OmarCloud20/alexa-runs-ssm-documentsimgs/Alexa_2_4000.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2FOmarCloud20%2Falexa-runs-ssm-documentsimgs%2FAlexa_2_4000.png" alt="Alexa to Run Systems Manager Documents"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Introduction:&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;Alexa is Amazon's cloud-based voice service that powers hundreds of millions of devices. It also enables developers to build &lt;a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit" rel="nofollow noopener noreferrer"&gt;skills&lt;/a&gt;, which are like applications for Alexa. The Alexa skill is a cloud-based solution that provides the logic and functionality to perform certain tasks using voice commands. The skill is hosted on AWS Lambda and is built using &lt;a href="https://developer.amazon.com/en-US/docs/alexa/sdk/alexa-skills-kit-sdks.html" rel="nofollow noopener noreferrer"&gt;Alexa Skills Kit&lt;/a&gt; SDK (ASK) framework. The communication between &lt;a href="https://developer.amazon.com/en-US/docs/alexa/alexa-voice-service/get-started-with-alexa-voice-service.html" rel="nofollow noopener noreferrer"&gt;Alexa service&lt;/a&gt; and the Lambda function hosting the Alexa skill is &lt;a href="https://developer.amazon.com/en-US/docs/alexa/custom-skills/host-a-custom-skill-as-an-aws-lambda-function.html#:~:text=continuously%20run%20servers.-,Alexa%20encrypts,-its%20communications%20with" rel="nofollow noopener noreferrer"&gt;encrypted&lt;/a&gt; and the access permissions to the Lambda function are protected by AWS Identity and Access Management (IAM) policies. Therefore, we can be confident that Alexa skills are secure.&lt;/p&gt;

&lt;p&gt;This step-by-step tutorial walks you through the process of developing an Alexa cloud-based solution. We will build our Alexa skill using Alexa Skills Kit SDK (ASK) for Python that will allow us to run &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-ssm-docs.html" rel="nofollow noopener noreferrer"&gt;AWS Systems&lt;/a&gt;…&lt;/p&gt;
&lt;/div&gt;


&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/OmarCloud20/alexa-runs-ssm-documents" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;&lt;strong&gt;Important Notes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the simulator sits idle for 5 minutes or so, it will time out and you will need to refresh the page and start over.&lt;/li&gt;
&lt;li&gt;To troubleshoot any issues, you can check the CloudWatch logs for the &lt;code&gt;MasterLambda&lt;/code&gt; and &lt;code&gt;AlexaSkill&lt;/code&gt; Lambda functions.&lt;/li&gt;
&lt;li&gt;You can also check the Systems Manager console to see the command status and the command output by going to the &lt;code&gt;Run Command&lt;/code&gt; section and selecting the &lt;code&gt;Command history&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You can enable &lt;code&gt;Your Skills&lt;/code&gt; in development mode on Alexa app on your mobile phone and test the skill on your mobile phone.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations on completing this tutorial! We have learned how to build a complete Alexa skill solution that can run SSM documents on SSM-managed EC2 instances. The journey started with creating an Alexa skill hosted on an AWS Lambda function. Then we created a Master Lambda function that is triggered by the Alexa skill Lambda function. The Master Lambda function calls the Systems Manager API using the Boto3 SDK for Python to run SSM documents on SSM-managed EC2 instances. We also created a Slack Lambda function, subscribed to an SNS topic, that sends neat Slack notifications when the SSM command starts and completes. Finally, we learned how to use the Alexa Developer Console to test the Alexa skill.&lt;/p&gt;

&lt;p&gt;Yes, it's time to celebrate! 🎉🎉🎉 &lt;/p&gt;

&lt;p&gt;While you are celebrating, you can also think about how you can extend or improve this solution. Maybe you can add more SSM documents and more Alexa commands, or maybe you can learn more about Alexa utterances and intents. How about we take this solution to the next level and build an Observability solution? Maybe we can use the Alexa service with Grafana or CloudWatch to monitor metrics of our environments. That's another tutorial for another day 😉&lt;/p&gt;

&lt;p&gt;The possibilities are endless and learning is a lifelong journey. Innovation is the key to success, so keep learning and keep innovating!&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__516388"&gt;
    &lt;a href="/omarcloud20" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F516388%2Fd8b3ac0d-bace-46f6-aea3-010dd492d0ab.jpeg" alt="omarcloud20 image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/omarcloud20"&gt;Omar Omar&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/omarcloud20"&gt;I’m scalable, highly available and reliable engineer. I strongly believe in education and hands-on experience. Learning is a lifelong journey and innovation is the key to success.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.amazon.com/de-DE/alexa/alexa-skills-kit/tutorials" rel="noopener noreferrer"&gt;Alexa Skills Kit - Tutorials&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.amazon.com/en-US/docs/alexa/alexa-skills-kit-sdk-for-python/overview.html" rel="noopener noreferrer"&gt;Alexa Skills Kit SDK for Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/systems-manager/" rel="noopener noreferrer"&gt;AWS Systems Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/iam/" rel="noopener noreferrer"&gt;AWS IAM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;AWS S3&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;





&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/OmarCloud20" rel="noopener noreferrer"&gt;
        OmarCloud20
      &lt;/a&gt; / &lt;a href="https://github.com/OmarCloud20/alexa-runs-ssm-documents" rel="noopener noreferrer"&gt;
        alexa-runs-ssm-documents
      &lt;/a&gt;
    &lt;/h2&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Alexa to Run Systems Manager Documents&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/OmarCloud20/alexa-runs-ssm-documentsimgs/Alexa_2_4000.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2FOmarCloud20%2Falexa-runs-ssm-documentsimgs%2FAlexa_2_4000.png" alt="Alexa to Run Systems Manager Documents"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Introduction:&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Alexa is Amazon's cloud-based voice service that powers hundreds of millions of devices. It also enables developers to build &lt;a href="https://developer.amazon.com/en-US/alexa/alexa-skills-kit" rel="nofollow noopener noreferrer"&gt;skills&lt;/a&gt;, which are like applications for Alexa. The Alexa skill is a cloud-based solution that provides the logic and functionality to perform certain tasks using voice commands. The skill is hosted on AWS Lambda and is built using &lt;a href="https://developer.amazon.com/en-US/docs/alexa/sdk/alexa-skills-kit-sdks.html" rel="nofollow noopener noreferrer"&gt;Alexa Skills Kit&lt;/a&gt; SDK (ASK) framework. The communication between &lt;a href="https://developer.amazon.com/en-US/docs/alexa/alexa-voice-service/get-started-with-alexa-voice-service.html" rel="nofollow noopener noreferrer"&gt;Alexa service&lt;/a&gt; and the Lambda function hosting the Alexa skill is &lt;a href="https://developer.amazon.com/en-US/docs/alexa/custom-skills/host-a-custom-skill-as-an-aws-lambda-function.html#:~:text=continuously%20run%20servers.-,Alexa%20encrypts,-its%20communications%20with" rel="nofollow noopener noreferrer"&gt;encrypted&lt;/a&gt; and the access permissions to the Lambda function are protected by AWS Identity and Access Management (IAM) policies. Therefore, we can be confident that Alexa skills are secure.&lt;/p&gt;
&lt;p&gt;This step-by-step tutorial walks you through the process of developing an Alexa cloud-based solution. We will build our Alexa skill using Alexa Skills Kit SDK (ASK) for Python that will allow us to run &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-ssm-docs.html" rel="nofollow noopener noreferrer"&gt;AWS Systems&lt;/a&gt;…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/OmarCloud20/alexa-runs-ssm-documents" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>aws</category>
      <category>alexa</category>
    </item>
    <item>
      <title>CDK for Terraform (CDKTF) on AWS: How to Configure an S3 Remote Backend and Deploy a Lambda Function URL using Python</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Tue, 27 Sep 2022 13:18:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/cdk-for-terraform-cdktf-on-aws-how-to-configure-an-s3-remote-backend-and-deploy-a-lambda-function-url-using-python-3okk</link>
      <guid>https://dev.to/aws-builders/cdk-for-terraform-cdktf-on-aws-how-to-configure-an-s3-remote-backend-and-deploy-a-lambda-function-url-using-python-3okk</guid>
      <description>&lt;h4&gt;
  
  
  Update (10/12/22):
&lt;/h4&gt;

&lt;p&gt;On the 3rd of Oct, 2022, HashiCorp released &lt;a href="https://github.com/hashicorp/terraform-cdk/issues/2160" rel="noopener noreferrer"&gt;CDKTF version 0.13&lt;/a&gt; introducing breaking changes such as namespaces. Due to this latest major upgrade, this tutorial does not work with CDKTF version 0.13. But it still works with CDKTF version 0.12. I will update the tutorial to work with version 0.13 in the near future.&lt;/p&gt;

&lt;h3&gt;
  
  
  Table of Contents
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Step 1: Required Prerequisites&lt;/li&gt;
&lt;li&gt;Step 2: Initializing First CDKTF Project using Python Template&lt;/li&gt;
&lt;li&gt;
Step 3: Configuring an S3 Remote Backend

&lt;ul&gt;
&lt;li&gt;Option 1: Utilize an existing S3 bucket and DynamoDB to configure the S3 Remote Backend&lt;/li&gt;
&lt;li&gt;Option 2: Create an S3 Bucket and DynamoDB Table for the S3 Remote Backend using CDKTF&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Step 4: Learn How to Use Construct Hub and AWS Provider Submodules

&lt;ul&gt;
&lt;li&gt;Scenario 1: S3 Bucket&lt;/li&gt;
&lt;li&gt;Scenario 2: ECS Cluster&lt;/li&gt;
&lt;li&gt;CDKTF Commands&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Step 5: Deploying a Lambda Function URL using CDKTF&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;After two years of development, on the 1st of August 2022, HashiCorp announced the general availability of &lt;a href="https://www.hashicorp.com/blog/cdk-for-terraform-now-generally-available" rel="noopener noreferrer"&gt;CDK for Terraform&lt;/a&gt;. As the CDKTF framework finally saw the light of day, the news triggered a lot of excitement among the community. The CDKTF framework is a new open-source project that enables developers to use their favorite programming languages to define and provision cloud infrastructure resources on AWS. Under the hood, it converts the programming language definitions into Terraform configuration files and uses Terraform to provision the resources. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Learning is no longer a stage of life; it’s a lifelong journey" - Andy Bird&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's often said that the best way to learn something new is to do it, and the first milestone in learning is often the hardest. However, with the right guidance, you can overcome the challenges and achieve your goals. My objective in this article is to guide you through the process of installing, configuring, and deploying your first AWS resource using CDKTF on AWS. In addition, I will also show you how to use the documentation on Construct Hub to deploy your own AWS resources.&lt;/p&gt;

&lt;p&gt;The tutorial should be easy for beginners and intermediate users to follow and understand. It is devoted to developers with adequate AWS, Terraform, and Python knowledge who are unsure of how and where to begin their CDKTF learning. &lt;/p&gt;

&lt;p&gt;The main topics covered in this tutorial are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Proper installation and configuration of all required prerequisites.&lt;/li&gt;
&lt;li&gt;Installing and configuring CDKTF.&lt;/li&gt;
&lt;li&gt;Initializing first CDKTF project using Python template and local backend.&lt;/li&gt;
&lt;li&gt;Deploying an S3 bucket, DynamoDB table and configuring an S3 remote backend.&lt;/li&gt;
&lt;li&gt;Learning how to read/use AWS Provider Submodules and Construct Hub documentation.&lt;/li&gt;
&lt;li&gt;Provisioning an S3 Bucket using CDKTF.&lt;/li&gt;
&lt;li&gt;Provisioning an IAM role and Lambda Function using CDKTF.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This tutorial is also available on my GitHub - &lt;a href="https://github.com/OmarCloud20/CDKTF-Tutorial" rel="noopener noreferrer"&gt;CDKTF-Tutorial&lt;/a&gt; repo. By the end of this tutorial, you should be comfortable deploying AWS resources using the CDKTF framework. I will pass you the learning baton and you can take it from there.&lt;br&gt;
Enough said, let's get started.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Required Prerequisites
&lt;/h2&gt;

&lt;p&gt;To complete this tutorial successfully, and to set you up for success, make sure you have the following prerequisites installed and configured properly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS CLI version 2: &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Follow AWS &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-prereqs.html" rel="noopener noreferrer"&gt;Prerequisites to use the AWS CLI version 2&lt;/a&gt; documentation for all required prerequisites.&lt;/li&gt;
&lt;li&gt;Follow &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;Installing or updating the latest version of the AWS CLI&lt;/a&gt; document to install the AWS CLI v2 as per your local device architecture. In my case, it's an Ubuntu 20.04 LTS OS running on a Linux x86 (64-bit) architecture. &lt;/li&gt;
&lt;li&gt;Lastly, follow the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#:~:text=as%20programmatically%20useful.-,Profiles,-A%20collection%20of" rel="noopener noreferrer"&gt;Configuration basics - Profiles&lt;/a&gt; document to configure the AWS CLI v2 and create a &lt;code&gt;named profile&lt;/code&gt; (name it CDKTF as shown below). We will use the named profile to configure AWS Provider credentials later on.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws configure --profile CDKTF


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the profile name does not have to be CDKTF; you can name it whatever you like. However, I will be using CDKTF in this tutorial.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; (version v1.0+ as per Terraform's recommendation).&lt;/li&gt;
&lt;li&gt;Node.js and npm v16+:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Follow &lt;a href="https://github.com/nodesource/distributions/blob/master/README.md#:~:text=install%20%2Dy%20nodejs-,Node.js%20v16.x%3A,-%23%20Using%20Ubuntu" rel="noopener noreferrer"&gt;NodeSource Node.js Binary Distributions&lt;/a&gt; to install Node.js v16.x for Ubuntu (the installation includes npm). For other operating systems and architectures, refer to &lt;a href="https://nodejs.org/en/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt; official page. Once you have Node.js installed, make sure it's version 16.x by running the below command:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

node -v


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Python 3.7+: Ubuntu 20.04 LTS distribution comes with &lt;a href="https://packages.ubuntu.com/search?keywords=python3&amp;amp;searchon=names&amp;amp;suite=focal&amp;amp;section=all" rel="noopener noreferrer"&gt;Python 3.8.2&lt;/a&gt; pre-installed by default. Run the below command to confirm:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

python3 --version


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Additionally, we will need to make sure we have &lt;code&gt;pip&lt;/code&gt; installed and available. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

pip3 --version


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: if &lt;code&gt;pip&lt;/code&gt; is unavailable, run &lt;code&gt;sudo apt install python3-pip&lt;/code&gt; to install it as per &lt;a href="https://packaging.python.org/en/latest/guides/installing-using-linux-tools/#debian-ubuntu" rel="noopener noreferrer"&gt;Python&lt;/a&gt; documentation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://pypi.org/project/pipenv/" rel="noopener noreferrer"&gt;pipenv&lt;/a&gt; v2021.5.29+: as of Sept 22, 2022, the latest version of &lt;code&gt;pipenv&lt;/code&gt; is 2022.9.20. We will use &lt;code&gt;pip&lt;/code&gt; to install &lt;code&gt;pipenv&lt;/code&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

pip3 install pipenv


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: if you receive a &lt;code&gt;WARNING&lt;/code&gt; stating &lt;code&gt;pipenv&lt;/code&gt; is not installed on &lt;code&gt;PATH&lt;/code&gt;, as shown in the image below, run the following command to add it to the path:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export PATH="the-path-mentioned-in-the-warning:$PATH"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Actual example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export PATH="/home/CDKTF/.local/bin:$PATH"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdhq3eeg4w9nz1z9px6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdhq3eeg4w9nz1z9px6x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we proceed, confirm that &lt;code&gt;pipenv&lt;/code&gt; is installed and available by running the below command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

pipenv --version


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: it's tempting to install &lt;code&gt;pipenv&lt;/code&gt; by using the package manager &lt;code&gt;sudo apt install pipenv&lt;/code&gt;, but be aware that the system repository version of &lt;code&gt;pipenv&lt;/code&gt; is outdated and will not work with the CDKTF framework.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;CDKTF: now we are ready to install the &lt;code&gt;CDKTF&lt;/code&gt; CLI using &lt;code&gt;npm&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

npm install --global cdktf-cli@0.12.3


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you receive a permission denied error, use &lt;code&gt;sudo&lt;/code&gt; as shown below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo npm install --global cdktf-cli@0.12.3


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's confirm the version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cdktf --version


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Make sure you have all required prerequisite versions installed and configured successfully, as shown in the image below:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fius8597hqf7yphxrqm06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fius8597hqf7yphxrqm06.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reaching this point means you have all required prerequisites installed and configured properly. We are ready to move on to the next step. This is a milestone, so take a moment to celebrate it. You deserve it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Initializing the First CDKTF Project using Python Template
&lt;/h2&gt;

&lt;p&gt;In this section, we will use CDKTF CLI commands to create our first AWS CDKTF Python project. The &lt;code&gt;cdktf init&lt;/code&gt; command creates a new project in the current directory, including a &lt;code&gt;cdktf.json&lt;/code&gt; file that holds the project configuration, such as the programming language and the Terraform providers to use. The &lt;code&gt;cdktf&lt;/code&gt; CLI reads this file to determine the project configuration. Follow the steps below to initialize the first CDKTF project using the Python template:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new directory for the project and &lt;code&gt;cd&lt;/code&gt; into the directory. I will create a directory on my Desktop and name it &lt;code&gt;first_project&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mkdir first_project &amp;amp;&amp;amp; cd first_project


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Run the following command to initialize our first CDKTF project. We will be prompted to enter the following information:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A. Name of the project: leave it as &lt;code&gt;first_project&lt;/code&gt; and hit &lt;code&gt;Enter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;B. Project description: leave it as &lt;code&gt;My first CDKTF project&lt;/code&gt; and hit &lt;code&gt;Enter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;C. Whether to send crash reports to the CDKTF team: I would highly recommend you say &lt;code&gt;yes&lt;/code&gt;, as this will help improve the product and fix bugs. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cdktf init --template="python" --local


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note 1: we are using the &lt;code&gt;--local&lt;/code&gt; flag to store our Terraform state file locally. We will reconfigure the backend to S3 Remote backend in the next section.&lt;/p&gt;

&lt;p&gt;Note 2: if you receive the error &lt;code&gt;[ModuleNotFoundError: No module named 'virtualenv.seed.via_app_data']&lt;/code&gt;, you may need to remove &lt;code&gt;virtualenv&lt;/code&gt; by running &lt;code&gt;sudo apt remove python3-virtualenv&lt;/code&gt;. We should still have &lt;code&gt;virtualenv&lt;/code&gt; as part of the pip packages; run &lt;code&gt;pip3 show virtualenv&lt;/code&gt; to confirm. If you don't see &lt;code&gt;virtualenv&lt;/code&gt; in the list, run &lt;code&gt;pip3 install virtualenv&lt;/code&gt; to install it.&lt;/p&gt;
&lt;/blockquote&gt;
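&lt;p&gt;For orientation, a freshly initialized Python project's &lt;code&gt;cdktf.json&lt;/code&gt; looks roughly like the sketch below. The &lt;code&gt;projectId&lt;/code&gt; is generated per project, and the exact fields may vary by CDKTF version:&lt;/p&gt;

```json
{
  "language": "python",
  "app": "pipenv run python main.py",
  "projectId": "your-generated-project-id",
  "terraformProviders": [],
  "terraformModules": [],
  "context": {}
}
```

&lt;p&gt;The &lt;code&gt;app&lt;/code&gt; entry is the command CDKTF runs to synthesize the stack.&lt;/p&gt;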

&lt;ol&gt;
&lt;li&gt;Activate the project's virtual environment (optional but recommended):&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

pipenv shell


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the purpose of creating a virtual environment is to isolate the project and all its packages and dependencies from the host or local device. It's a self-contained environment within the host that prevents polluting the system. It's highly recommended to activate it to keep your host healthy and clean. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Install the AWS provider. There are multiple ways to install the &lt;a href="https://constructs.dev/packages/@cdktf/provider-aws/v/9.0.36?lang=python" rel="noopener noreferrer"&gt;AWS Provider&lt;/a&gt;; we will use &lt;code&gt;pipenv&lt;/code&gt;. Run the below command to install it:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

pipenv install cdktf-cdktf-provider-aws


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: as of the 26th of Sept, 2022, if you decide to install the AWS Provider using &lt;code&gt;cdktf provider add "aws@~&amp;gt;4.0"&lt;/code&gt;, the installation will fail due to &lt;a href="https://github.com/hashicorp/cdktf-provider-aws/issues/749" rel="noopener noreferrer"&gt;no matching distribution found for version v9.0.36&lt;/a&gt;. There are other methods of importing a provider, but this tutorial won't discuss them, to keep things simple. &lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;a href="https://asciinema.org/a/523196" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fasciinema.org%2Fa%2F523196.svg" alt="asciicast"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations, you have successfully initialized your first CDKTF Python project. This is another milestone to celebrate. Get a cup of coffee ☕️ and let's move on to the next section.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Configuring an S3 Remote Backend
&lt;/h2&gt;

&lt;p&gt;Terraform stores all managed infrastructure and configuration by default in a file named &lt;code&gt;terraform.tfstate&lt;/code&gt;. If a local backend is configured for the project, the state file is stored in the current working directory. However, when collaborating with other team members, it is important to configure a remote backend. There are several remote backend options, such as Consul, etcd, GCS, HTTP, and S3. For a full list of remote backends, refer to the &lt;a href="https://www.terraform.io/language/settings/backends/configuration#available-backends" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; documentation. &lt;/p&gt;

&lt;p&gt;For this tutorial, we will configure an &lt;a href="https://www.terraform.io/language/settings/backends/s3" rel="noopener noreferrer"&gt;S3 Remote Backend&lt;/a&gt; which includes an S3 bucket for storing the state file and a DynamoDB table for state locking and consistency checking. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select one of the following options to configure the S3 Remote Backend:&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1: Utilize an existing S3 bucket and DynamoDB to configure the S3 Remote Backend.
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;From Step 2, while we still have the virtual environment activated, let's open the project directory using your IDE of choice. In my case, I'm using &lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;Visual Studio Code&lt;/a&gt;, which I downloaded from the &lt;code&gt;Ubuntu Software Store&lt;/code&gt;. Run &lt;code&gt;code .&lt;/code&gt; on the terminal to open the project directory via VS Code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;code&gt;main.py&lt;/code&gt; file and add the AWS provider construct to the imports section:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from cdktf_cdktf_provider_aws import AwsProvider


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the final &lt;code&gt;main.py&lt;/code&gt; code will be provided at the end of this section.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Configure the AWS provider by adding the following code to &lt;code&gt;MyStack&lt;/code&gt; class:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

AwsProvider(self, "AWS", region="us-east-1", profile="CDKTF")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: we are using the &lt;code&gt;profile&lt;/code&gt; attribute to specify the AWS profile to use. This is the AWS CLI profile we have previously discussed and created in the Required Prerequisites section. If you don't have a profile created, you can remove the &lt;code&gt;profile&lt;/code&gt; attribute and the AWS provider will use the default profile. Or, you can use a different authentication method as per the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;AWS Provider&lt;/a&gt; documentation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Add the &lt;code&gt;S3Backend&lt;/code&gt; class to the other imported classes. The S3Backend class configures the S3 bucket and DynamoDB table as the S3 remote backend.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from cdktf import App, TerraformStack, S3Backend


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Add the S3 Backend construct to the &lt;code&gt;main.py&lt;/code&gt; file. We will add the following S3 Backend configurations to the &lt;code&gt;MyStack&lt;/code&gt; class:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;bucket&lt;/code&gt; - the name of the S3 bucket to store the state file. The bucket must exist and be in the same region as the stack; if it doesn't, the deployment will fail.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;key&lt;/code&gt; - the path of the state file within the bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;encrypt&lt;/code&gt; - whether to encrypt the state file at rest using server-side encryption. It defaults to &lt;code&gt;false&lt;/code&gt;, so we enable it explicitly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;region&lt;/code&gt; - the AWS region of the S3 bucket and DynamoDB table.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;dynamodb_table&lt;/code&gt; - the name of the DynamoDB table to use for state locking and consistency checking. The table must exist and be in the same region as the stack; if it doesn't, the deployment will fail.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;profile&lt;/code&gt; - the AWS CLI profile to use. The default value is &lt;code&gt;default&lt;/code&gt;, but we have already configured the AWS provider to use the &lt;code&gt;CDKTF&lt;/code&gt; profile. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is what the S3 Backend construct will look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

        S3Backend(self,
        bucket="cdktf-remote-backend",
        key="first_project/terraform.tfstate",
        encrypt=True,
        region="us-east-1",
        dynamodb_table="cdktf-remote-backend-lock",
        profile="CDKTF",
        )


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The final &lt;code&gt;main.py&lt;/code&gt; file should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from constructs import Construct
from cdktf import App, TerraformStack, S3Backend
from cdktf_cdktf_provider_aws import AwsProvider

class MyStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str):
        super().__init__(scope, ns)

        AwsProvider(self, "AWS", region="us-east-1", profile="CDKTF")

        S3Backend(self,
        bucket="cdktf-remote-backend",
        key="first_project/terraform.tfstate",
        encrypt=True,
        region="us-east-1",
        dynamodb_table="cdktf-remote-backend-lock",
        profile="CDKTF",
        )

        # define resources here


app = App()
MyStack(app, "first_project")

app.synth()


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d3hl7w3mi9monq4zfpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d3hl7w3mi9monq4zfpp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;cdktf synth&lt;/code&gt; to generate the Terraform configuration files. The Terraform configuration files will be generated in the &lt;code&gt;cdktf.out&lt;/code&gt; directory. &lt;del&gt;The &lt;code&gt;synth&lt;/code&gt; command will fail if the S3 bucket and DynamoDB table don't exist.&lt;/del&gt; &lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cdktf synth


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You have successfully configured the S3 Remote Backend. Let's move on to the next section.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To test the configuration of the S3 remote backend, follow the steps below to create an S3 bucket resource and deploy the stack. &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Import the S3 bucket library to the &lt;code&gt;main.py&lt;/code&gt; file:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from cdktf_cdktf_provider_aws import AwsProvider, s3


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Add the S3 bucket resource to the &lt;code&gt;MyStack&lt;/code&gt; class:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

        my_bucket = s3.S3Bucket(self, "my_bucket",
        bucket="Name-of-the-bucket",
        )


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Replace &lt;code&gt;Name-of-the-bucket&lt;/code&gt; with the name of the bucket you want to create. Note, S3 bucket names must be unique across all existing bucket names in Amazon S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;cdktf deploy&lt;/code&gt; to deploy the stack and create the S3 bucket. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cdktf deploy


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
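&lt;p&gt;Because bucket names must be globally unique and follow strict naming rules, it can help to sanity-check a name locally before deploying. The sketch below is illustrative and covers only the basic rules (length and allowed characters), not every AWS restriction:&lt;/p&gt;

```python
import re

def is_valid_bucket_name(name):
    """Rough local check of the basic S3 bucket naming rules:
    3-63 characters, lowercase letters, digits, dots and hyphens,
    starting and ending with a letter or digit."""
    if len(name) not in range(3, 64):
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None

print(is_valid_bucket_name("my-unique-bucket-2024"))  # True
print(is_valid_bucket_name("Name-of-the-bucket"))     # False (uppercase letters)
```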

&lt;p&gt;&lt;a href="https://asciinema.org/a/523571" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fasciinema.org%2Fa%2F523571.svg" alt="asciicast"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: if you get &lt;code&gt;Incomplete lock file information for providers&lt;/code&gt; warning, you can either ignore it or you can run &lt;code&gt;terraform providers lock -platform=linux_amd64&lt;/code&gt; from the project root directory to validate the lock file. For more information, refer to &lt;a href="https://www.terraform.io/docs/language/providers/requirements.html#provider-locks" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; documentation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Congratulations, you have successfully configured an S3 remote backend and created an S3 bucket using CDKTF. It's time to take a break and stretch your legs. &lt;/p&gt;




&lt;h3&gt;
  
  
  Option 2: Create an S3 Bucket and DynamoDB Table for the S3 Remote Backend using CDKTF
&lt;/h3&gt;

&lt;p&gt;For this option, we will take a different approach. We will create the S3 bucket and DynamoDB table using CDKTF and then configure the S3 remote backend to use the newly created resources. Follow the below steps to create the S3 bucket and DynamoDB table:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the project directory using your preferred IDE. If you are using VS Code, &lt;code&gt;cd&lt;/code&gt; into the project folder and run &lt;code&gt;code .&lt;/code&gt; on the terminal to open the project directory using VS Code. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;code&gt;main.py&lt;/code&gt; file and replace the default code with the following. This code creates an S3 bucket and DynamoDB table for the S3 remote backend. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from constructs import Construct
from cdktf import App, TerraformStack, S3Backend
from cdktf_cdktf_provider_aws import AwsProvider, s3, dynamodb

class MyStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str):
        super().__init__(scope, ns)

        AwsProvider(self, "AWS", region="us-east-1", profile="CDKTF")

        # define resources here
        s3_backend_bucket = s3.S3Bucket(self,
        "s3_backend_bucket",
        bucket="cdktf-remote-backend-2",
        )
        dynamodb_lock_table = dynamodb.DynamodbTable(self, 
        "dynamodb_lock_table",
        name="cdktf-remote-backend-lock-2",
        billing_mode="PAY_PER_REQUEST",
        attribute=[
            {
                "name": "LockID",
                "type": "S"
            }
        ],
        hash_key="LockID",
        )



app = App()
MyStack(app, "first_project")

app.synth()


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note, if the S3 bucket and DynamoDB table already exist, an error will be thrown. The S3 bucket names must be globally unique across all existing bucket names in Amazon S3 and DynamoDB table names must be unique within an AWS account. If you get an error, you can change the bucket and table names to unique names.&lt;/p&gt;
&lt;/blockquote&gt;
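&lt;p&gt;One simple way to avoid such name collisions is to append a short random suffix to the bucket and table names. The helper below is not part of the tutorial code, just an illustrative sketch:&lt;/p&gt;

```python
import uuid

def unique_name(prefix):
    # Append 8 random hex characters so names are effectively unique.
    return "{}-{}".format(prefix, uuid.uuid4().hex[:8])

bucket_name = unique_name("cdktf-remote-backend")
table_name = unique_name("cdktf-remote-backend-lock")
print(bucket_name)  # e.g. cdktf-remote-backend-3f9a1c2e
```

&lt;p&gt;Remember that the same names must then be used in the &lt;code&gt;S3Backend&lt;/code&gt; configuration, so generate them once and reuse them.&lt;/p&gt;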

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4zaqlkmty78sxxlvcvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4zaqlkmty78sxxlvcvg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;cdktf deploy&lt;/code&gt; to deploy the stack and create the S3 bucket and DynamoDB table.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cdktf deploy


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://asciinema.org/a/523572" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fasciinema.org%2Fa%2F523572.svg" alt="asciicast"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: if you get &lt;code&gt;Incomplete lock file information for providers&lt;/code&gt; warning, you can either ignore it or you can run &lt;code&gt;terraform providers lock -platform=linux_amd64&lt;/code&gt; from the project root directory to validate the lock file. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Now, we will configure the S3 remote backend to use the newly created S3 bucket and DynamoDB table. Open the &lt;code&gt;main.py&lt;/code&gt; file and replace the code with the following. This code configures the S3 remote backend to use the newly created S3 bucket and DynamoDB table. Make sure to replace the &lt;code&gt;bucket&lt;/code&gt; and &lt;code&gt;dynamodb_table&lt;/code&gt; values with the names of the S3 bucket and DynamoDB table you created in the previous step.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from constructs import Construct
from cdktf import App, TerraformStack, S3Backend
from cdktf_cdktf_provider_aws import AwsProvider, s3, dynamodb

class MyStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str):
        super().__init__(scope, ns)

        AwsProvider(self, "AWS", region="us-east-1", profile="CDKTF")

        #S3 Remote Backend
        S3Backend(self,
        bucket="cdktf-remote-backend-2",
        key="first_project/terraform.tfstate",
        encrypt=True,
        region="us-east-1",
        dynamodb_table="cdktf-remote-backend-lock-2",
        profile="CDKTF",
        )

        # Resources
        s3_backend_bucket = s3.S3Bucket(self, "s3_backend_bucket",
        bucket="cdktf-remote-backend-2",
        )

        dynamodb_lock_table = dynamodb.DynamodbTable(self, "dynamodb_lock_table",
        name="cdktf-remote-backend-lock-2",
        billing_mode="PAY_PER_REQUEST",
        attribute=[
            {
                "name": "LockID",
                "type": "S"
            }
        ],
        hash_key="LockID",
        )



app = App()
MyStack(app, "first_project")

app.synth()


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;cdktf synth&lt;/code&gt; to generate the Terraform configuration files. The Terraform configuration files will be generated in the &lt;code&gt;cdktf.out&lt;/code&gt; directory.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cdktf synth


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;To migrate the local state backend to an S3 remote backend, navigate to the &lt;code&gt;cdktf.out/stacks/first_project&lt;/code&gt; directory and run the following command to start the migration process. The &lt;code&gt;first_project&lt;/code&gt; is the name of the project. If you have named your project differently, navigate to the &lt;code&gt;cdktf.out/stacks/&amp;lt;project_name&amp;gt;&lt;/code&gt; directory and run the command.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note: before running this command, read the Important Notes below.&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd cdktf.out/stacks/first_project


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform init --migrate-state


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Important Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you run &lt;code&gt;terraform init --migrate-state&lt;/code&gt;, Terraform prompts you to answer the following question:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Do you want to copy existing state to the new backend?&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;A. If you enter &lt;code&gt;yes&lt;/code&gt; to migrate the state file to the S3 backend, CDKTF will manage the S3 remote backend (S3 bucket and DynamoDB table) for you. Therefore, if you delete the stack, the S3 bucket and DynamoDB table will be vulnerable to deletion. Note, a non-empty S3 bucket cannot be deleted unless we add &lt;code&gt;force_destroy=True&lt;/code&gt; to the S3 bucket configuration. This option is not recommended if you want to keep the S3 bucket and DynamoDB table, especially if you are using them as a remote backend for other Terraform projects. But if you are just experimenting with CDKTF, this option is fine.&lt;/p&gt;

&lt;p&gt;B. If you enter &lt;code&gt;no&lt;/code&gt;, CDKTF will not manage the S3 bucket and DynamoDB table. If you delete the stack, the S3 bucket and DynamoDB table will not be deleted, and you will have to delete them manually. You will also need to remove the S3 bucket and DynamoDB table constructs from the &lt;code&gt;main.py&lt;/code&gt; file. &lt;/p&gt;

&lt;p&gt;To read more about initializing remote backend manually, refer to the &lt;a href="https://www.terraform.io/cdktf/concepts/remote-backends#:~:text=All%20cdktf%20operations%20perform%20an%20automatic%20terraform%20init%2C%20but%20you%20can%20also%20initialize%20manually" rel="noopener noreferrer"&gt;Terraform documentation&lt;/a&gt;.&lt;/p&gt;
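&lt;p&gt;If you choose option A and want &lt;code&gt;cdktf destroy&lt;/code&gt; to be able to remove the state bucket even while it still contains objects, the bucket resource can be extended with &lt;code&gt;force_destroy&lt;/code&gt;. This is a hedged variant of the tutorial's bucket construct, not something to enable lightly:&lt;/p&gt;

```python
# Variant of the tutorial's S3 bucket resource. With force_destroy=True,
# Terraform may delete the bucket and all objects in it on destroy.
s3_backend_bucket = s3.S3Bucket(self,
    "s3_backend_bucket",
    bucket="cdktf-remote-backend-2",
    force_destroy=True,
)
```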

&lt;p&gt;The below terminal recording demonstrates the steps above. In the recording, I show the error message you may get if you run &lt;code&gt;cdktf deploy&lt;/code&gt; before reconfiguring from the local backend to the S3 remote backend. I entered &lt;code&gt;yes&lt;/code&gt; to migrate the state file to the S3 remote backend and let CDKTF manage the S3 bucket and DynamoDB table. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://asciinema.org/a/523574" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fasciinema.org%2Fa%2F523574.svg" alt="asciicast"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;cdktf diff&lt;/code&gt; from the project root directory to compare the current state of the stack with the desired state of the stack. The output should be empty, which means there are no changes to be made and the state file is up to date.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cdktf diff


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: you have to be in the project root directory to run &lt;code&gt;cdktf diff&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Great! You have successfully migrated the local state backend to an S3 remote backend. Way to go, you have achieved another milestone! 🎉🎉🎉&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Learn How to Use Construct Hub and AWS Provider Submodules
&lt;/h2&gt;

&lt;p&gt;Prior to digging into the AWS provider, let's first understand the most commonly used terms in the CDKTF documentation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A submodule is a collection of related resources. For example, the &lt;code&gt;s3.S3Bucket&lt;/code&gt; construct, which creates an S3 bucket, is part of the &lt;code&gt;s3&lt;/code&gt; submodule. The &lt;code&gt;s3&lt;/code&gt; submodule also contains constructs such as &lt;code&gt;s3.S3BucketPolicy&lt;/code&gt;, &lt;code&gt;s3.S3BucketAcl&lt;/code&gt;, and &lt;code&gt;s3.S3BucketObject&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://developer.hashicorp.com/terraform/cdktf/concepts/constructs" rel="noopener noreferrer"&gt;Construct&lt;/a&gt; is another important term to understand. A construct is a class that represents a Terraform resource, data source, or provider. The &lt;code&gt;s3&lt;/code&gt; submodule, for example, contains the classes and structs that represent S3 resources. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A stack is a collection of constructs. The &lt;code&gt;MyStack&lt;/code&gt; class in the &lt;code&gt;main.py&lt;/code&gt; file is a stack; it contains constructs that represent Terraform resources, data sources, and providers.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  Scenario 1: S3 Bucket
&lt;/h3&gt;

&lt;p&gt;Let's say we would like to create an S3 bucket but we don't know which construct to use. Let's head to the Python &lt;a href="https://constructs.dev/packages/@cdktf/provider-aws/v/9.0.33?lang=python" rel="noopener noreferrer"&gt;Construct Hub&lt;/a&gt; for the AWS provider and follow the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the left-hand side, under &lt;strong&gt;Documentation&lt;/strong&gt;, click on &lt;strong&gt;Choose Submodule&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In the search box, type in &lt;strong&gt;s3&lt;/strong&gt; and then click on the result, which is &lt;strong&gt;s3&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Submodule:s3&lt;/strong&gt;, you will see a list of &lt;strong&gt;Constructs&lt;/strong&gt; and &lt;strong&gt;Structs&lt;/strong&gt;. Click on &lt;strong&gt;S3Bucket&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;To create an S3 bucket, you will import the &lt;strong&gt;s3&lt;/strong&gt; submodule as shown under &lt;code&gt;Initializers&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The construct to use is &lt;code&gt;s3.S3Bucket&lt;/code&gt;. &lt;/li&gt;
&lt;li&gt;We need to find the required configurations for the &lt;code&gt;s3.S3Bucket&lt;/code&gt; construct. Scan the page and look for configurations marked &lt;code&gt;Required&lt;/code&gt;. In this case, the S3 bucket has no required configurations, not even a name; if you leave the name argument empty, the bucket is created with a random name. We can specify a name and other configurations, but none are required. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We have to distinguish between required and optional configurations: required configurations must be specified when creating a resource, while optional configurations may be omitted.&lt;/p&gt;

&lt;p&gt;This is the code snippet for creating an S3 bucket with minimal configurations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

my_bucket = s3.S3Bucket(self, "s3_bucket")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: if you leave the name argument empty, the S3 bucket will be created with a random name.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Scenario 2: ECS Cluster
&lt;/h3&gt;

&lt;p&gt;This time let's say we would like to create an ECS cluster. Let's head to the Python &lt;a href="https://constructs.dev/packages/@cdktf/provider-aws/v/9.0.33?lang=python" rel="noopener noreferrer"&gt;Construct Hub&lt;/a&gt; for the AWS provider and follow the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the left-hand side, under &lt;strong&gt;Documentation&lt;/strong&gt;, click on &lt;strong&gt;Choose Submodule&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In the search box, type in &lt;strong&gt;ecs&lt;/strong&gt; and then click on the result, which is &lt;strong&gt;ecs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Submodule:ecs&lt;/strong&gt;, you will see a list of &lt;strong&gt;Constructs&lt;/strong&gt; and &lt;strong&gt;Structs&lt;/strong&gt;. Click on &lt;strong&gt;EcsCluster&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;To create an ECS cluster, you will import the &lt;strong&gt;ecs&lt;/strong&gt; submodule as shown under &lt;code&gt;Initializers&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The construct to use is &lt;code&gt;ecs.EcsCluster&lt;/code&gt; as shown below.&lt;/li&gt;
&lt;li&gt;We need to find the required configurations for the &lt;code&gt;ecs.EcsCluster&lt;/code&gt; construct. The only required configuration is the &lt;code&gt;name&lt;/code&gt; of the cluster, but we can also specify optional configurations such as &lt;code&gt;capacity_providers&lt;/code&gt;, &lt;code&gt;default_capacity_provider_strategy&lt;/code&gt;, &lt;code&gt;configuration&lt;/code&gt;, etc. &lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

my_ecs_cluster = ecs.EcsCluster(self, "my_ecs_cluster",
name = "My_Cluster"
)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5p2qcrg2nvflu4ci3g1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5p2qcrg2nvflu4ci3g1n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  CDKTF Commands:
&lt;/h3&gt;

&lt;p&gt;There are several &lt;a href="https://developer.hashicorp.com/terraform/cdktf/cli-reference/commands" rel="noopener noreferrer"&gt;CDKTF commands&lt;/a&gt; that we need to be familiar with. The below table shows the commands, their descriptions, and their aliases.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Commands&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Aliases&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;cdktf init&lt;/td&gt;
&lt;td&gt;Create a new cdktf project from a template.&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf get&lt;/td&gt;
&lt;td&gt;Generate CDK Constructs for Terraform providers and modules.&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf convert&lt;/td&gt;
&lt;td&gt;Converts a single file of HCL configuration to CDK for Terraform. Takes the file to be converted on stdin.&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf deploy&lt;/td&gt;
&lt;td&gt;Deploy the given stacks&lt;/td&gt;
&lt;td&gt;[aliases: apply]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf destroy&lt;/td&gt;
&lt;td&gt;Destroy the given stacks&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf diff&lt;/td&gt;
&lt;td&gt;Perform a diff (terraform plan) for the given stack&lt;/td&gt;
&lt;td&gt;[aliases: plan]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf list&lt;/td&gt;
&lt;td&gt;List stacks in app.&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf login&lt;/td&gt;
&lt;td&gt;Retrieves an API token to connect to Terraform Cloud or Terraform Enterprise.&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf synth&lt;/td&gt;
&lt;td&gt;Synthesizes Terraform code for the given app in a directory.&lt;/td&gt;
&lt;td&gt;[aliases: synthesize]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf watch&lt;/td&gt;
&lt;td&gt;[experimental] Watch for file changes and automatically trigger a deploy&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf output&lt;/td&gt;
&lt;td&gt;Prints the output of stacks&lt;/td&gt;
&lt;td&gt;[aliases: outputs]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf debug&lt;/td&gt;
&lt;td&gt;Get debug information about the current project and environment&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf provider&lt;/td&gt;
&lt;td&gt;A set of subcommands that facilitates provider management&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cdktf completion&lt;/td&gt;
&lt;td&gt;generate completion script&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To find out more about the &lt;code&gt;cdktf&lt;/code&gt; commands, run &lt;code&gt;cdktf [command] --help&lt;/code&gt; and replace &lt;code&gt;[command]&lt;/code&gt; with the command you want to learn more about. &lt;/p&gt;

&lt;p&gt;For example, to learn more about the &lt;code&gt;cdktf deploy&lt;/code&gt; command, run &lt;code&gt;cdktf deploy --help&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: Deploying a Lambda Function URL using CDKTF
&lt;/h2&gt;

&lt;p&gt;CDKTF is a great tool to provision AWS resources. We have already created an S3 bucket and DynamoDB table in the previous section. In this section, I will show you how to create a Lambda function with a function URL enabled using CDKTF. The Lambda function will host a simple static web page, and we'll expose the function URL as an output. The process requires creating an IAM role and attaching a policy to the role. I will also introduce you to several CDKTF concepts. &lt;/p&gt;

&lt;p&gt;In this section, I will cover the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to create an IAM role for Lambda function&lt;/li&gt;
&lt;li&gt;How to attach a policy to the IAM role&lt;/li&gt;
&lt;li&gt;How to create a Lambda function&lt;/li&gt;
&lt;li&gt;How to enable function url for the Lambda function&lt;/li&gt;
&lt;li&gt;How to package a Lambda function from a local directory and python file&lt;/li&gt;
&lt;li&gt;How to create an output for the Lambda function url&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Buckle up, we are going to learn a lot in this section! 🚀🚀🚀&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Follow the steps below to deploy a Lambda function URL using CDKTF:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Firstly, let's keep our project organized and create a new directory called &lt;code&gt;lambda&lt;/code&gt; in the root directory of the project. This is where we will store our Lambda function code. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a new file called &lt;code&gt;lambda_function.py&lt;/code&gt; in the &lt;code&gt;lambda&lt;/code&gt; directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the &lt;code&gt;lambda_function.py&lt;/code&gt; code from my &lt;a href="https://raw.githubusercontent.com/OmarCloud20/CDKTF-Tutorial/main/lambda/lambda_function.py" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; and paste it into the &lt;code&gt;lambda_function.py&lt;/code&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I will go over the &lt;code&gt;main.py&lt;/code&gt; code, and the final &lt;code&gt;main.py&lt;/code&gt; file will be provided at the end of the section. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;code&gt;main.py&lt;/code&gt; file, import the &lt;code&gt;TerraformOutput&lt;/code&gt;, &lt;code&gt;TerraformAsset&lt;/code&gt; and &lt;code&gt;AssetType&lt;/code&gt; classes from the &lt;code&gt;cdktf&lt;/code&gt; module. The &lt;a href="https://developer.hashicorp.com/terraform/cdktf/concepts/assets" rel="noopener noreferrer"&gt;Asset&lt;/a&gt; construct was introduced in CDK for Terraform v0.4+ and is used to package our local directory and python file into a zip file. The &lt;code&gt;TerraformOutput&lt;/code&gt; construct is used to create an output for the Lambda function url. The &lt;code&gt;AssetType&lt;/code&gt; is used to specify the type of asset. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final import statements should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from constructs import Construct
from cdktf import App, TerraformStack, S3Backend, TerraformOutput, TerraformAsset, AssetType
from cdktf_cdktf_provider_aws import AwsProvider, s3, dynamodb, iam, lambdafunction
import os
import os.path as Path


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Note, we have imported &lt;code&gt;os&lt;/code&gt; and &lt;code&gt;os.path as Path&lt;/code&gt; modules. We will use these modules to get the current working directory and to join the path to the lambda directory.&lt;/p&gt;
&lt;/blockquote&gt;
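&lt;p&gt;To see what these helpers produce, you can run them on their own. The printed path below is illustrative; it will reflect wherever your project actually lives:&lt;/p&gt;

```python
import os
import os.path as Path  # same alias the tutorial uses

# Join the current working directory with the "lambda" folder name.
asset_path = Path.join(os.getcwd(), "lambda")
print(asset_path)  # e.g. /home/user/first_project/lambda
```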

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;TerraformAsset&lt;/code&gt; construct requires a &lt;code&gt;path&lt;/code&gt; argument: the path to the directory or file that you want to package. In this case, we will use the &lt;code&gt;os&lt;/code&gt; module to get the current working directory and then join the path to the &lt;code&gt;lambda&lt;/code&gt; directory. &lt;code&gt;AssetType.ARCHIVE&lt;/code&gt; specifies that the asset should produce an archive. The final snippet should look like this:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

asset = TerraformAsset(self, "lambda_file",
path = Path.join(os.getcwd(), "lambda"),
type = AssetType.ARCHIVE,
)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Creating a Lambda function requires creating an IAM role. Therefore, we will create an IAM role first. We will also attach the &lt;code&gt;AWSLambdaBasicExecutionRole&lt;/code&gt; AWS managed policy to the IAM role. This policy allows the Lambda function to write logs to CloudWatch. Refer to AWS documentation for more information about the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html#:~:text=June%2017%2C%202022-,AWSLambdaBasicExecutionRole,-%E2%80%93%20Lambda%20started%20tracking" rel="noopener noreferrer"&gt;AWSLambdaBasicExecutionRole&lt;/a&gt; policy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;assume_role_policy&lt;/code&gt; argument is the policy that grants permission to assume the IAM role; it takes the policy document as a JSON string. The final snippet of code should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

        lambda_role = iam.IamRole(self, "lambda_role",
        name="my-lambda-url-role",
        managed_policy_arns=[
            "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
        ],
        assume_role_policy="""{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Principal": {
                        "Service": "lambda.amazonaws.com"
                    },
                    "Effect": "Allow",
                    "Sid": ""
                }
            ]
        }""",
        )



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
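&lt;p&gt;Hand-written JSON strings are easy to get wrong. An alternative sketch is to build the policy as a Python dictionary and serialize it with &lt;code&gt;json.dumps&lt;/code&gt;, which fails fast on structural mistakes:&lt;/p&gt;

```python
import json

# Same assume-role policy as above, built from a dict instead of a raw string.
assume_role_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Effect": "Allow",
            "Sid": "",
        }
    ],
})
print(assume_role_policy)
```

&lt;p&gt;The resulting string can be passed directly as the &lt;code&gt;assume_role_policy&lt;/code&gt; argument.&lt;/p&gt;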

&lt;ul&gt;
&lt;li&gt;Now, we are ready to create a Lambda function. The &lt;code&gt;handler&lt;/code&gt; configuration is the name of the Python file that contains the Lambda function, followed by the handler function name. The &lt;code&gt;runtime&lt;/code&gt; provides a language-specific environment that runs in an execution environment. The &lt;code&gt;source_code_hash&lt;/code&gt; argument is the hash of the packaged code and triggers a new deployment whenever the code changes. The &lt;code&gt;filename&lt;/code&gt; argument is the path to the file that contains the Lambda function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final snippet of code should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

        my_lambda = lambdafunction.LambdaFunction(self, "my_lambda",
        function_name="my-lambda-url",
        handler="lambda_function.lambda_handler",
        role=lambda_role.arn,
        runtime="python3.9",
        source_code_hash = asset.asset_hash,
        filename=asset.path,
        )


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
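&lt;p&gt;&lt;code&gt;TerraformAsset&lt;/code&gt; computes the hash for you via &lt;code&gt;asset_hash&lt;/code&gt;, but it helps to know what &lt;code&gt;source_code_hash&lt;/code&gt; represents: Terraform expects the base64-encoded SHA-256 digest of the deployment package (the same value the &lt;code&gt;filebase64sha256&lt;/code&gt; function produces). A small illustrative sketch:&lt;/p&gt;

```python
import base64
import hashlib
import os
import tempfile

def source_code_hash(path):
    """Base64-encoded SHA-256 of a file, mirroring Terraform's
    filebase64sha256() function."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode()

# Demonstrate on a throwaway file standing in for the zipped package.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"dummy zip contents")
    tmp_path = tmp.name

h = source_code_hash(tmp_path)
print(h)  # a 44-character base64 string
os.unlink(tmp_path)
```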

&lt;ul&gt;
&lt;li&gt;We need to enable function url for the lambda function and define an &lt;code&gt;authorization_type&lt;/code&gt; for the function url. The &lt;code&gt;authorization_type&lt;/code&gt; argument is the type of authorization that is used to invoke the function url. The &lt;code&gt;authorization_type&lt;/code&gt; argument can be set to &lt;code&gt;NONE&lt;/code&gt; or &lt;code&gt;AWS_IAM&lt;/code&gt;. The final snippet of code should look like this:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

        my_lambda_url = lambdafunction.LambdaFunctionUrl(self, "my_lambda_url",
        function_name=my_lambda.function_name,
        authorization_type="NONE",
        )


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Finally, we need to create an output for the Lambda function URL. The &lt;code&gt;value&lt;/code&gt; argument holds the output's value and can be a string, number, boolean, or list. The final snippet of code should look like this:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

        TerraformOutput(self, "lambda_url",
        value=my_lambda_url.function_url,
        )


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The final &lt;code&gt;main.py&lt;/code&gt; code should look like this:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

from constructs import Construct
from cdktf import App, TerraformStack, S3Backend, TerraformOutput, TerraformAsset, AssetType
from cdktf_cdktf_provider_aws import AwsProvider, s3, dynamodb, iam, lambdafunction
import os
import os.path as Path

class MyStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str):
        super().__init__(scope, ns)

        AwsProvider(self, "AWS", region="us-east-1", profile="CDKTF")

        #S3 Remote Backend
        S3Backend(self,
        bucket="cdktf-remote-backend-2",
        key="first_project/terraform.tfstate",
        encrypt=True,
        region="us-east-1",
        dynamodb_table="cdktf-remote-backend-lock-2",
        profile="CDKTF",
        )

        # Resources
        s3_backend_bucket = s3.S3Bucket(self, "s3_backend_bucket",
        bucket="cdktf-remote-backend-2",
        )

        dynamodb_lock_table = dynamodb.DynamodbTable(self, "dynamodb_lock_table",
        name="cdktf-remote-backend-lock-2",
        billing_mode="PAY_PER_REQUEST",
        attribute=[
            {
                "name": "LockID",
                "type": "S"
            }
        ],
        hash_key="LockID",
        )

        # Asset for Lambda Function
        asset = TerraformAsset(self, "lambda_file",
        path = Path.join(os.getcwd(), "lambda"),
        type = AssetType.ARCHIVE,
        )

        # IAM Role for Lambda Function
        lambda_role = iam.IamRole(self, "lambda_role",
        name="my-lambda-url-role",
        managed_policy_arns=[
            "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
        ],
        assume_role_policy="""{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Principal": {
                        "Service": "lambda.amazonaws.com"
                    },
                    "Effect": "Allow",
                    "Sid": ""
                }
            ]
        }""",
        )

        # Lambda Function
        my_lambda = lambdafunction.LambdaFunction(self, "my_lambda",
        function_name="my-lambda-url",
        handler="lambda_function.lambda_handler",
        role=lambda_role.arn,
        runtime="python3.9",
        source_code_hash = asset.asset_hash,
        filename=asset.path,
        )

        # Lambda Function Url
        my_lambda_url = lambdafunction.LambdaFunctionUrl(self, "my_lambda_url",
        function_name=my_lambda.function_name,
        authorization_type="NONE",
        )



        # Outputs for Lambda Function Url
        TerraformOutput(self, "lambda_url",
        value=my_lambda_url.function_url,
        )


app = App()
MyStack(app, "first_project")

app.synth()


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;cdktf deploy&lt;/code&gt; to deploy the stack.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdktf deploy
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&amp;gt;Note: you can also run `cdktf deploy --auto-approve` to deploy the stack without confirmation. However, I would suggest to refrain from using this option in production unless you are absolutely sure that you want to deploy the stack without confirmation.

Finally, grab the `lambda_url` output and paste it in your browser. If you see the dancing bananas, then you have successfully deployed your first lambda function using CDKTF.



![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s58zzzwtfkb32h251l0u.png)


&amp;lt;br&amp;gt;

Congratulations! You have successfully deployed a lambda function with a function url using CDKTF. I'm sure you feel like you are on top of the world right now. Well done!

---


**To delete the stack, we need to follow the below steps:**

Note, we chose to allow CDKTF to manage the remote backend. This means that CDKTF will delete the remote backend (the S3 bucket and DynamoDB Table) when we delete the stack. There are many methods to delete the stack, but I find the below method to be the easiest. Let's go through the steps:


A. Add `force_destroy=True` to the `s3_backend_bucket` configurations. The S3 bucket cannot be deleted if it is not empty. This is the reason why we need to add `force_destroy=True`.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    s3_backend_bucket = s3.S3Bucket(self, "s3_backend_bucket",
    bucket="cdktf-remote-backend-2",
    force_destroy=True
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
B. Run `cdktf deploy` to update the S3 bucket configurations:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;cdktf deploy&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
C. Run `cdktf destroy` to delete the entire stack:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;cdktf destroy&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&amp;gt;Note: after running `cdktf destroy`, you will get an error message saying `failed to retrieve lock info`. This is expected due to the fact the dynamodb table is deleted. You can ignore this error message.

---


## Conclusion

In this tutorial, we have learned how to properly install and configure CDKTF, how to migrate from a local backend to an S3 remote backend. We have also learned how to deploy a lambda function with a function url using CDKTF. We have also briefly learned how to read and utilize the CDKTF documentation from the Construct Hub.

The most important thing to remember is that CDKTF is still in its early stages. I am confident that CDKTF will be a great tool for managing Terraform stacks in the near future.


Congratulations on completing this tutorial and overcoming several challenges. You have achieved many learning milestones. I hope this tutorial added value to your learning journey. Thank you for reading! 

&amp;lt;br&amp;gt;

**Omar A Omar**
Site Reliability Engineer
AWS Community Builder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>cdktf</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
    <item>
      <title>Building a Patching Model using AWS Systems Manager - Patch Manager for Mutable Infrastructure</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Fri, 08 Jul 2022 14:24:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-patching-model-using-aws-systems-manager-patch-manager-for-mutable-infrastructure-4739</link>
      <guid>https://dev.to/aws-builders/building-a-patching-model-using-aws-systems-manager-patch-manager-for-mutable-infrastructure-4739</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Building a Patching Model using AWS Systems Manager - Patch Manager for Mutable Infrastructure

&lt;ul&gt;
&lt;li&gt;
AWS Systems Manager Patch Manager

&lt;ul&gt;
&lt;li&gt;What is the Patch Baseline?&lt;/li&gt;
&lt;li&gt;How Patch Baseline Rules Work on Amazon Linux 2&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Patching Model Solution Architecture 

&lt;ul&gt;
&lt;li&gt;Architecture Diagram&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

How to Create a Patching Model for a Single Managed Amazon Linux 2 EC2 Instance - Step by Step

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;How to Create SSM Documents&lt;/li&gt;
&lt;li&gt;
Create IAM Service Role for Maintenance Window

&lt;ul&gt;
&lt;li&gt;Create a policy&lt;/li&gt;
&lt;li&gt;Create IAM resource role&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Create and add inline policies to the EC2 IAM instance profile&lt;/li&gt;

&lt;li&gt;Step 1: Create a Custom Patch Baseline&lt;/li&gt;

&lt;li&gt;Step 2: Tagging the EC2 Instance with the Patch Group Key-Value&lt;/li&gt;

&lt;li&gt;Step 3: Assigning a Patch Group to the Patch Baseline&lt;/li&gt;

&lt;li&gt;Step 4: Creating a Maintenance Window&lt;/li&gt;

&lt;li&gt;Step 5: Registering Targets to the Maintenance Window&lt;/li&gt;

&lt;li&gt;Step 6: Assigning Tasks to the Maintenance Window&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  July 8th, 2022
&lt;/h2&gt;

&lt;h2&gt;
  
  
  AWS Systems Manager Patch Manager
&lt;/h2&gt;

&lt;p&gt;Patch Manager is a capability of AWS Systems Manager. It applies and automates the patching process of managed nodes for both security-related and other types of updates, which makes it a powerful tool for the &lt;a href="https://docs.aws.amazon.com/managedservices/latest/appguide/compute-instance-mutability-aog.html#:~:text=is%20completely%20deployed.-,Mutable,-%3A%20In%20this%20model" rel="noopener noreferrer"&gt;mutable&lt;/a&gt; infrastructure model. It's capable of patching operating systems as well as applications. With the use of a &lt;code&gt;patch baseline&lt;/code&gt;, Patch Manager includes rules for auto-approving patches and for creating lists of approved or rejected patches. Patches can be installed on individual instances or on large groups of managed instances using tags. &lt;br&gt;
To schedule patching, Patch Manager runs tasks from a &lt;code&gt;Maintenance Window&lt;/code&gt;, which is another capability of Systems Manager. Although Patch Manager is not the only patching option, it is one of the most straightforward and practical approaches. This tutorial provides a proof of concept for patching a single managed Amazon Linux 2 instance. Its goal is to equip cloud teams with the know-how they need to start using Patch Manager, so the same approach can then be applied to a fleet of managed instances or on-premises servers.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Patch Manager doesn't support upgrading major versions of operating systems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What is the Patch Baseline?
&lt;/h3&gt;

&lt;p&gt;Patch Manager provides a predefined baseline for each supported operating system. The service uses the native package manager to drive the installation of patches approved by the patch baseline. For Amazon Linux 2, the predefined baseline approves all operating system patches classified as &lt;code&gt;Security&lt;/code&gt; with a severity level of &lt;code&gt;Critical&lt;/code&gt; or &lt;code&gt;Important&lt;/code&gt;. These patches are auto-approved 7 days after release. Moreover, all patches classified as &lt;code&gt;Bugfix&lt;/code&gt; are also auto-approved 7 days after release. The predefined patch baselines are not customizable. However, it's feasible to create a custom patch baseline to control patch classifications, approval/rejection, and the number of auto-approval days after release. &lt;/p&gt;
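&lt;p&gt;As a sketch, the default Amazon Linux 2 rules described above can be written out in the &lt;code&gt;PatchRules&lt;/code&gt; shape used by the SSM API. This is for illustration only, not the literal predefined baseline document; the filter keys follow the boto3 &lt;code&gt;ssm&lt;/code&gt; conventions:&lt;/p&gt;

```python
# Illustrative sketch of the predefined Amazon Linux 2 approval rules
# described above, expressed in the PatchRules shape used by the SSM API.
DEFAULT_AL2_RULES = {
    "PatchRules": [
        {   # Security patches rated Critical/Important, approved 7 days after release
            "PatchFilterGroup": {"PatchFilters": [
                {"Key": "CLASSIFICATION", "Values": ["Security"]},
                {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
            ]},
            "ApproveAfterDays": 7,
        },
        {   # Bugfix patches, also auto-approved 7 days after release
            "PatchFilterGroup": {"PatchFilters": [
                {"Key": "CLASSIFICATION", "Values": ["Bugfix"]},
            ]},
            "ApproveAfterDays": 7,
        },
    ]
}
```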

&lt;h3&gt;
  
  
  How Patch Baseline Rules Work on Amazon Linux 2
&lt;/h3&gt;

&lt;p&gt;Below are the guidelines for the packages selected for update: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Security option&lt;/th&gt;
&lt;th&gt;Equivalent Yum Command&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pre-defined default patch baselines provided by Amazon (non-security updates option is not selected)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;sudo yum update-minimal --sec-severity=critical,important --bugfix -y&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;User Custom patch baselines (non-security updates option is selected)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;sudo yum update --security --bugfix -y&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Please, refer to &lt;a href="https://docs.amazonaws.cn/en_us/systems-manager/latest/userguide/patch-manager-how-it-works-selection.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for further information on how security patches are selected. &lt;/p&gt;




&lt;h2&gt;
  
  
  Patching Model Solution Architecture
&lt;/h2&gt;

&lt;p&gt;The plan is to utilize Patch Manager to develop a patching model for a single managed Amazon Linux 2 EC2 instance by using tags. Based on a scheduled maintenance window, an SSM Agent, installed on the EC2 instance, receives a command to commence the patching process. The agent validates the instance's &lt;code&gt;Patch Group&lt;/code&gt; tag value and queries Patch Manager for an associated patch baseline. Once Patch Manager confirms the patch baseline for the Patch Group tag value, it notifies the SSM Agent to retrieve the patch baseline snapshot. Finally, the SSM Agent begins scanning and installing patches based on the rules defined in the patch baseline snapshot provided by Patch Manager. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;AWS-RunPatchBaselineWithHooks&lt;/code&gt; SSM document will be used to orchestrate the multi-step patch installation. It offers three optional hooks, which allow running SSM documents at three points during the patching cycle (pre-install, post-patch, and post-reboot). We will create three simple SSM documents, as a proof of concept, to be used during the patching cycle. To read more about the AWS-RunPatchBaselineWithHooks SSM document, please refer to this &lt;a href="https://aws.amazon.com/blogs/mt/orchestrating-custom-patch-processes-aws-systems-manager-patch-manager/" rel="noopener noreferrer"&gt;AWS blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Kernel Live Patching&lt;/code&gt; is another feature available for Amazon Linux 2. It allows applying patches without the need for an immediate reboot or any disruption to running applications. We will not be using this feature for our patching model, but I would recommend considering it. For more information about Kernel Live Patching, please refer to the &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;. Bear in mind that if there is a need to run patching outside of the maintenance window for a fleet of instances, AWS recommends, as a best practice, providing a &lt;strong&gt;Snapshot-ID&lt;/strong&gt;. It ensures consistency among the targeted instances. Please refer to the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-about-aws-runpatchbaseline.html#patch-manager-about-aws-runpatchbaseline-parameters-snapshot-id" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for more information. In this patching model, we have allocated a &lt;code&gt;Patch Group&lt;/code&gt; tag per instance; therefore, no snapshot ID is needed for patching outside the maintenance window.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: it's highly recommended to test the patching model in a development environment prior to deploying to production. Also, for multi-account and multi-region patching, please refer to this &lt;a href="https://aws.amazon.com/blogs/mt/centralized-multi-account-and-multi-region-patching-with-aws-systems-manager-automation/" rel="noopener noreferrer"&gt;AWS blog&lt;/a&gt; for more information. &lt;/p&gt;
&lt;/blockquote&gt;
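&lt;p&gt;For illustration, the hook mechanism of &lt;code&gt;AWS-RunPatchBaselineWithHooks&lt;/code&gt; can be sketched as a Run Command invocation. This is a hedged sketch: the hook document names (&lt;code&gt;MyPreInstallDoc&lt;/code&gt;, etc.) are hypothetical placeholders for the three SSM documents created later, and a boto3 &lt;code&gt;ssm&lt;/code&gt; client created elsewhere is assumed to be passed in:&lt;/p&gt;

```python
# Sketch: parameters for AWS-RunPatchBaselineWithHooks. The three hook
# document names below are hypothetical placeholders, not real documents.
PATCH_PARAMS = {
    "Operation": ["Install"],
    "RebootOption": ["RebootIfNeeded"],
    "PreInstallHookDocName": ["MyPreInstallDoc"],    # runs before patching
    "PostInstallHookDocName": ["MyPostInstallDoc"],  # runs after patching, before reboot
    "OnExitHookDocName": ["MyOnExitDoc"],            # runs after reboot
}

def send_patch_command(ssm, targets):
    """Invoke the patch document against tagged targets via Run Command.

    `ssm` is assumed to be a boto3 SSM client; `targets` is a Targets list,
    e.g. [{"Key": "tag:Patch Group", "Values": ["WebServer-Prd"]}].
    """
    return ssm.send_command(
        DocumentName="AWS-RunPatchBaselineWithHooks",
        Targets=targets,
        Parameters=PATCH_PARAMS,
    )
```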

&lt;h3&gt;
  
  
  Architecture Diagram - Patching Model using Patch Manager
&lt;/h3&gt;

&lt;p&gt;This patching model should work as a proof of concept. A similar concept to this architecture can be applied to fleets of managed EC2 instances and on-premises servers. &lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faevrtr1s586g7qjtj08v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faevrtr1s586g7qjtj08v.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: the lambda functions to Slack channel and to PagerDuty are not included in this tutorial.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How to Create a Patching Model for a Single Managed Amazon Linux 2 EC2 Instance - Step by Step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;SSM Agent version 3.0.502 or later (requirement for the AWS-RunPatchBaselineWithHooks SSM document)&lt;/li&gt;
&lt;li&gt;Internet connectivity. The managed instance must have access to the source patch repositories. &lt;/li&gt;
&lt;li&gt;Minimum operating system: Amazon Linux 2 (version 2.0)&lt;/li&gt;
&lt;li&gt;IAM service role for Systems Manager to run Maintenance Window tasks&lt;/li&gt;
&lt;li&gt;Optionally, adding policies to the IAM service role for SNS and CloudWatch logs&lt;/li&gt;
&lt;li&gt;Optionally, a preconfigured S3 bucket to receive the patching command logs&lt;/li&gt;
&lt;li&gt;Optionally, a preconfigured SNS topic for patching event notifications&lt;/li&gt;
&lt;li&gt;Optionally, adding S3 and CloudWatch logs policies to the EC2 instance profile role &lt;/li&gt;
&lt;/ol&gt;
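&lt;p&gt;Prerequisite 1 can be checked programmatically: the &lt;code&gt;AgentVersion&lt;/code&gt; field returned by the SSM &lt;code&gt;DescribeInstanceInformation&lt;/code&gt; API can be compared against 3.0.502. A minimal sketch of the version check (the helper name is my own):&lt;/p&gt;

```python
def agent_supports_hooks(agent_version: str, minimum: str = "3.0.502") -> bool:
    """Return True when a dotted SSM Agent version meets the minimum.

    AWS-RunPatchBaselineWithHooks requires SSM Agent 3.0.502 or later;
    `agent_version` is the AgentVersion string reported by the agent.
    """
    def parts(v: str) -> list:
        # Compare numerically, part by part, so "3.10.0" > "3.9.0".
        return [int(p) for p in v.split(".")]
    return parts(agent_version) >= parts(minimum)
```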






&lt;h3&gt;
  
  
  Step 1: Create a Custom Patch Baseline
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Systems Manager&lt;/strong&gt; console. Under &lt;strong&gt;Node Management&lt;/strong&gt;, select &lt;strong&gt;Patch Manager&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;b&gt;Image 1&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovpikztvl377p4deztli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovpikztvl377p4deztli.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;View predefined patch baselines&lt;/strong&gt; and click on &lt;strong&gt;Create patch baseline&lt;/strong&gt;. Then, fill out the requirements as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Patch baseline details&lt;/strong&gt;: 

&lt;ol&gt;
&lt;li&gt;Name: give the custom baseline a name such as, &lt;strong&gt;AmazonLinux2AllPatchesBaseline&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Description: optionally, add a description for the baseline such as, &lt;strong&gt;Amazon Linux2 All Patches Baseline&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Operating system: from the dropdown menu, select &lt;strong&gt;Amazon Linux 2&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Under &lt;strong&gt;Approval rule for operating systems&lt;/strong&gt;: 

&lt;ol&gt;
&lt;li&gt;Product: select &lt;strong&gt;All&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Classification: &lt;strong&gt;All&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Severity: &lt;strong&gt;All&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Auto-approval: leave the default selected option, &lt;strong&gt;Approve patches after a specified number of days&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Specify the number of days: leave the default &lt;strong&gt;0&lt;/strong&gt; days&lt;/li&gt;
&lt;li&gt;Compliance reporting - optional: leave the default selected option, &lt;strong&gt;Unspecified&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Include nonsecurity updates: check the box to install nonsecurity patches&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Under &lt;strong&gt;Patch exceptions&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Rejected patches - optional: add &lt;code&gt;system-release.*&lt;/code&gt; as shown below. &lt;/li&gt;
&lt;li&gt;Rejected patches action - optional: select &lt;strong&gt;Block&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the purpose of this block is to reject patches to new Amazon Linux releases beyond the Patch Manager supported operating systems.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Patch sources&lt;/strong&gt;: we will not add other source. Please, refer to &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-how-it-works-alt-source-repository.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; on how to define alternative patch source repository&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Manage tags&lt;/strong&gt;: optionally define tags for the patch baseline&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;b&gt;Image 2&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifxsdko0esofozwumzfh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifxsdko0esofozwumzfh.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
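&lt;p&gt;The same custom baseline can also be created through the API instead of the console. A hedged sketch, assuming a boto3 &lt;code&gt;ssm&lt;/code&gt; client is supplied; the request values mirror the console choices above, and the function name is my own:&lt;/p&gt;

```python
def create_custom_baseline(ssm):
    """Sketch of Step 1 via the SSM CreatePatchBaseline API."""
    return ssm.create_patch_baseline(
        Name="AmazonLinux2AllPatchesBaseline",
        Description="Amazon Linux2 All Patches Baseline",
        OperatingSystem="AMAZON_LINUX_2",
        ApprovalRules={"PatchRules": [{
            "PatchFilterGroup": {"PatchFilters": [
                {"Key": "PRODUCT", "Values": ["*"]},
                {"Key": "CLASSIFICATION", "Values": ["*"]},
                {"Key": "SEVERITY", "Values": ["*"]},
            ]},
            "ApproveAfterDays": 0,      # approve immediately (0 days)
            "EnableNonSecurity": True,  # include nonsecurity updates
        }]},
        RejectedPatches=["system-release.*"],
        RejectedPatchesAction="BLOCK",
    )
```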

&lt;h3&gt;
  
  
  Step 2: Assigning a Patch Group to the Patch Baseline
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;AWS Systems Manager&lt;/strong&gt; console. Under &lt;strong&gt;Node Management&lt;/strong&gt;, select &lt;strong&gt;Patch Manager&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click on the &lt;strong&gt;Patch baseline&lt;/strong&gt; tab and select the previously created custom baseline, &lt;strong&gt;AmazonLinux2AllPatchesBaseline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click on the &lt;strong&gt;Baseline ID&lt;/strong&gt;, which leads to the baseline details page. &lt;/li&gt;
&lt;li&gt;From the &lt;strong&gt;Actions&lt;/strong&gt; button, select &lt;strong&gt;Modify patch groups&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Patch groups&lt;/strong&gt; textbox, type in the &lt;strong&gt;WebServer-Prd&lt;/strong&gt; value used to tag the EC2 instance earlier. Then, click the &lt;strong&gt;Add&lt;/strong&gt; button and &lt;strong&gt;Close&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;b&gt;Image 3 - 5&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi451ls6r9oxao8ucln78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi451ls6r9oxao8ucln78.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuajszhus7kkq7s9jeaoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuajszhus7kkq7s9jeaoh.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqhdiq2a4gc0vjukmk3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqhdiq2a4gc0vjukmk3t.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
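&lt;p&gt;The equivalent API call for this step is &lt;code&gt;RegisterPatchBaselineForPatchGroup&lt;/code&gt;. A minimal sketch, assuming a boto3 &lt;code&gt;ssm&lt;/code&gt; client and the baseline ID returned when the custom baseline was created (the function name is my own):&lt;/p&gt;

```python
def attach_patch_group(ssm, baseline_id, patch_group="WebServer-Prd"):
    """Sketch of Step 2: associate a Patch Group with the custom baseline."""
    return ssm.register_patch_baseline_for_patch_group(
        BaselineId=baseline_id,   # ID returned by CreatePatchBaseline
        PatchGroup=patch_group,   # must match the instance's Patch Group tag value
    )
```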

&lt;h3&gt;
  
  
  Step 3: Creating a Maintenance Window
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Systems Manager&lt;/strong&gt; console. Under &lt;strong&gt;Change Management&lt;/strong&gt;, select &lt;strong&gt;Maintenance Windows&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on &lt;strong&gt;Create maintenance window&lt;/strong&gt; and fill out the requirements as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Provide maintenance window details&lt;/strong&gt;:

&lt;ol&gt;
&lt;li&gt;Name: give the maintenance window a name such as, &lt;strong&gt;Patch-WebServer-Prd&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Description: optionally, add a description for the maintenance window such as, &lt;strong&gt;Patching Maintenance Window for Patching WebServer in production&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Unregistered targets: uncheck this option. If selected, it allows registering instances that are not part of the targets.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Under &lt;strong&gt;Schedule&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Specify with: select &lt;strong&gt;CRON/Rate expression&lt;/strong&gt;. For our scenario, we will run patching on the first Wednesday of each month at 9:00pm CDT. Therefore, the CRON expression is &lt;strong&gt;cron(0 21 ? * WED#1 *)&lt;/strong&gt;. We will define the timezone shortly.
&lt;blockquote&gt;
&lt;p&gt;Note: for testing, you can use a &lt;strong&gt;Rate schedule&lt;/strong&gt; to run the patching job every hour or so. Also, refer to the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/reference-cron-and-rate-expressions.html#reference-cron-and-rate-expressions-maintenance-window" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for more information about cron and rate expressions for Systems Manager.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;Duration: the number of hours the maintenance window will run. Type in &lt;strong&gt;2&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Stop initiating tasks: the number of hours before the end of the maintenance window after which the system stops scheduling new tasks. Type in &lt;strong&gt;1&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Window start date: select the starting date and time of this maintenance window such as, 7/6/2022 8:00pm GMT-05:00 (GMT-05:00 is the conversion to US/Central time). &lt;/li&gt;
&lt;li&gt;Window end date: you may define an end date for the maintenance window. For our scenario, leave it empty. &lt;/li&gt;
&lt;li&gt;Schedule timezone: select the &lt;strong&gt;US/Central&lt;/strong&gt; timezone. &lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Central Daylight Time (CDT) is &lt;strong&gt;-5&lt;/strong&gt; hours from &lt;strong&gt;GMT&lt;/strong&gt;; therefore, the maintenance window is in effect starting from the 6th of July at 8:00pm CDT. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Schedule offset: leave it empty. &lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Manage tags&lt;/strong&gt;: optionally define tags for the maintenance window&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Finally, click on &lt;strong&gt;Create maintenance window&lt;/strong&gt; to complete the process.&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;b&gt;Image 6&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs86naeab8i5oys3f5dh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs86naeab8i5oys3f5dh5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
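&lt;p&gt;The same maintenance window can be created via the &lt;code&gt;CreateMaintenanceWindow&lt;/code&gt; API. A sketch mirroring the console choices above, assuming a boto3 &lt;code&gt;ssm&lt;/code&gt; client (the function name is my own):&lt;/p&gt;

```python
def create_patch_window(ssm):
    """Sketch of Step 3 via the SSM CreateMaintenanceWindow API."""
    return ssm.create_maintenance_window(
        Name="Patch-WebServer-Prd",
        Schedule="cron(0 21 ? * WED#1 *)",  # first Wednesday monthly, 9:00pm
        ScheduleTimezone="US/Central",
        Duration=2,                          # window runs for 2 hours
        Cutoff=1,                            # stop initiating tasks 1 hour early
        AllowUnassociatedTargets=False,      # "Unregistered targets" unchecked
    )
```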

&lt;h3&gt;
  
  
  Step 4: Registering Targets to the Maintenance Window
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Systems Manager&lt;/strong&gt; console. Under &lt;strong&gt;Change Management&lt;/strong&gt;, select &lt;strong&gt;Maintenance Windows&lt;/strong&gt;. Then, click &lt;strong&gt;View details&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Press the &lt;strong&gt;Actions&lt;/strong&gt; button and select &lt;strong&gt;Register targets&lt;/strong&gt; from the dropdown menu. &lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Register target&lt;/strong&gt; screen, under &lt;strong&gt;Maintenance window target details&lt;/strong&gt;, fill out the requirements as follows:

&lt;ol&gt;
&lt;li&gt;Name: give the targets a name such as, &lt;strong&gt;WebServer-Prd&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Description: optionally, add a description for the target such as, &lt;strong&gt;The target is the Web Server in production&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Owner information: optionally, you may specify a name of the owner such as, &lt;strong&gt;The A-Engineering Team&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Under the &lt;strong&gt;Targets&lt;/strong&gt; section, for &lt;strong&gt;Target selection&lt;/strong&gt;, select &lt;strong&gt;Specify instance tags&lt;/strong&gt;. Then, enter the patch group key-value tag that we created previously and click &lt;strong&gt;Add&lt;/strong&gt; as shown below. &lt;/li&gt;

&lt;/ol&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Patch Group&lt;/td&gt;
&lt;td&gt;WebServer-Prd&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Finally, click on &lt;strong&gt;Register target&lt;/strong&gt; to complete the process.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;b&gt;Image 7 - 9&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcdxc47ddy4jgi9ta1ix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcdxc47ddy4jgi9ta1ix.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58c60xwl7mm851nhpkg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58c60xwl7mm851nhpkg0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tovikitt1ixiuguqde9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tovikitt1ixiuguqde9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
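&lt;p&gt;Registering the target via the API uses &lt;code&gt;RegisterTargetWithMaintenanceWindow&lt;/code&gt; with the same &lt;code&gt;Patch Group&lt;/code&gt; tag. A minimal sketch, assuming a boto3 &lt;code&gt;ssm&lt;/code&gt; client and the window ID returned in the previous step (the function name is my own):&lt;/p&gt;

```python
def register_patch_targets(ssm, window_id):
    """Sketch of Step 4: register tagged instances as window targets."""
    return ssm.register_target_with_maintenance_window(
        WindowId=window_id,          # ID returned by CreateMaintenanceWindow
        ResourceType="INSTANCE",
        Name="WebServer-Prd",
        # Match instances by the Patch Group key-value tag created previously.
        Targets=[{"Key": "tag:Patch Group", "Values": ["WebServer-Prd"]}],
    )
```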

&lt;h3&gt;
  
  
  Step 5: Assigning Tasks to the Maintenance Window
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Systems Manager&lt;/strong&gt; console. Under &lt;strong&gt;Change Management&lt;/strong&gt;, select &lt;strong&gt;Maintenance Windows&lt;/strong&gt;. Then, click &lt;strong&gt;View details&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Press the &lt;strong&gt;Actions&lt;/strong&gt; button and select &lt;strong&gt;Register run command task&lt;/strong&gt; from the dropdown menu. &lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Register run command task&lt;/strong&gt; screen, under the &lt;strong&gt;Maintenance window task details&lt;/strong&gt; section, fill out the requirements as follows:

&lt;ol&gt;
&lt;li&gt;Name: give the task a name such as, &lt;strong&gt;PatchWebServerPrd&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Description: optionally, add a description for the task such as, &lt;strong&gt;Run Command Task for patching the Web Server in production&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;New task invocation cutoff: check &lt;strong&gt;Enabled&lt;/strong&gt; to prevent new task invocations when the maintenance window cutoff time is reached&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Under &lt;strong&gt;Command document&lt;/strong&gt; section, search for &lt;code&gt;AWS-RunPatchBaselineWithHooks&lt;/code&gt; and then select it. Leave &lt;strong&gt;Document version&lt;/strong&gt; at the default selection &lt;strong&gt;Default Version at runtime&lt;/strong&gt; and leave the &lt;strong&gt;Task priority&lt;/strong&gt; at default value of &lt;strong&gt;1&lt;/strong&gt;.&lt;/li&gt;

&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: AWS-RunPatchBaselineWithHooks is a wrapper document for AWS-RunPatchBaseline document. It divides the patching into two events, before reboot and after reboot for a total of three hooks to support custom functionality. Refer to &lt;a href="https://docs.amazonaws.cn/en_us/systems-manager/latest/userguide/patch-manager-about-aws-runpatchbaselinewithhooks.html" rel="noopener noreferrer"&gt;About the AWS-RunPatchBaselineWithHooks SSM document&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;For the &lt;strong&gt;Targets&lt;/strong&gt; section, keep the default selection, &lt;strong&gt;Selecting registered target groups&lt;/strong&gt;. Then, from the list, select the target group that we assigned to the maintenance window. &lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Rate control&lt;/strong&gt;:

&lt;ol&gt;
&lt;li&gt;For &lt;strong&gt;Concurrency&lt;/strong&gt;, leave it at the default selection, &lt;strong&gt;targets&lt;/strong&gt;, and type in &lt;strong&gt;1&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Error threshold&lt;/strong&gt;, leave it at the default selection, &lt;strong&gt;errors&lt;/strong&gt;, and type in &lt;strong&gt;1&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;For the &lt;strong&gt;IAM service role&lt;/strong&gt;, select the &lt;strong&gt;maintenance-window-role&lt;/strong&gt; role from the dropdown menu. &lt;/li&gt;

&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: if the IAM service role has not been created yet, refer to the Create IAM Service Role for Maintenance Window section. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Under &lt;strong&gt;Output options&lt;/strong&gt;, &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Optionally, check &lt;strong&gt;Enable writing to S3&lt;/strong&gt; and type the name of the preconfigured S3 bucket. In my case, it's &lt;strong&gt;patching-webserver&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: to capture the complete terminal output logs, configure an S3 bucket, because only the last 2,500 characters of a command's output are displayed in the console. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Optionally, add an &lt;strong&gt;S3 key prefix&lt;/strong&gt; such as, &lt;code&gt;patching/webserver/prd&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Check &lt;strong&gt;CloudWatch output&lt;/strong&gt; and, optionally, type in a name for the &lt;strong&gt;Log group&lt;/strong&gt; such as, &lt;code&gt;patching/webserver/prd&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note: we will have to grant the EC2 instance profile role the permissions required to put objects to S3 and put logs to CloudWatch.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;For the &lt;strong&gt;SNS notifications&lt;/strong&gt; section:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check &lt;strong&gt;Enable SNS notifications&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select the preconfigured &lt;strong&gt;IAM role&lt;/strong&gt; for the SNS service. In our case, it's maintenance-window-role because we have added an SNS policy to it.&lt;/li&gt;
&lt;li&gt;Paste the &lt;strong&gt;SNS topic&lt;/strong&gt; ARN&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Event type&lt;/strong&gt;, leave it at the default selection, &lt;strong&gt;All&lt;/strong&gt; events&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Notification type&lt;/strong&gt;, select &lt;strong&gt;Per instance basis notification when the command status on each instance changes&lt;/strong&gt; &lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Under the &lt;strong&gt;Parameters&lt;/strong&gt; section:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;For &lt;strong&gt;Operation&lt;/strong&gt;, select &lt;strong&gt;Install&lt;/strong&gt; from the dropdown menu&lt;/li&gt;
&lt;li&gt;Snapshot Id: leave it empty&lt;/li&gt;
&lt;li&gt;Reboot Option: leave it at the default value, &lt;strong&gt;RebootIfNeeded&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Pre Install Hook Doc Name: type in the name of the pre-install document&lt;/li&gt;
&lt;li&gt;Post Install Hook Doc Name: type in the name of the post-install document&lt;/li&gt;
&lt;li&gt;On Exit Hook Doc Name: type in the name of the on-exit document&lt;/li&gt;
&lt;li&gt;Comment: optionally, add comments about the command&lt;/li&gt;
&lt;li&gt;Timeout (seconds): leave it at the default value, &lt;strong&gt;600&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Finally, click &lt;strong&gt;Register Run command task&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: for details on how to create the documents, refer to the How to Create SSM Documents section. If you don't want to use any of the three hooks, leave them at the default value, &lt;strong&gt;AWS-Noop&lt;/strong&gt;. &lt;/p&gt;
&lt;/blockquote&gt;


&lt;/li&gt;

&lt;/ol&gt;
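&lt;p&gt;As a sketch of what the console steps above amount to, the same task can be registered programmatically through the SSM &lt;code&gt;RegisterTaskWithMaintenanceWindow&lt;/code&gt; API. The window ID, target ID, account number, and role ARN below are hypothetical placeholders, not values from this tutorial:&lt;/p&gt;

```python
# Sketch of the RegisterTaskWithMaintenanceWindow request mirroring the console
# steps above. With boto3 you would pass this dict to
# ssm_client.register_task_with_maintenance_window(**params).
params = {
    "WindowId": "mw-0123456789abcdef0",           # hypothetical maintenance window ID
    "Targets": [{"Key": "WindowTargetIds",
                 "Values": ["e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"]}],  # hypothetical
    "TaskArn": "AWS-RunPatchBaselineWithHooks",   # the wrapper patching document
    "ServiceRoleArn": "arn:aws:iam::111122223333:role/maintenance-window-role",
    "TaskType": "RUN_COMMAND",
    "MaxConcurrency": "1",   # patch one target at a time
    "MaxErrors": "1",        # stop scheduling after one failed target
    "Priority": 1,
    "TaskInvocationParameters": {
        "RunCommand": {
            "DocumentVersion": "$DEFAULT",
            "TimeoutSeconds": 600,
            "Parameters": {
                "Operation": ["Install"],
                "RebootOption": ["RebootIfNeeded"],
                "PreInstallHookDocName": ["Pre-patch-WebServer-Document"],
                "PostInstallHookDocName": ["Post-install-WebServer-Document"],
                "OnExitHookDocName": ["Post-reboot-WebServer-Document"],
            },
        }
    },
}

print(params["TaskArn"], params["MaxConcurrency"], params["MaxErrors"])
```

&lt;p&gt;Keeping the parameters in code like this makes the rate-control and hook choices reviewable alongside the rest of your infrastructure.&lt;/p&gt;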

&lt;p&gt;&lt;b&gt;Image 10 - 11&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yuocf2ovmdjvxladwmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yuocf2ovmdjvxladwmq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc2e60go726haapvpi8x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc2e60go726haapvpi8x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Assigning Patch Group Tag to the Targeted Amazon Linux 2 EC2 Instance
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Systems Manager&lt;/strong&gt; console. Under &lt;strong&gt;Node Management&lt;/strong&gt;, select &lt;strong&gt;Fleet Manager&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;From the &lt;strong&gt;Managed nodes&lt;/strong&gt; list tab, select the EC2 instance that is targeted for patching. Then, click &lt;strong&gt;Node actions&lt;/strong&gt; and select &lt;strong&gt;View details&lt;/strong&gt; from the dropdown menu. &lt;/li&gt;
&lt;li&gt;From the &lt;strong&gt;Tags&lt;/strong&gt; tab, click &lt;strong&gt;Edit&lt;/strong&gt;, add the tag key-value pair below, and then press the &lt;strong&gt;Save&lt;/strong&gt; button:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Patch Group&lt;/td&gt;
&lt;td&gt;WebServer-Prd&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
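&lt;p&gt;The same tag can also be applied outside the console, for example through the SSM &lt;code&gt;AddTagsToResource&lt;/code&gt; API. This is a sketch only; the managed-node ID below is a hypothetical placeholder:&lt;/p&gt;

```python
# Request payload applying the same Patch Group tag as the console steps above.
# With boto3 this dict would be passed to ssm_client.add_tags_to_resource(**tag_request).
tag_request = {
    "ResourceType": "ManagedInstance",
    "ResourceId": "mi-0123456789abcdef0",  # hypothetical managed node ID
    "Tags": [{"Key": "Patch Group", "Value": "WebServer-Prd"}],
}

print(tag_request["Tags"][0])
```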

&lt;p&gt;&lt;b&gt;Image 12 - 14&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbkhwja0mdcc1o6fi94y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbkhwja0mdcc1o6fi94y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwa8n8ewmrsg4mq3y5kk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwa8n8ewmrsg4mq3y5kk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxl78ws4ufrd3t7pl7c4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxl78ws4ufrd3t7pl7c4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;h2&gt;
  
  
  How to Create SSM Documents
&lt;/h2&gt;

&lt;p&gt;We will create three simple documents as a proof of concept. Please refer to the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/create-ssm-doc.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for more details about creating SSM documents. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Systems Manager&lt;/strong&gt; console. Under &lt;strong&gt;Shared Resources&lt;/strong&gt;, select &lt;strong&gt;Documents&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create document&lt;/strong&gt; button and select &lt;strong&gt;Command or Session&lt;/strong&gt; from the dropdown menu.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Create document&lt;/strong&gt; screen:

&lt;ol&gt;
&lt;li&gt;Name: give the document a name such as &lt;strong&gt;Pre-patch-WebServer-Document&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Target type: leave it empty &lt;/li&gt;
&lt;li&gt;Document type: leave it at the default, &lt;strong&gt;Command document&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;Content: replace the default content in the &lt;strong&gt;YAML&lt;/strong&gt; editor with the document below&lt;/li&gt;
&lt;li&gt;Document tags: optionally, add a key-value tag to the document&lt;/li&gt;
&lt;li&gt;Finally, click &lt;strong&gt;Create document&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;b&gt;Image 15&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vq40kv8bff8sqhj23dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vq40kv8bff8sqhj23dg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
schemaVersion: '2.2'
description: Pre-patch Web Server Document
parameters: {}
mainSteps:
- action: aws:runShellScript
  name: configureServer
  inputs:
    runCommand:
    - sudo systemctl status httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
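&lt;p&gt;The same document can also be created from code. The sketch below builds the YAML content shown above and the request that, with boto3, would be passed to &lt;code&gt;ssm_client.create_document(**doc_request)&lt;/code&gt; (or saved to a file for &lt;code&gt;aws ssm create-document&lt;/code&gt; on the CLI):&lt;/p&gt;

```python
# Build the pre-patch hook document content and the CreateDocument request
# equivalent to the console steps above.
content = "\n".join([
    "---",
    "schemaVersion: '2.2'",
    "description: Pre-patch Web Server Document",
    "parameters: {}",
    "mainSteps:",
    "- action: aws:runShellScript",
    "  name: configureServer",
    "  inputs:",
    "    runCommand:",
    "    - sudo systemctl status httpd",
])

doc_request = {
    "Name": "Pre-patch-WebServer-Document",
    "DocumentType": "Command",
    "DocumentFormat": "YAML",
    "Content": content,
}

print(doc_request["Name"])
```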



&lt;ol&gt;
&lt;li&gt;Repeat the process from bullet #3 for &lt;code&gt;Post-install-WebServer-Document&lt;/code&gt; and &lt;code&gt;Post-reboot-WebServer-Document&lt;/code&gt;. The &lt;strong&gt;YAML&lt;/strong&gt; contents are below.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
schemaVersion: '2.2'
description: Post-install Web Server Document
parameters: {}
mainSteps:
- action: aws:runShellScript
  name: configureServer
  inputs:
    runCommand:
    - sudo systemctl stop httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
schemaVersion: '2.2'
description: Post-reboot Web Server Document
parameters: {}
mainSteps:
- action: aws:runShellScript
  name: configureServer
  inputs:
    runCommand:
    - sudo systemctl start httpd
    - sudo systemctl status httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;b&gt;Image 16&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvc53b5e3uqfw6svoeu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvc53b5e3uqfw6svoeu6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;h2&gt;
  
  
  Create IAM Service Role for Maintenance Window
&lt;/h2&gt;

&lt;p&gt;Systems Manager needs permissions to run maintenance window tasks on your behalf. Below are the steps to create the required IAM role:&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a policy:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;From the &lt;strong&gt;IAM&lt;/strong&gt; console, navigate to &lt;strong&gt;Policies&lt;/strong&gt; and click &lt;strong&gt;Create policy&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;JSON&lt;/strong&gt; tab, clear the default content, paste the policy below, and click &lt;strong&gt;Next:Tags&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand",
                "ssm:CancelCommand",
                "ssm:ListCommands",
                "ssm:ListCommandInvocations",
                "ssm:GetCommandInvocation",
                "ssm:ListTagsForResource",
                "ssm:GetParameters"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "resource-groups:ListGroups",
                "resource-groups:ListGroupResources"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "tag:GetResources"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:sns:*:*:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "ssm.amazonaws.com"
                    ]
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
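&lt;p&gt;Before pasting a policy into the console, it can be worth a quick local sanity check that the JSON parses and that the &lt;code&gt;iam:PassRole&lt;/code&gt; statement stays scoped to Systems Manager. The snippet below checks an abbreviated copy of that statement from the policy above:&lt;/p&gt;

```python
import json

# Parse the PassRole statement and confirm it is restricted to ssm.amazonaws.com,
# so the role cannot be passed to arbitrary services.
policy = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": ["ssm.amazonaws.com"]
                }
            }
        }
    ]
}
""")

pass_role = [s for s in policy["Statement"] if s.get("Action") == "iam:PassRole"]
assert pass_role[0]["Condition"]["StringEquals"]["iam:PassedToService"] == ["ssm.amazonaws.com"]
print("policy parses and PassRole is scoped to ssm.amazonaws.com")
```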



&lt;ol&gt;
&lt;li&gt;Optionally, add key-value tag pairs and click &lt;strong&gt;Next:Review&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Give the policy a name and description such as:
a. Name: &lt;strong&gt;maintenance-window-policy&lt;/strong&gt;
b. Description: &lt;strong&gt;The policy allows Systems Manager to run maintenance window tasks&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Finally, click &lt;strong&gt;Create policy&lt;/strong&gt; &lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Create IAM resource role:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;From the &lt;strong&gt;IAM&lt;/strong&gt; console, navigate to &lt;strong&gt;Roles&lt;/strong&gt; and click &lt;strong&gt;Create role&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Select trusted entity&lt;/strong&gt; page:

&lt;ol&gt;
&lt;li&gt;Trusted entity type: leave at default, &lt;strong&gt;AWS service&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Use case - Use cases for other AWS services: search for &lt;strong&gt;Systems Manager&lt;/strong&gt; and select it (NOT Systems Manager - Inventory and Maintenance Windows)&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;On the &lt;strong&gt;Add permissions&lt;/strong&gt; screen, search for the previously created policy, &lt;strong&gt;maintenance-window-policy&lt;/strong&gt;, and select it. Click &lt;strong&gt;Next&lt;/strong&gt;
&lt;/li&gt;

&lt;li&gt;Give the role a name such as &lt;strong&gt;maintenance-window-role&lt;/strong&gt; and click &lt;strong&gt;Create role&lt;/strong&gt;.&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;b&gt;Image 17 - 19&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fimvcqbb0y42uk7h310.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fimvcqbb0y42uk7h310.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuca65p07hnksunlfwi2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuca65p07hnksunlfwi2c.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsf0xbzkizbwf0ulfkx20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsf0xbzkizbwf0ulfkx20.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create and add inline policies to the EC2 IAM instance profile:
&lt;/h2&gt;

&lt;p&gt;The EC2 instance profile requires permissions to write data to the S3 bucket and to put log events to CloudWatch. We will create inline policies to grant the EC2 instance profile the required permissions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the &lt;strong&gt;IAM&lt;/strong&gt; console, navigate to &lt;strong&gt;Roles&lt;/strong&gt; and search for your EC2 instance profile role. Then, click on the &lt;strong&gt;Role name&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under the &lt;strong&gt;Permissions&lt;/strong&gt; tab:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From &lt;strong&gt;Add permissions&lt;/strong&gt; button, select &lt;strong&gt;Create inline policy&lt;/strong&gt; from the dropdown menu&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;JSON&lt;/strong&gt; tab and replace the default content with the below policy:
&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"Version": "2012-10-17",
"Statement": [
    {
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;



&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Review policy&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Review policy&lt;/strong&gt; page, give the policy a name such as, &lt;code&gt;S3-Put-Instance-Profile&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Finally, click &lt;strong&gt;Create policy&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Repeat the process from bullet 2 to create an inline policy that allows the EC2 instance to create log streams and put log events to CloudWatch. Use the inline policy below:&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:CreateLogGroup"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
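&lt;p&gt;As a quick local check, the two inline policies above can be merged and scanned to confirm every action the tutorial needs is present before attaching them to the instance profile. This is a sketch with the policies copied inline:&lt;/p&gt;

```python
# Merge the S3 and CloudWatch Logs inline policies and collect the granted
# actions, so a missing permission is caught before patching night.
s3_policy = {"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": "s3:PutObject", "Resource": "*"}]}
logs_policy = {"Version": "2012-10-17", "Statement": [
    {"Sid": "VisualEditor0", "Effect": "Allow",
     "Action": ["logs:CreateLogStream", "logs:PutLogEvents", "logs:CreateLogGroup"],
     "Resource": "*"}]}

merged = {"Version": "2012-10-17",
          "Statement": s3_policy["Statement"] + logs_policy["Statement"]}

actions = set()
for stmt in merged["Statement"]:
    action = stmt["Action"]
    actions.update([action] if isinstance(action, str) else action)

print(sorted(actions))
```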








&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Patch Manager is an effective and powerful tool. It helps cloud teams manage security risks and address vulnerabilities by automating the patching process. By the end of this tutorial, we have learned how to schedule patching for a single managed Amazon Linux 2 instance using Patch Manager and its features. With this knowledge, we can build a more intricate patching model for a fleet of managed instances or on-premises servers.&lt;/p&gt;

&lt;p&gt;I hope this tutorial helps you learn more about Patch Manager and how to use it for your patching models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://omar2cloud.github.io/" rel="noopener noreferrer"&gt;Omar A Omar&lt;/a&gt;&lt;/p&gt;




</description>
      <category>aws</category>
      <category>awscommunitybuilder</category>
      <category>awssystemsmanager</category>
      <category>patchmanager</category>
    </item>
    <item>
      <title>How to Create a FREE Custom Domain Name for Your Lambda URL - A Step by Step Tutorial</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Sun, 10 Apr 2022 05:42:44 +0000</pubDate>
      <link>https://dev.to/omarcloud20/how-to-create-a-free-custom-domain-name-for-your-lambda-url-a-step-by-step-tutorial-47jl</link>
      <guid>https://dev.to/omarcloud20/how-to-create-a-free-custom-domain-name-for-your-lambda-url-a-step-by-step-tutorial-47jl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;On the 6th of April 2022, &lt;a href="https://aws.amazon.com/blogs/aws/announcing-aws-lambda-function-urls-built-in-https-endpoints-for-single-function-microservices/" rel="noopener noreferrer"&gt;AWS announced Lambda Function URLs with a Built-in HTTPS Endpoint for Single-Function Microservices&lt;/a&gt;. The new feature allows users to configure an HTTPS endpoint for a Lambda function without employing any additional resources such as AWS API Gateway or Application Load Balancer. AWS describes it as a highly available, scalable and secure HTTPS service. The newly added feature was well received by the community. I called it, &lt;strong&gt;"one small step for FaaS, one giant leap for Serverless".&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Several articles have been published that discuss and introduce the newly released feature. &lt;strong&gt;Aj Stuyvenberg&lt;/strong&gt;, an AWS Community Builder, posted a well-written article about Lambda Function URLs titled &lt;a href="https://dev.to/aws-builders/introducing-lambda-function-urls-4ahd"&gt;Introducing Lambda Function URLs&lt;/a&gt;. Another article worth reading was written by &lt;a href="https://www.serverless.com/blog/aws-lambda-function-urls-with-serverless-framework" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt;. Moreover, the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; is at your fingertips if you want to dig a bit deeper. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ogr4xcr4hslxw8uzozq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ogr4xcr4hslxw8uzozq.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;With that being said, the Lambda Function URLs feature does not support custom domain names out of the box; therefore, I have decided to write this step-by-step tutorial to walk you through creating a free custom domain name for your Lambda URL. The solution utilizes an AWS Route 53 hosted zone, an AWS CloudFront distribution, AWS Certificate Manager, and Freenom for free custom domain name registration. In addition, the Lambda function hosts a static page for illustration purposes. To reiterate, the goal is to show, step by step, how to create a custom domain name for a Lambda Function URL. Alright, enough babbling, let's get at it. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  &lt;a href="https://aws.amazon.com/route53/pricing/#:~:text=Hosted%20Zones%20and%20Records" rel="noopener noreferrer"&gt;AWS Route 53 hosted zone costs $0.50/month&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  The Solution Architecture Diagram:
&lt;/h3&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysi562uvswj3fi9iv8lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysi562uvswj3fi9iv8lw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;h2&gt;
  
  
  Tutorial Steps:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Obtain a free domain name from &lt;a href="https://www.freenom.com/en/index.html?lang=en" rel="noopener noreferrer"&gt;Freenom&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a Lambda function to host a static page &lt;/li&gt;
&lt;li&gt;Create a hosted zone in &lt;a href="https://aws.amazon.com/route53/" rel="noopener noreferrer"&gt;AWS Route 53&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Request a Public SSL Certificate from &lt;a href="https://aws.amazon.com/certificate-manager/" rel="noopener noreferrer"&gt;AWS Certificate Manager&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create an &lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;AWS CloudFront&lt;/a&gt; distribution for the Lambda URL&lt;/li&gt;
&lt;li&gt;Create Route 53 A-record for the CloudFront distribution and confirm the custom domain is functioning &lt;/li&gt;
&lt;li&gt;Do your victory dance 😉&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  throughout the tutorial, leave default options as is unless instructed otherwise.&lt;/p&gt;
&lt;/blockquote&gt;






&lt;h2&gt;
  
  
  Step 1: Obtain a free domain name from Freenom
&lt;/h2&gt;

&lt;p&gt;1- Navigate to &lt;a href="https://www.freenom.com/en/index.html?lang=en" rel="noopener noreferrer"&gt;Freenom&lt;/a&gt; and create a free account.&lt;br&gt;
2- Select &lt;strong&gt;Register a New Domain&lt;/strong&gt; as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps47hsgg22gd1pet22lq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps47hsgg22gd1pet22lq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Check the availability of a domain name of your choosing. In my case, it’s &lt;strong&gt;lambda.cf&lt;/strong&gt;, as it's available and free 😉. Now, we are ready to &lt;strong&gt;Checkout&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd50371v0fhmihbbtpwke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd50371v0fhmihbbtpwke.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Choose the period for your selected domain. It ranges from 1 to 12 months. Then, click &lt;strong&gt;Continue&lt;/strong&gt;.&lt;br&gt;
5- Read the Terms &amp;amp; Conditions and select the checkbox once you're done. Finally, click &lt;strong&gt;Complete Order&lt;/strong&gt; to register the free domain name. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations. You have successfully registered a free domain name.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  it might take an hour or so for the domain registration to take effect. &lt;/p&gt;
&lt;/blockquote&gt;






&lt;h2&gt;
  
  
  Step 2: Create a Lambda function to host a static web page
&lt;/h2&gt;

&lt;p&gt;1- From the AWS Lambda console, click &lt;strong&gt;Create function&lt;/strong&gt;.&lt;br&gt;
2- Give your function a name and select &lt;strong&gt;Python 3.9&lt;/strong&gt; for a &lt;strong&gt;Runtime&lt;/strong&gt;. Under &lt;strong&gt;Advanced settings&lt;/strong&gt;, check &lt;strong&gt;Enable function URL&lt;/strong&gt; and select &lt;strong&gt;None&lt;/strong&gt; for &lt;strong&gt;Auth type&lt;/strong&gt;. Then, click &lt;strong&gt;Create function&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzbgc5fyorl6uxagv399.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzbgc5fyorl6uxagv399.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Copy and paste the below Python code into the body of the &lt;strong&gt;lambda_function&lt;/strong&gt; as shown below and then click &lt;strong&gt;Deploy&lt;/strong&gt;. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;


&lt;span class="c1"&gt;#******************************************************************************************
# Author - Omar A Omar
# This lambda function will act as a static web page
#******************************************************************************************
&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusDescription&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200 OK&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;isBase64Encoded&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/html; charset=utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    &amp;lt;html&amp;gt;
        &amp;lt;head&amp;gt;
            &amp;lt;title&amp;gt;Lambda URL&amp;lt;/title&amp;gt;
            &amp;lt;style&amp;gt;
                html, body {
                background-color:rgb(22, 30, 43);
                margin: 10px; padding: 10px;
                font-family: arial; font-weight: 100; font-size: 1em;
                text-align: center;
                }
                html, h1 {
                color: white;
                font-family: verdana;
                font-size: 150%;
                }
                html, p {
                color: white;
                font-size: 50%;
                }
            &amp;lt;/style&amp;gt;
        &amp;lt;/head&amp;gt;
        &amp;lt;body&amp;gt;

            &amp;lt;h1&amp;gt;Hello Friend!&amp;lt;/h1&amp;gt;
            &amp;lt;p style=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;color:White;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m a static web page running on a Lambda function&amp;lt;/p&amp;gt;
            &amp;lt;img src=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://media.tenor.com/YhKAJhNKFeoAAAAC/dance-dancing.gif&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; width=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;450&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; /&amp;gt;
        &amp;lt;/body&amp;gt;
    &amp;lt;/html&amp;gt;
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0og4ux061f0d75ir55y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0og4ux061f0d75ir55y1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Click on the &lt;strong&gt;Function URL&lt;/strong&gt; to open the Lambda static page. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot4fvggn6ayjc3mu0kj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fot4fvggn6ayjc3mu0kj4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations. Now, you have a Lambda function hosting a static page.&lt;/strong&gt;&lt;/p&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  If you're planning to implement weighted traffic shifting and safe deployments, AWS recommends associating an alias with your Lambda function. Please refer to &lt;a href="https://aws.amazon.com/blogs/aws/announcing-aws-lambda-function-urls-built-in-https-endpoints-for-single-function-microservices/#:~:text=How%20Lambda%20Function%20URLs%20Work" rel="noopener noreferrer"&gt;Announcing AWS Lambda Function URLs: Built-in HTTPS Endpoints for Single-Function Microservices&lt;/a&gt; for more information. &lt;/p&gt;
&lt;/blockquote&gt;
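
&lt;p&gt;For readers who prefer scripting this step, the Function URL we created in the console boils down to a small set of parameters. Below is a minimal boto3-style sketch of the request shape; the function name and alias are hypothetical:&lt;/p&gt;

```python
# Sketch of the parameters boto3's Lambda client takes to create a
# Function URL (create_function_url_config). The function name and
# alias below are hypothetical placeholders, not from this tutorial.
url_config = {
    "FunctionName": "my-static-page",  # hypothetical function name
    "Qualifier": "live",               # alias, recommended for safe deployments
    "AuthType": "NONE",                # public URL, no IAM auth
}

# With boto3 you would pass this as:
# boto3.client("lambda").create_function_url_config(**url_config)
```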








&lt;h2&gt;
  
  
  3 Create a Hosted Zone in AWS Route 53
&lt;/h2&gt;

&lt;p&gt;1- From AWS Route 53 console, click on &lt;strong&gt;Hosted zones&lt;/strong&gt;. Then, click &lt;strong&gt;Create hosted zone&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2yk9c5fgrp42rghwczx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2yk9c5fgrp42rghwczx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- Add the &lt;strong&gt;Domain name&lt;/strong&gt; that we have previously obtained from Freenom. In my case, it's &lt;strong&gt;lambda.cf&lt;/strong&gt;. Then, click &lt;strong&gt;Create hosted zone&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9aluadu5pb31i3sbw8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9aluadu5pb31i3sbw8y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Now, we will need to copy the name servers (type &lt;strong&gt;NS&lt;/strong&gt;) to Freenom for our domain name to point to this hosted zone. Let's copy the four name servers, as shown below, to Freenom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59cwxgol5gz2l8nqmya9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59cwxgol5gz2l8nqmya9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- On Freenom, click on &lt;strong&gt;Manage Domain&lt;/strong&gt; and then on &lt;strong&gt;nameservers&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcty7shzbpbn4g53scsdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcty7shzbpbn4g53scsdd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3d8txz3k2n18j2kqz1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3d8txz3k2n18j2kqz1u.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5- Select &lt;strong&gt;Use custom nameservers (enter below)&lt;/strong&gt; and paste the four name servers from our Route 53 hosted zone one at a time. Finally, click &lt;strong&gt;Change Nameservers&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6b25v9ns0n9gehe23r6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6b25v9ns0n9gehe23r6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  Don't copy the &lt;strong&gt;period&lt;/strong&gt; at the end of the name servers from our Route 53 hosted zone. At this point, we are done with Freenom; you can sign out and close the tab.&lt;/p&gt;
&lt;/blockquote&gt;
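
&lt;p&gt;The console form we just filled in reduces to two fields. Here is a minimal boto3-style sketch, assuming the tutorial's &lt;strong&gt;lambda.cf&lt;/strong&gt; domain; the four NS values we copied to Freenom come back in the response's delegation set:&lt;/p&gt;

```python
# Sketch of a Route 53 create_hosted_zone request (boto3 call shape).
# CallerReference must be unique per request; a UUID is a common choice.
import uuid

hosted_zone_request = {
    "Name": "lambda.cf",                  # the domain registered with Freenom
    "CallerReference": str(uuid.uuid4()),  # idempotency token
}

# With boto3: response = boto3.client("route53").create_hosted_zone(**hosted_zone_request)
# The four name servers are then in response["DelegationSet"]["NameServers"].
```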








&lt;h2&gt;
  
  
  4 Request a Public SSL Certificate from AWS Certificate Manager
&lt;/h2&gt;

&lt;p&gt;1- From the AWS Certificate Manager console, click &lt;strong&gt;Request&lt;/strong&gt;. Keep the default, &lt;strong&gt;Request a public certificate&lt;/strong&gt; selected and click &lt;strong&gt;Next&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flt7eacome91u8bb8lzuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flt7eacome91u8bb8lzuw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- Under &lt;strong&gt;Fully qualified domain name&lt;/strong&gt;, type in your domain name. Then, click &lt;strong&gt;Add another name to this certificate&lt;/strong&gt; and add an asterisk and period &lt;code&gt;*.&lt;/code&gt; as a wildcard to your domain name. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt; Adding the wildcard to the second qualified domain name allows us to use the same certificate for other CNAMEs and A-records. If you don't know what a CNAME is and can't differentiate between types of DNS records, it's a great time to fill in the knowledge gap. I would highly recommend taking a look at the &lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for more information. &lt;/p&gt;
&lt;/blockquote&gt;
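
&lt;p&gt;The same request can be sketched programmatically. The following shows the shape of the parameters ACM expects for an apex domain plus a wildcard SAN with DNS validation (boto3-style sketch; the domain mirrors the tutorial, and note that certificates used with CloudFront must be requested in &lt;strong&gt;us-east-1&lt;/strong&gt;):&lt;/p&gt;

```python
# Sketch of an ACM request_certificate call: apex domain plus a wildcard
# SubjectAlternativeName, validated via DNS. For CloudFront, this request
# must go to the us-east-1 region.
cert_request = {
    "DomainName": "lambda.cf",
    "SubjectAlternativeNames": ["*.lambda.cf"],  # wildcard covers CNAMEs/A-records
    "ValidationMethod": "DNS",
}

# With boto3: boto3.client("acm", region_name="us-east-1").request_certificate(**cert_request)
```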

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvp37v5817hm9clfhspa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhvp37v5817hm9clfhspa.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No other changes are needed. Click &lt;strong&gt;Request&lt;/strong&gt; to submit the certificate request. Notice that, as shown below, the Status is &lt;strong&gt;Pending validation&lt;/strong&gt;. We still need to complete one more step: validation.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk2najqm7gwf2dd3qbwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk2najqm7gwf2dd3qbwc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Let's validate the certificate. Under &lt;strong&gt;Certificate ID&lt;/strong&gt;, click on the newly created certificate ID. We will be directed to the certificate status page; under the &lt;strong&gt;Domains&lt;/strong&gt; section, click &lt;strong&gt;Create records in Route 53&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvcrpgd9kfdfctfo572q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvcrpgd9kfdfctfo572q.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Previously, the validation records had to be added manually, but AWS has since added this easy-to-use feature. Now, we can add the DNS records to our hosted zone in Route 53 automatically by clicking &lt;strong&gt;Create records&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4x5qw5w1xbfo8c3q3c6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4x5qw5w1xbfo8c3q3c6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  If you face any hiccups adding the validation records automatically, copy the CNAME name and CNAME value from the &lt;strong&gt;Domains&lt;/strong&gt; section and use them to manually create a validation CNAME record in our hosted zone.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;5- To confirm that the validation CNAME record has been added correctly, head back to our hosted zone, where we should see the CNAME record in the list of DNS records, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbv9s00k9yafziekj5fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbv9s00k9yafziekj5fa.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  Don't forget to refresh the records in the hosted zone if the newly added validation CNAME record has not shown up. Also, the &lt;strong&gt;Status&lt;/strong&gt; of the certificate in AWS Certificate Manager might take a few minutes to become &lt;strong&gt;Issued&lt;/strong&gt;. Therefore, take a break, grab a coffee, or even do a flip 😄&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3g598h4rx2rvkdww7sg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3g598h4rx2rvkdww7sg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;h2&gt;
  
  
  5 Create an AWS CloudFront Distribution for the Lambda URL
&lt;/h2&gt;

&lt;p&gt;1- From the CloudFront console, click &lt;strong&gt;Create distribution&lt;/strong&gt;. &lt;br&gt;
2- For the &lt;strong&gt;Origin domain&lt;/strong&gt;, enter your Lambda Function URL.&lt;br&gt;
3- For &lt;strong&gt;Protocol&lt;/strong&gt;, select &lt;strong&gt;HTTPS only&lt;/strong&gt;.&lt;br&gt;
4- Leave &lt;strong&gt;Origin path&lt;/strong&gt; empty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80bzal7euxysrsyxmq85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80bzal7euxysrsyxmq85.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5- For the &lt;strong&gt;Name&lt;/strong&gt;, you can replace the default name created by CloudFront with one of your choosing. &lt;br&gt;
6- For &lt;strong&gt;Viewer protocol policy&lt;/strong&gt;, select &lt;strong&gt;Redirect HTTP to HTTPS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;7- Under &lt;strong&gt;Settings&lt;/strong&gt;, for &lt;strong&gt;Alternate domain name (CNAME)&lt;/strong&gt;, click &lt;strong&gt;Add item&lt;/strong&gt; and add your domain name. In my case, it's &lt;strong&gt;lambda.cf&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;8- From &lt;strong&gt;Custom SSL certificate - optional&lt;/strong&gt;, let's select our certificate from the drop-down menu. Then, click &lt;strong&gt;Create distribution&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0x56kjjyt0jxrv2wtbew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0x56kjjyt0jxrv2wtbew.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  As shown below, the CloudFront distribution might take a few minutes to propagate the deployment around the globe; therefore, take a break, grab a cup of coffee, or even do a flip, why not 😄&lt;/p&gt;
&lt;/blockquote&gt;
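
&lt;p&gt;For reference, the console choices above roughly map to the following partial distribution config (a boto3-style sketch; the origin host and certificate ARN are placeholders, and a real &lt;code&gt;create_distribution&lt;/code&gt; call needs a few more required fields such as &lt;code&gt;CallerReference&lt;/code&gt; and &lt;code&gt;Enabled&lt;/code&gt;):&lt;/p&gt;

```python
# Partial sketch of a CloudFront DistributionConfig mirroring the console
# walkthrough: Lambda URL origin over HTTPS only, viewers redirected to
# HTTPS, a custom domain alias, and the ACM certificate. Placeholders only.
distribution_config = {
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "lambda-url-origin",
        "DomainName": "abc123.lambda-url.us-east-1.on.aws",  # placeholder Lambda URL host
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",  # step 3 above
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "lambda-url-origin",
        "ViewerProtocolPolicy": "redirect-to-https",  # step 6 above
    },
    "Aliases": {"Quantity": 1, "Items": ["lambda.cf"]},  # step 7 above
    "ViewerCertificate": {
        "ACMCertificateArn": "arn:aws:acm:us-east-1:111111111111:certificate/placeholder",
        "SSLSupportMethod": "sni-only",
    },
}
```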

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmgmodnf6nsgi3g9bdua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmgmodnf6nsgi3g9bdua.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9- Once the deployment is completed, click on your distribution ID. Copy and paste the &lt;strong&gt;Distribution domain name&lt;/strong&gt; into your browser. We should see our static page.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyj4kqvunggram91s9uqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyj4kqvunggram91s9uqj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c8nnmi9y8mwccpqy9qi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c8nnmi9y8mwccpqy9qi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations. We now have our Lambda URL distributed to all AWS edge locations using a highly performant, secure, and convenient content delivery network. Yep, it's CloudFront.&lt;/strong&gt;&lt;/p&gt;






&lt;h2&gt;
  
  
  6 Create a Route 53 A-record for the CloudFront distribution
&lt;/h2&gt;

&lt;p&gt;1- Let's head once more to Route 53 console and add an alias record. From our hosted zone, click &lt;strong&gt;Create record&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut35pm2902ge2ny9h7o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut35pm2902ge2ny9h7o0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- Keep the &lt;strong&gt;Record type&lt;/strong&gt; as an A-record, but toggle the &lt;strong&gt;Value&lt;/strong&gt; switch to &lt;strong&gt;Alias&lt;/strong&gt;. Then, from the drop-down menu, select &lt;strong&gt;Alias to CloudFront distribution&lt;/strong&gt; and paste our &lt;strong&gt;Distribution domain name&lt;/strong&gt; from the CloudFront console, as shown below. Bear in mind, the Distribution domain name should also show up as a selectable option; this is the distribution we created for our Lambda URL. Finally, click &lt;strong&gt;Create records&lt;/strong&gt;. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  If you're copying and pasting the CloudFront Distribution domain name, remove the protocol from the URL as shown below (remove &lt;code&gt;https://&lt;/code&gt;).&lt;/p&gt;
&lt;/blockquote&gt;
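
&lt;p&gt;Under the hood, the console builds a change batch like the following. Here is a minimal boto3-style sketch (the CloudFront domain name is a placeholder; &lt;code&gt;Z2FDTNDATAQYW2&lt;/code&gt; is the fixed hosted-zone ID AWS uses for all CloudFront alias targets):&lt;/p&gt;

```python
# Sketch of the Route 53 change batch for an alias A-record pointing the
# apex domain at a CloudFront distribution. Z2FDTNDATAQYW2 is the constant
# hosted-zone ID for CloudFront aliases; the dxxxx host is a placeholder.
change_batch = {
    "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "lambda.cf",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",           # constant for CloudFront
                "DNSName": "dxxxxxxxxxxxx.cloudfront.net",  # no https:// prefix
                "EvaluateTargetHealth": False,
            },
        },
    }]
}

# With boto3: boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="our-hosted-zone-id", ChangeBatch=change_batch)
```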

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yfd1ep4a0r6qwd49suk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yfd1ep4a0r6qwd49suk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's check whether or not our custom domain name for the Lambda URL works. It's the moment of truth 🙊 Let's paste our custom domain name into our browser. Wait for it...Wait for it...I hope you're doing your victory dance by now 🥳&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyv8yeyuj70cci0h4tn65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyv8yeyuj70cci0h4tn65.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;I might have congratulated you multiple times during the tutorial, but it does not hurt to say it one more time: &lt;strong&gt;CONGRATULATIONS&lt;/strong&gt;💥. &lt;/p&gt;

&lt;p&gt;We have successfully implemented a solution that employs an AWS Route 53 hosted zone, AWS CloudFront distribution, AWS Certificate Manager and Freenom for a free custom domain name registration. Utilizing this solution, we have given our Lambda function URL a custom domain name.&lt;/p&gt;

&lt;p&gt;I hope this tutorial adds value to your learning journey, and I can't wait to see what the future holds for Lambda Function URLs. Great things indeed. &lt;/p&gt;

</description>
      <category>serverless</category>
      <category>awscommunitybuilder</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>Streaming to AWS Kinesis Data Streams using Kinesis Agent - Step by Step Tutorial</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Tue, 22 Mar 2022 05:06:27 +0000</pubDate>
      <link>https://dev.to/omarcloud20/streaming-to-aws-kinesis-data-streams-using-kinesis-agent-step-by-step-tutorial-5enk</link>
      <guid>https://dev.to/omarcloud20/streaming-to-aws-kinesis-data-streams-using-kinesis-agent-step-by-step-tutorial-5enk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;The purpose of this tutorial is to establish a solid understanding of AWS Kinesis services and how to distinguish between &lt;strong&gt;AWS Kinesis Data Streams&lt;/strong&gt; and &lt;strong&gt;AWS Kinesis Data Firehose&lt;/strong&gt;. If you plan to take the &lt;strong&gt;AWS Solutions Architect Associate&lt;/strong&gt; exam, it's very important to understand the differences and use cases of AWS Kinesis services. &lt;/p&gt;

&lt;p&gt;During the tutorial, we will spin up an Amazon Linux 2 instance and install &lt;strong&gt;Kinesis Agent&lt;/strong&gt;. We will configure the Kinesis Agent to send randomly generated numbers from a Python script to AWS Kinesis Data Streams. AWS Kinesis Data Firehose will then route the ingested stream to an S3 bucket for retention and further analysis. &lt;/p&gt;

&lt;p&gt;The goal of this tutorial is to build a solid working knowledge of how to create AWS Kinesis Data Streams and AWS Kinesis Data Firehose, and how to put data into streams. The same concept can then be applied to application logs, IoT sensor data, system metrics, videos, audio, analytics and more. &lt;/p&gt;




&lt;h3&gt;
  
  
  Architectural Diagram:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrnvd9irrxx6k1lc8c0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrnvd9irrxx6k1lc8c0g.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What is AWS Kinesis Data Streams?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS Kinesis Data Streams&lt;/strong&gt; is an AWS service that ingests and processes data records from streams in real time. It provides accelerated data feed intake for application logs, metrics, videos, website clickstreams and more. For more information about AWS Kinesis Data Streams, please refer to &lt;a href="https://docs.aws.amazon.com/streams/latest/dev/introduction.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is AWS Kinesis Data Firehose:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS Kinesis Data Firehose&lt;/strong&gt; is an AWS managed service that delivers real-time streaming data records to a variety of destinations. These destinations could be:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Examples of AWS Kinesis Data Firehose Destinations&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;S3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Splunk&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;New Relic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Datadog&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenSearch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LogicMonitor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MongoDB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;any custom HTTP endpoint&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;more options...&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To read more about AWS Kinesis Data Firehose, please refer to &lt;a href="https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is AWS Kinesis Agent?
&lt;/h2&gt;

&lt;p&gt;It's a standalone Java-based application. It collects monitored files/data/logs and sends them to Kinesis Data Streams. To read more about Kinesis Agent functionalities, please refer to &lt;a href="https://docs.aws.amazon.com/streams/latest/dev/writing-with-agents.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;
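
&lt;p&gt;To make the agent concrete, here is a sketch of the kind of &lt;code&gt;agent.json&lt;/code&gt; flow we will configure later in this tutorial (the file path is illustrative; the stream name matches the one we create below):&lt;/p&gt;

```python
# Sketch of an /etc/aws-kinesis/agent.json config: one flow that tails a
# log file and puts each new line into the stream-1 data stream.
# The filePattern path is a hypothetical example, not the tutorial's exact path.
import json

agent_config = {
    "cloudwatch.emitMetrics": True,
    "flows": [{
        "filePattern": "/var/log/mylog/logfile.log*",  # hypothetical path
        "kinesisStream": "stream-1",
    }],
}

print(json.dumps(agent_config, indent=2))
```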




&lt;h2&gt;
  
  
  Tutorial Steps:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create an S3 bucket&lt;/li&gt;
&lt;li&gt;Create a Kinesis data stream&lt;/li&gt;
&lt;li&gt;Create a Kinesis data firehose&lt;/li&gt;
&lt;li&gt;Spin up an AWS EC2 instance&lt;/li&gt;
&lt;li&gt;Create an IAM role and attach it to the EC2 instance&lt;/li&gt;
&lt;li&gt;SSH into the EC2 instance&lt;/li&gt;
&lt;li&gt;Install an AWS Kinesis Agent&lt;/li&gt;
&lt;li&gt;Create a folder for the randomly generated numbers Python code&lt;/li&gt;
&lt;li&gt;Create a logfile.log to host the randomly generated numbers&lt;/li&gt;
&lt;li&gt;Configure the AWS Kinesis Agent to monitor the logfile.log&lt;/li&gt;
&lt;li&gt;Test the Python code and tail the logfile.log&lt;/li&gt;
&lt;li&gt;Run the Kinesis Agent&lt;/li&gt;
&lt;li&gt;Tail the Kinesis Agent log file&lt;/li&gt;
&lt;li&gt;Monitor Kinesis Data Streams dashboard&lt;/li&gt;
&lt;li&gt;Monitor Kinesis Data Firehose dashboard&lt;/li&gt;
&lt;li&gt;Download and open the file from the S3 bucket&lt;/li&gt;
&lt;li&gt;Do your victory dance 😉&lt;/li&gt;
&lt;/ol&gt;
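
&lt;p&gt;Steps 8-11 revolve around a small Python script that appends random numbers to &lt;code&gt;logfile.log&lt;/code&gt;. The actual script appears in a later step; a minimal stand-in could look like this (the path and count are illustrative):&lt;/p&gt;

```python
# Minimal sketch of a generator that appends random integers to
# logfile.log, one per line, for the Kinesis Agent to pick up.
import random

def write_random_numbers(path, count):
    """Append `count` random integers to the log file, one per line."""
    with open(path, "a") as log:
        for _ in range(count):
            log.write(str(random.randint(0, 100)) + "\n")

write_random_numbers("logfile.log", 5)
```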




&lt;h2&gt;
  
  
  Step 1: Create an S3 bucket
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;From the S3 console, click on &lt;strong&gt;Create bucket&lt;/strong&gt;. Then, give a bucket a unique name and click on &lt;strong&gt;Create bucket&lt;/strong&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  Make sure you select &lt;strong&gt;us-east-1&lt;/strong&gt; as the region.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkljfoit9d02p3x7ez6av.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkljfoit9d02p3x7ez6av.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmiueyeh94j6agxb39s3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmiueyeh94j6agxb39s3g.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
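
&lt;p&gt;If you prefer the SDK, the same bucket can be described with a single parameter. A boto3-style sketch (the bucket name is hypothetical), including the &lt;strong&gt;us-east-1&lt;/strong&gt; quirk worth knowing about:&lt;/p&gt;

```python
# Sketch of boto3 create_bucket parameters. In us-east-1 (the API default)
# no CreateBucketConfiguration is passed; passing one with us-east-1 as the
# LocationConstraint is actually rejected by the API.
bucket_request = {
    "Bucket": "my-kinesis-demo-bucket-12345",  # hypothetical; must be globally unique
}

# For any other region you would add, for example:
# bucket_request["CreateBucketConfiguration"] = {"LocationConstraint": "eu-west-1"}
# Then: boto3.client("s3").create_bucket(**bucket_request)
```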




&lt;h2&gt;
  
  
  2 Create a Kinesis Data Stream
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;From the AWS Kinesis Services, select &lt;strong&gt;Kinesis Data Streams&lt;/strong&gt;. It should be selected by default and then click &lt;strong&gt;Create data stream&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3g3obbgtx45izc9twmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3g3obbgtx45izc9twmv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Type in &lt;strong&gt;stream-1&lt;/strong&gt; for the name of the data stream and then leave it as default, &lt;strong&gt;On-demand&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create data stream&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
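
&lt;p&gt;The console choices above map to a small boto3-style request; here is a sketch:&lt;/p&gt;

```python
# Sketch of a Kinesis create_stream request matching the console choices:
# stream name stream-1 with On-demand capacity mode (no shard count needed).
stream_request = {
    "StreamName": "stream-1",
    "StreamModeDetails": {"StreamMode": "ON_DEMAND"},
}

# With boto3: boto3.client("kinesis").create_stream(**stream_request)
```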

&lt;p&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6aqea34qj7isd5l72uhq.png" alt="Image description"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3 Create a Kinesis Data Firehose
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;While we are still on the AWS Kinesis Services console, let's select &lt;strong&gt;Delivery streams&lt;/strong&gt; from the left-hand side menu as shown below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdgyz2pyv84dssx7ajnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdgyz2pyv84dssx7ajnj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on &lt;strong&gt;Create delivery stream&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;For the source, select &lt;strong&gt;Amazon Kinesis Data Streams&lt;/strong&gt;, and for the Destination, select &lt;strong&gt;Amazon S3&lt;/strong&gt; as shown below. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvwh0n5kcrfbav41v3a8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvwh0n5kcrfbav41v3a8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Source settings&lt;/strong&gt;, click &lt;strong&gt;Browse&lt;/strong&gt; and choose the name of the Kinesis data stream, which is &lt;strong&gt;stream-1&lt;/strong&gt;. Then, click &lt;strong&gt;Choose&lt;/strong&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbv70dogp6fmg7urvuio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbv70dogp6fmg7urvuio.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Delivery stream name&lt;/strong&gt; is generated randomly by AWS. Let's leave it as is. &lt;/li&gt;
&lt;li&gt;Skip the &lt;strong&gt;Transform and convert records&lt;/strong&gt; section.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Destination settings&lt;/strong&gt;, click on &lt;strong&gt;Browse&lt;/strong&gt; and then choose the S3 bucket that we created in step 1.&lt;/li&gt;
&lt;li&gt;Leave &lt;strong&gt;Dynamic partitioning&lt;/strong&gt; at its default settings. &lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Buffer hints, compression and encryption&lt;/strong&gt;, lower the &lt;strong&gt;Buffer interval&lt;/strong&gt; to 60 seconds instead of the 300 seconds default value.&lt;/li&gt;
&lt;li&gt;Skip &lt;strong&gt;Advanced settings&lt;/strong&gt; and click &lt;strong&gt;Create delivery stream&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
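&lt;p&gt;Once the console reports success, you can optionally confirm the delivery stream from the CLI as well (again assuming the AWS CLI is configured; substitute the generated name the console assigned):&lt;/p&gt;

```shell
# Optional check: list Firehose delivery streams in the current region.
aws firehose list-delivery-streams

# Inspect one stream in detail, using the generated name from the console.
aws firehose describe-delivery-stream \
    --delivery-stream-name YOUR_DELIVERY_STREAM_NAME
```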

&lt;p&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h1b10be42w2jmpnqvlj.png" alt="Image description"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4 Spin up an AWS Linux 2 instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;From the EC2 console, click on &lt;strong&gt;Launch Instance&lt;/strong&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F306v9h2rcaoo0agce2nm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F306v9h2rcaoo0agce2nm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 1: Choose an Amazon Machine Image (AMI): select the first AMI on the list, &lt;strong&gt;Amazon Linux 2 AMI&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Step 2: Choose an Instance Type: keep the default selection, which is &lt;strong&gt;t2.micro&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Step 3: Configure Instance Details: keep the default selections.&lt;/li&gt;
&lt;li&gt;Step 4: Add Storage: keep the default selection.&lt;/li&gt;
&lt;li&gt;Step 5: Add Tags: click on &lt;strong&gt;click to add a Name tag&lt;/strong&gt; and enter the value &lt;strong&gt;Kinesis-Agent&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Step 6: Configure Security Group: &lt;/p&gt;

&lt;p&gt;A. Security group name: Kinesis-Agent-SG&lt;/p&gt;

&lt;p&gt;B. Description: Opens port 22 to my local IP&lt;/p&gt;

&lt;p&gt;C. For the existing SSH rule, change the source to &lt;strong&gt;My IP&lt;/strong&gt;. &lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx36pnkd63egyd4o12zla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx36pnkd63egyd4o12zla.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 7: Review Instance Launch: review and click on &lt;strong&gt;Launch&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the modal screen, select &lt;strong&gt;create a new key pair&lt;/strong&gt; and name it &lt;strong&gt;kinesis-agent&lt;/strong&gt;. Then, click on &lt;strong&gt;Download Key Pair&lt;/strong&gt; to save the key pair to your local device. Finally, click &lt;strong&gt;Launch instances&lt;/strong&gt; to spin up the EC2 instance. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpd5epk4wajz35kngxpm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpd5epk4wajz35kngxpm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
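&lt;p&gt;For completeness, the console steps above have a CLI equivalent. The sketch below is optional and hedged: it assumes the security group and key pair already exist, and the AMI ID placeholder must be replaced with the Amazon Linux 2 AMI ID for your region:&lt;/p&gt;

```shell
# Optional CLI equivalent of the launch wizard above.
aws ec2 run-instances \
    --image-id AMAZON_LINUX_2_AMI_ID \
    --instance-type t2.micro \
    --key-name kinesis-agent \
    --security-groups Kinesis-Agent-SG \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Kinesis-Agent}]'
```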




&lt;h2&gt;
  
  
  5 Create an IAM role
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;From IAM console, select &lt;strong&gt;Policies&lt;/strong&gt; from the side menu. &lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Create policy&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Create policy&lt;/strong&gt; screen, select the &lt;strong&gt;JSON&lt;/strong&gt; tab and paste the policy below into the JSON editor. Then, click &lt;strong&gt;Next: Tags&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:PutMetricData",
                "kinesis:PutRecords"
            ],
            "Resource": "*"
        }
    ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmu2u5j82y8aqbz6uurka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmu2u5j82y8aqbz6uurka.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  The permission the Kinesis Agent requires to put data into a stream is &lt;strong&gt;PutRecords&lt;/strong&gt;. The &lt;strong&gt;PutMetricData&lt;/strong&gt; permission is needed for CloudWatch to publish metric data points when CloudWatch monitoring is enabled for the Kinesis Agent. It's best practice to follow the principle of least privilege. Please refer to the &lt;a href="https://docs.aws.amazon.com/streams/latest/dev/writing-with-agents.html#prereqs" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; for more information. &lt;/p&gt;
&lt;/blockquote&gt;
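&lt;p&gt;Before pasting, it can help to sanity-check the policy document locally. This short Python sketch (a local check, not an AWS API call) parses the JSON and lists the actions it grants:&lt;/p&gt;

```python
import json

# The IAM policy from the step above, held as a string for a local check.
POLICY = """
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": ["cloudwatch:PutMetricData", "kinesis:PutRecords"],
            "Resource": "*"
        }
    ]
}
"""

policy = json.loads(POLICY)  # raises ValueError if the JSON is malformed
actions = {action for stmt in policy["Statement"] for action in stmt["Action"]}
print(sorted(actions))  # ['cloudwatch:PutMetricData', 'kinesis:PutRecords']
```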

&lt;ul&gt;
&lt;li&gt;Skip the optional &lt;strong&gt;Add tags&lt;/strong&gt; screen by clicking &lt;strong&gt;Next: Review&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On &lt;strong&gt;Review policy&lt;/strong&gt;:
A. Name: Kinesis-agent-policy
B. Description: This policy allows Kinesis agent installed on an EC2 to put data points into AWS Kinesis Data Streams. It also allows CloudWatch to publish metrics for the agent.
C. Click on &lt;strong&gt;Create policy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycnge0eed3payqvw863y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycnge0eed3payqvw863y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the IAM console, click on &lt;strong&gt;Roles&lt;/strong&gt; and click &lt;strong&gt;Create role&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Select trusted entity&lt;/strong&gt; screen, choose &lt;strong&gt;AWS service&lt;/strong&gt; and for &lt;strong&gt;Use case&lt;/strong&gt; select &lt;strong&gt;EC2&lt;/strong&gt;. Then, click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3yqqdlaj6j2afgpb9jf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3yqqdlaj6j2afgpb9jf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the &lt;strong&gt;Add permissions&lt;/strong&gt; screen, filter policies by typing &lt;strong&gt;Kinesis-agent-policy&lt;/strong&gt; and press enter. This is the policy we created earlier. Select the policy as shown below and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqd0xm80exdvwqdyjp43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqd0xm80exdvwqdyjp43.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;On &lt;strong&gt;Name, review and create&lt;/strong&gt; screen:&lt;/p&gt;

&lt;p&gt;A. Role name: kinesis-agent-role&lt;/p&gt;

&lt;p&gt;B. Description: This role allows the EC2 instance that has Kinesis agent installed to call AWS Kinesis Data Streams. &lt;/p&gt;

&lt;p&gt;C. Scroll to the bottom and click on &lt;strong&gt;Create role&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsz70t5fesseng2buwol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsz70t5fesseng2buwol.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Head to the EC2 console to attach the newly created IAM role to the EC2 instance, as shown in the image below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqctjzwhh0zxi21nisiwf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqctjzwhh0zxi21nisiwf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the &lt;strong&gt;IAM role&lt;/strong&gt; drop-down menu, select &lt;strong&gt;kinesis-agent-role&lt;/strong&gt; and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fye4n7w5ikn2yvg4krllm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fye4n7w5ikn2yvg4krllm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  6 - 12: Install and Configure AWS Kinesis Agent
&lt;/h2&gt;

&lt;p&gt;6- SSH into the EC2 instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's change the key pair's permissions. This step applies to macOS and Linux (not Windows). From within the folder where you saved the key:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

chmod 400 Kinesis-agent.cer


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Now, we are ready to SSH into the EC2 instance:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

ssh -i Kinesis-agent.cer ec2-user@'instance-public-ip-address'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dghjrm60ggscz81u1xb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dghjrm60ggscz81u1xb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;7- Install the AWS Kinesis Agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before we install the Kinesis Agent, let's update the instance as a best practice:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo yum update


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Now, let's install the Kinesis Agent:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo yum install -y aws-kinesis-agent


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju6fxj6yrzwgfgisvljl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju6fxj6yrzwgfgisvljl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;8- Create a folder for the Python code that generates random numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's &lt;strong&gt;cd&lt;/strong&gt; into the &lt;strong&gt;/opt&lt;/strong&gt; directory, create a folder to host our Python code, and then cd into it:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd /opt/
sudo mkdir stream-1 
cd stream-1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotttnx9rb6pxxxhm7gza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotttnx9rb6pxxxhm7gza.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We will open the &lt;strong&gt;nano&lt;/strong&gt; text editor to create a file for our Python code:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo nano stream-1.py


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Copy the Python code below and paste it into the nano text editor:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;calendar&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;

&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basicConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;logfile.log&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DEBUG&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;%(asctime)s %(message)s&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;



&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;put_to_stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;random&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;id&lt;/span&gt;
              &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Putting to stream: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;calendar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;timegm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;timetuple&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;stream-1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

    &lt;span class="nf"&gt;put_to_stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# wait for 5 second
&lt;/span&gt;    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  The Python code generates random numbers between 1 and 100 and appends a value to &lt;strong&gt;logfile.log&lt;/strong&gt; every 5 seconds. We will create logfile.log next; it must live in the same folder as the code. The code timestamps each value and tags it with the id &lt;strong&gt;stream-1&lt;/strong&gt;. Once the Kinesis Agent is running, we will verify that our architecture is configured correctly by finding id = stream-1 in the stream's destination, our S3 bucket. Tagging the data records this way makes identifying and distinguishing them efficient, especially when multiple producers write to the stream. &lt;/p&gt;
&lt;/blockquote&gt;
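&lt;p&gt;One design note on the script above: it logs &lt;strong&gt;str(payload)&lt;/strong&gt;, which writes Python dict syntax (single quotes) rather than strict JSON. The agent ships the lines as raw text either way, but if downstream consumers will parse the records as JSON, serializing with json.dumps is a small optional change, sketched here:&lt;/p&gt;

```python
import json

payload = {'random': '42', 'timestamp': '1711922003', 'id': 'stream-1'}

# What the tutorial's script writes: Python's repr, with single quotes.
repr_line = str(payload)

# A JSON-friendly alternative: double quotes, parseable by any consumer.
json_line = json.dumps(payload)
parsed = json.loads(json_line)  # round-trips cleanly

print(repr_line)
print(json_line)
```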

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foritr8wes4nj6biv3tb2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foritr8wes4nj6biv3tb2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To save the Python code using nano:&lt;/p&gt;

&lt;p&gt;A. Hold down &lt;strong&gt;control&lt;/strong&gt; and press &lt;strong&gt;x&lt;/strong&gt;.&lt;br&gt;
B. Press &lt;strong&gt;y&lt;/strong&gt;.&lt;br&gt;
C. Press &lt;strong&gt;enter&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;9- Create logfile.log to hold our randomly generated numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now, we will create the &lt;strong&gt;logfile.log&lt;/strong&gt; file that our Python code logs to:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo touch logfile.log


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Let's confirm that we have two files in the folder by running the &lt;strong&gt;ls&lt;/strong&gt; Linux command:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

ls


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6iwzdlzz7b37a2bbujx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6iwzdlzz7b37a2bbujx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;10- Configure the AWS Kinesis Agent to monitor the logfile.log:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;agent.json&lt;/strong&gt; file is the Kinesis Agent's configuration file. Let's use the &lt;strong&gt;cat&lt;/strong&gt; command to view its contents. By default, the file resides in the following directory:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo cat /etc/aws-kinesis/agent.json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;The file contains default values that were generated when the agent was installed. We only need the configuration shown below, so let's delete the file first and then create a fresh one:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo rm /etc/aws-kinesis/agent.json 
sudo nano /etc/aws-kinesis/agent.json 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Now, let's copy the JSON below, paste it into the nano text editor, and save it:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"cloudwatch.emitMetrics"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"kinesis.endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"https://kinesis.us-east-1.amazonaws.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"flows"&lt;/span&gt;&lt;span class="p"&gt;:[&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"filePattern"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"/opt/stream-1/logfile.log"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"kinesisStream"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"stream-1"&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;  


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  If you changed the location of the Python code, the name of the log file, or the name of the stream, you will have to update the agent.json file above accordingly. If you followed the names used in this tutorial, no modifications are needed.  &lt;/p&gt;
&lt;/blockquote&gt;
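&lt;p&gt;Since agent.json is plain JSON, a stray comma or quote will stop the agent from starting. If you want to double-check the file before going further, here is a small local validation sketch:&lt;/p&gt;

```python
import json

# The agent.json contents from the step above, as a string for a local check.
AGENT_JSON = """
{
    "cloudwatch.emitMetrics": true,
    "kinesis.endpoint": "https://kinesis.us-east-1.amazonaws.com",
    "flows": [
        {
            "filePattern": "/opt/stream-1/logfile.log",
            "kinesisStream": "stream-1"
        }
    ]
}
"""

config = json.loads(AGENT_JSON)  # fails loudly if the JSON is malformed
for flow in config["flows"]:
    # Each flow maps a file pattern on disk to a destination stream.
    print(flow["filePattern"], "->", flow["kinesisStream"])
```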

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5brzkhwv6lna3bdmhnwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5brzkhwv6lna3bdmhnwb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;11- Test the Python code and tail the logfile.log.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prior to running the Kinesis Agent, we need to change the owner of the &lt;strong&gt;/opt/stream-1&lt;/strong&gt; folder. This allows the agent to monitor the log file inside it, &lt;strong&gt;logfile.log&lt;/strong&gt; to be specific:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo chown aws-kinesis-agent-user:aws-kinesis-agent-user -R /opt/stream-1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Let's run the &lt;strong&gt;ll&lt;/strong&gt; command to confirm that the folder and its files now belong to the agent user.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcez30mx0ixwxmsthf5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcez30mx0ixwxmsthf5s.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Also, before starting the agent, we need to verify that our Python code is logging data into logfile.log. Let's start the code now, running it in the background with the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo python /opt/stream-1/stream-1.py &amp;amp;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  Capture the 'PID' number; it's the process ID assigned to the Python process, and we need it to stop the code running in the background. You can also find the PID later with 'sudo ps -x | grep python'.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funiduebwf4snb5l5bxj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funiduebwf4snb5l5bxj3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For now, leave the code running. If you later want to stop it, run the following command:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo kill -9 'add PID here'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Another useful Linux command lists all running processes:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

ps -e


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Let's tail the &lt;strong&gt;logfile.log&lt;/strong&gt; to see if our code is logging every 5 seconds:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo tail -f /opt/stream-1/logfile.log


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; you should see a new row of data every 5 seconds, as shown below. To exit the tail command, press &lt;strong&gt;Ctrl&lt;/strong&gt;+&lt;strong&gt;C&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15rklypzw93rglkhog0p.png" alt="Image description"&gt;
&lt;/h2&gt;
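&lt;p&gt;As a reminder of what the agent will pick up, here is a minimal sketch of what such a logging script can look like: it appends a record carrying a UTC timestamp, the stream name, and a random number to logfile.log every 5 seconds. The exact field layout is an assumption for illustration; use the script you created earlier.&lt;/p&gt;

```python
import random
import time
from datetime import datetime, timezone

STREAM_ID = "stream-1"                   # the id we expect to see in the S3 files
LOG_PATH = "/opt/stream-1/logfile.log"   # the file the Kinesis Agent monitors


def make_record(stream_id: str) -> str:
    """Build one log line: UTC timestamp, stream id, and a random number."""
    timestamp = datetime.now(timezone.utc).isoformat()
    value = random.randint(0, 100)
    return f"{timestamp},{stream_id},{value}"


def run(interval: float = 5.0) -> None:
    """Append one record every `interval` seconds, flushing each line so
    that `tail -f` (and the agent) see it immediately."""
    with open(LOG_PATH, "a") as log:
        while True:
            log.write(make_record(STREAM_ID) + "\n")
            log.flush()
            time.sleep(interval)
```

&lt;p&gt;A script built around this would simply call run() at the bottom; launched in the background as shown earlier, it keeps producing one record every 5 seconds.&lt;/p&gt;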

&lt;p&gt;12- Run the Kinesis Agent.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now, we are ready to run the Kinesis Agent. Run the command below so the agent starts monitoring logfile.log and transmitting data records to our stream:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo service aws-kinesis-agent start


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Let's check the status of the agent:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo service aws-kinesis-agent status


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  if the agent started properly, we should see output similar to the image below.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsynh0w7rpk9dyfod28h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsynh0w7rpk9dyfod28h.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Other important agent commands:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo service aws-kinesis-agent status
sudo service aws-kinesis-agent restart
sudo service aws-kinesis-agent stop


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
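&lt;p&gt;For reference, the agent's behavior is driven by /etc/aws-kinesis/agent.json, which we configured in an earlier step. The flow that pairs our log file with the stream looks roughly like this (the stream name here is a placeholder; use your own):&lt;/p&gt;

```json
{
  "cloudwatch.emitMetrics": true,
  "flows": [
    {
      "filePattern": "/opt/stream-1/logfile.log",
      "kinesisStream": "my-data-stream"
    }
  ]
}
```

&lt;p&gt;If you edit agent.json, restart the agent with 'sudo service aws-kinesis-agent restart' for the change to take effect.&lt;/p&gt;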




&lt;p&gt;13- Tail the Kinesis Agent log file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's tail the Kinesis Agent log. It gives us valuable information about the agent's current status; we will inspect it for errors or permission messages. &lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt;  we are looking for confirmation that the agent has started sending data records to our stream, as shown in the image below. Notice that the agent log reports 26 records sent successfully to the destination, a clear indication that our Kinesis Data Stream is ingesting the records. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndm015mrlc6spu2s7m1e.png" alt="Image description"&gt;
&lt;/h2&gt;

&lt;p&gt;14- Monitor Kinesis Data Streams dashboard.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's head to the AWS Kinesis console. From our Kinesis Data Stream, click on &lt;strong&gt;Monitoring&lt;/strong&gt; and navigate to the &lt;strong&gt;Get records - sum(Count)&lt;/strong&gt; chart. As shown below, the data records have been ingested by the stream successfully. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ov9zi38mushe64muvx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ov9zi38mushe64muvx1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9o23063vfmwhb089i69.png" alt="Image description"&gt;
&lt;/h2&gt;

&lt;p&gt;15- Monitor Kinesis Data Firehose dashboard.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now, let's head to our delivery stream (AWS Kinesis Data Firehose), click on &lt;strong&gt;Monitoring&lt;/strong&gt;, and navigate to the &lt;strong&gt;Records read from Kinesis Data Streams (Sum)&lt;/strong&gt; chart as well as the &lt;strong&gt;Delivery to Amazon S3 success&lt;/strong&gt; chart. As shown below, Firehose has routed the real-time data to our S3 bucket successfully. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcthqt285oj97dbcaivl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcthqt285oj97dbcaivl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cml0n21wxoz9yo7yu2c.png" alt="Image description"&gt;
&lt;/h2&gt;

&lt;p&gt;16- Download and open the file from the S3 bucket.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Now that we have confirmed the data records are flowing through our stream and Firehose, we should find the files in our S3 bucket. Let's download one of the files and open it on our local device. We should confirm that the logged data carries the &lt;strong&gt;stream-1&lt;/strong&gt; id, the id we defined in our random-number Python code. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dstifjm6xzgrarsvd4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dstifjm6xzgrarsvd4j.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3a8kjtnuy7o9b3go4uki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3a8kjtnuy7o9b3go4uki.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhydx6g24biyz3jybhrgt.png" alt="Image description"&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;If you got this far, I would like to congratulate you on your achievement. We have created an AWS Kinesis Data Stream and an AWS Kinesis Data Firehose. We have also employed a Python script to randomly generate numbers and tag each record with a timestamp and stream name at a 5-second interval. Then, we installed the Kinesis Agent on an EC2 instance and configured it to monitor our log file. We confirmed the agent is running successfully and that data records are being ingested by our stream. Lastly, we verified that AWS Kinesis Data Firehose routed the data records to our S3 bucket destination by downloading the files and inspecting them for the stream id. &lt;/p&gt;

&lt;p&gt;I hope this tutorial adds greatly to your learning. Having gained a solid understanding of AWS Kinesis Data Streams, AWS Kinesis Data Firehose, and the AWS Kinesis Agent is a triumph. I wish you all the best in applying this conceptual and practical knowledge to application logs, IoT sensor data, system metrics, video, audio, analytics, and more. Now, off you go; the sky is the limit. &lt;/p&gt;

</description>
      <category>kinesis</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>awscommunitybuilder</category>
    </item>
    <item>
      <title>How to Create and Send AWS CloudWatch Alarm Notifications to PagerDuty</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Wed, 16 Mar 2022 02:46:47 +0000</pubDate>
      <link>https://dev.to/omarcloud20/how-to-create-and-send-aws-cloudwatch-alarm-notifications-to-pagerduty-1n4c</link>
      <guid>https://dev.to/omarcloud20/how-to-create-and-send-aws-cloudwatch-alarm-notifications-to-pagerduty-1n4c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;AWS CloudWatch provides monitoring and observability services for AWS resources. It is built with Site Reliability Engineers, DevOps Engineers, and developers in mind. CloudWatch collects data and surfaces insights; it can alert on and help resolve operational issues, and it gives system-wide visibility into resource utilization. For more information about &lt;a href="https://aws.amazon.com/cloudwatch/" rel="noopener noreferrer"&gt;AWS CloudWatch&lt;/a&gt;, please refer to the AWS documentation. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.pagerduty.com/" rel="noopener noreferrer"&gt;PagerDuty&lt;/a&gt;&lt;/strong&gt; is an on-call management and incident response system. The main goal of this tutorial is to walk you through how to configure a CloudWatch alarm, set a threshold, and send alarm notifications that create and resolve incidents in PagerDuty. I chose 'AutoScaling Group Terminating Instances' as the example for this tutorial. When an EC2 instance in an Auto Scaling group is terminated for any unforeseen reason, a CloudWatch alarm is triggered and, as a result, an incident is created in PagerDuty for the on-call engineer to inspect and resolve. &lt;/p&gt;

&lt;p&gt;With that being said, the same concept and steps can be applied to create other CloudWatch alarm notifications to PagerDuty for any CloudWatch metrics in AWS namespaces or custom namespaces. Please, refer to &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/aws-services-cloudwatch-metrics.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; about AWS services that publish CloudWatch metrics. &lt;/p&gt;

&lt;p&gt;Alright, since there is a lot of material to cover, the tutorial is short, succinct, and to the point. Enjoy it. 😉&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architectural Diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvov032fia5eft3d9nq1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvov032fia5eft3d9nq1x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Creating SNS Topic
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;From the SNS console, select &lt;strong&gt;Topics&lt;/strong&gt;. Then, click on &lt;strong&gt;Create topic&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj1nky4n6tlpxvawmmpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj1nky4n6tlpxvawmmpf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Standard&lt;/strong&gt; for the type of the SNS topic and give it a name. Then, click &lt;strong&gt;Create topic&lt;/strong&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tiep9i3jzyiaeyreeib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tiep9i3jzyiaeyreeib.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we have successfully created an SNS topic for our AutoScaling Group Alarms (or any other alarm of your choosing). We will subscribe to the topic once we create an alarm. &lt;/p&gt;
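&lt;p&gt;If you prefer the AWS CLI, the console steps above come down to a single command; the topic name below is an example, so substitute your own:&lt;/p&gt;

```shell
# Create a standard SNS topic; the command prints the TopicArn,
# which we will reference when configuring the alarm and the subscription.
aws sns create-topic --name autoscaling-group-alarms
```

&lt;p&gt;Keep the printed ARN handy for the next steps.&lt;/p&gt;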




&lt;h2&gt;
  
  
  Step 2: Creating a CloudWatch Alarm:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;From the CloudWatch console, click on &lt;strong&gt;All alarms&lt;/strong&gt; under &lt;strong&gt;Alarms&lt;/strong&gt; section. Then, click &lt;strong&gt;Create alarm&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle84upi4qu5hpjs94kkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle84upi4qu5hpjs94kkw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On &lt;strong&gt;Specify metric and conditions&lt;/strong&gt;, click on &lt;strong&gt;Select metric&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;In the search box under &lt;strong&gt;metrics&lt;/strong&gt;, type in &lt;strong&gt;autoscaling&lt;/strong&gt; and hit enter.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpsf3n6umrmsak68cno4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpsf3n6umrmsak68cno4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Auto Scaling &amp;gt; Group Metrics&lt;/strong&gt; from the options.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll down to select &lt;strong&gt;GroupTerminatingInstances&lt;/strong&gt; for your AutoScaling group. In my case, the name of my AutoScaling group is &lt;strong&gt;myAutoScaling&lt;/strong&gt;. Then, click on &lt;strong&gt;Select metric&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59umgqgvhzjixpacen7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59umgqgvhzjixpacen7p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;strong&gt;Specify metric and conditions&lt;/strong&gt; screen is where we would customize and configure the metric alarm by specifying thresholds, period and other conditions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3e55nbp3vhflssj22fq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3e55nbp3vhflssj22fq7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A. Change the period to 1 minute.&lt;/p&gt;

&lt;p&gt;B. Define the threshold value as &lt;strong&gt;0&lt;/strong&gt;; I would like to be alerted whenever an EC2 instance is terminated within my Auto Scaling group. &lt;/p&gt;

&lt;p&gt;C. The &lt;strong&gt;Datapoints to alarm&lt;/strong&gt; is 1 out of 1.&lt;/p&gt;

&lt;p&gt;D. Set &lt;strong&gt;Missing data treatment&lt;/strong&gt; to &lt;strong&gt;Treat missing data as ignore&lt;/strong&gt;. This is fine for a dev environment, but it's not recommended for prod. No further changes are needed; nevertheless, you may configure the alert however you like. Next, click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9q58uvgx0wm4nye4hmwi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9q58uvgx0wm4nye4hmwi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On &lt;strong&gt;Configure actions&lt;/strong&gt; screen, leave the &lt;strong&gt;Alarm state trigger&lt;/strong&gt; to default which is &lt;strong&gt;In alarm&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Select an SNS topic&lt;/strong&gt;, choose &lt;strong&gt;Select an existing SNS topic&lt;/strong&gt;. If you insert the cursor into the &lt;strong&gt;Send a notification to&lt;/strong&gt; textbox, you could select the SNS topic that we created in step 1. But, if it's not showing up for any reason, paste the &lt;strong&gt;Amazon Resource Number (ARN)&lt;/strong&gt; for your SNS topic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tvwyoxagqtghky3ko6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tvwyoxagqtghky3ko6d.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; We will add a second &lt;strong&gt;Alarm state trigger&lt;/strong&gt; by clicking on &lt;strong&gt;Add notification&lt;/strong&gt;. Note that we will have one trigger to send notifications for the &lt;strong&gt;In alarm&lt;/strong&gt; state and a second trigger, on the &lt;strong&gt;OK&lt;/strong&gt; state, to resolve the first alert. Therefore, select &lt;strong&gt;OK&lt;/strong&gt; and choose the same SNS topic as shown below. Then, click &lt;strong&gt;Next&lt;/strong&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dgyj55phjw0v7vhe4fd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dgyj55phjw0v7vhe4fd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Give the alarm a name and description. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z75m7cig2qe1u9yi91d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z75m7cig2qe1u9yi91d.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Finally, review and click on &lt;strong&gt;Create alarm&lt;/strong&gt;. CloudWatch will start gathering data about the metric, as shown in the &lt;strong&gt;State&lt;/strong&gt; column. It will take a minute or two for the alarm to reach the &lt;strong&gt;OK&lt;/strong&gt; state. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb3srii7buf44hdm87rx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb3srii7buf44hdm87rx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As of right now, we have successfully created a CloudWatch alarm for AutoScaling Group Terminating Instances. Any terminated EC2 instance within the specified AutoScaling group will trigger the alarm. &lt;/p&gt;
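&lt;p&gt;For the CLI-inclined, the alarm configured above can be expressed in one command. The alarm name, Auto Scaling group name, and topic ARN below are examples taken from this walkthrough; substitute your own values:&lt;/p&gt;

```shell
# Example ARN; use the one printed when you created the topic in step 1
TOPIC_ARN="arn:aws:sns:us-east-1:123456789012:autoscaling-group-alarms"

# Mirrors the console settings: Sum over a 60-second period, threshold 0,
# 1 of 1 datapoints, missing data ignored, and the same SNS topic
# notified on both the ALARM and OK transitions.
aws cloudwatch put-metric-alarm \
  --alarm-name asg-terminating-instances \
  --namespace "AWS/AutoScaling" \
  --metric-name GroupTerminatingInstances \
  --dimensions Name=AutoScalingGroupName,Value=myAutoScaling \
  --statistic Sum \
  --period 60 \
  --evaluation-periods 1 \
  --datapoints-to-alarm 1 \
  --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --treat-missing-data ignore \
  --alarm-actions "$TOPIC_ARN" \
  --ok-actions "$TOPIC_ARN"
```

&lt;p&gt;Putting the alarm in code like this also makes it easy to recreate in another account or region.&lt;/p&gt;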




&lt;h2&gt;
  
  
  Step 3: Creating PagerDuty - CloudWatch Integration:
&lt;/h2&gt;

&lt;p&gt;According to &lt;strong&gt;PagerDuty&lt;/strong&gt;, there are two ways to integrate AWS CloudWatch:&lt;/p&gt;

&lt;p&gt;A. Integrate with a PagerDuty Event Rule.&lt;/p&gt;

&lt;p&gt;B. Integrate with a PagerDuty Service. &lt;/p&gt;

&lt;p&gt;For this tutorial, we will utilize PagerDuty service as a method of CloudWatch integration. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the PagerDuty console, click on &lt;strong&gt;Services&lt;/strong&gt; to locate the service you would like to add the CloudWatch integration to. Then, click on the service name. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4yhz8hyzqix4462fqb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4yhz8hyzqix4462fqb8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Integrations&lt;/strong&gt; and click on &lt;strong&gt;Add an Integration&lt;/strong&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9k3ct4z5likrejbiljp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9k3ct4z5likrejbiljp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Type in &lt;strong&gt;CloudWatch&lt;/strong&gt; in the search box under &lt;strong&gt;Select the integration(s) you use to send alerts to this service&lt;/strong&gt;. Once AWS CloudWatch is selected, click &lt;strong&gt;Add&lt;/strong&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcx4rvs78ni65ij7ajsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcx4rvs78ni65ij7ajsl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy the &lt;strong&gt;Integration URL&lt;/strong&gt;. We will use this endpoint to subscribe to our SNS topic. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6oiove2uw9n8dx5aj3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6oiove2uw9n8dx5aj3x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far, we have successfully created a CloudWatch integration with our PagerDuty service. Now, it's time to head back to the AWS console, the SNS console to be specific. &lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Subscribing PagerDuty-CloudWatch Integration to SNS topic:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;On the AWS SNS console, click on the name of the SNS topic we have created in step 1. &lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Create subscription&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5xsqbqxpc69ktlfadwf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5xsqbqxpc69ktlfadwf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;HTTPS&lt;/strong&gt; as a protocol and paste the &lt;strong&gt;Integration URL/endpoint&lt;/strong&gt; that we saved from step 3. Then, click &lt;strong&gt;Create subscription&lt;/strong&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; make sure that &lt;strong&gt;Enable raw message delivery&lt;/strong&gt; is unchecked. It should be unchecked by default. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yqn7tw4oeib4rbtakhc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yqn7tw4oeib4rbtakhc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Refresh the page until the &lt;strong&gt;Status&lt;/strong&gt; is shown as &lt;strong&gt;Confirmed&lt;/strong&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F891afl7fqhjmtcwncxua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F891afl7fqhjmtcwncxua.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
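&lt;p&gt;The same subscription can be created from the AWS CLI. The topic ARN below is an example, and the endpoint must be the exact Integration URL you copied from PagerDuty in step 3:&lt;/p&gt;

```shell
TOPIC_ARN="arn:aws:sns:us-east-1:123456789012:autoscaling-group-alarms"   # example ARN
INTEGRATION_URL="https://events.pagerduty.com/integration/YOUR_INTEGRATION_KEY/enqueue"  # from step 3

# Subscribe the PagerDuty endpoint to the topic over HTTPS;
# raw message delivery stays disabled by default, as required here.
aws sns subscribe \
  --topic-arn "$TOPIC_ARN" \
  --protocol https \
  --notification-endpoint "$INTEGRATION_URL"
```

&lt;p&gt;PagerDuty typically confirms the subscription request automatically, which is why the status flips to Confirmed without manual action.&lt;/p&gt;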

&lt;p&gt;We have successfully created an SNS topic, a CloudWatch alarm and a PagerDuty integration for CloudWatch. We have also subscribed a PagerDuty integration endpoint to the SNS topic. Don't do your victory dance yet, let's test the alarm and the integration first. 😉 &lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: Testing CloudWatch Alarm to PagerDuty Integration:
&lt;/h2&gt;

&lt;p&gt;It has been a journey, and I would like to congratulate you on getting this far. It's the moment of truth. The time has come to test our integration, fingers crossed. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In order to trigger the CloudWatch alarm for AutoScaling Group Terminating Instances, we need to terminate an instance within our Auto Scaling group. I'm sure you're not doing this in a prod environment 😉 I will go ahead and terminate one of the two EC2 instances in my Auto Scaling group. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6vu2o21hqqkjltsmd8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6vu2o21hqqkjltsmd8k.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Let's head to the CloudWatch console to monitor the alarm's status and progress. As we can see in the image below, the alarm status is still &lt;strong&gt;OK&lt;/strong&gt;. The delay before the alarm goes off is due to the &lt;strong&gt;Health check grace period&lt;/strong&gt;, which defaults to 300 seconds (5 minutes). We can lower the grace period so the EC2 instance is deemed unhealthy more quickly. Another important term is the &lt;strong&gt;scaling cooldown&lt;/strong&gt; period, which prevents Auto Scaling groups from launching or terminating instances before the effects of previous activities are visible. We can control the cooldown period with different scaling policies; it also defaults to 300 seconds (5 minutes) and can be changed. If you would like to learn more about the scaling cooldown period, please refer to the &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;. If the alarm does not go off, lower the &lt;strong&gt;Health check grace period&lt;/strong&gt; for your Auto Scaling group from the default 300 seconds to 60 seconds. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4sqy3p02laod37waax1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4sqy3p02laod37waax1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once the instance was deemed unhealthy by the AutoScaling group, the CloudWatch alarm went off as shown below. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkg8dgiewiec03dqh1bs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkg8dgiewiec03dqh1bs.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We should have also received a PagerDuty incident notification by now. This was an indication that we had completed the integration properly. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: whether the PagerDuty notification arrives as an SMS, an email or a phone call depends on how we configured our escalation policy and notification preferences in PagerDuty. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0s1yceujks70u7mvdli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0s1yceujks70u7mvdli.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The last portion of testing is to wait for the AutoScaling group to replace this instance and for the CloudWatch alarm to send a notification for state &lt;strong&gt;OK&lt;/strong&gt;, which should resolve the PagerDuty incident automatically. &lt;/li&gt;
&lt;/ol&gt;
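&lt;p&gt;Instead of watching the console, you can also poll the alarm state from the CLI. The alarm name below is a placeholder; substitute the name you gave your terminating-instances alarm:&lt;/p&gt;

```shell
# Print the current state of the alarm (OK, ALARM, or INSUFFICIENT_DATA).
aws cloudwatch describe-alarms \
  --alarm-names "ASG-Terminating-Instances-Alarm" \
  --query 'MetricAlarms[].StateValue' \
  --output text
```

&lt;p&gt;Once this reports &lt;code&gt;OK&lt;/code&gt; again, the matching PagerDuty incident should resolve shortly after.&lt;/p&gt;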

&lt;ul&gt;
&lt;li&gt;The image below shows that the alarm has returned to the &lt;strong&gt;OK&lt;/strong&gt; state.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvzvy8smadh9hbqaeus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvzvy8smadh9hbqaeus.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The image below shows that the PagerDuty incident was resolved automatically. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokieu8wwslqxhqsbs658.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokieu8wwslqxhqsbs658.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;During this tutorial, we have successfully created an SNS topic and a CloudWatch alarm for the AutoScaling group Terminating Instances metric, and integrated them with the PagerDuty CloudWatch integration. &lt;br&gt;
We have tested the configuration and confirmed the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once an instance was terminated, the CloudWatch alarm went off and a PagerDuty incident was created. And the engineer on-call freaked out 😂 &lt;/li&gt;
&lt;li&gt;As soon as the AutoScaling group replaced the instance and it was deemed healthy, the CloudWatch alarm went from the &lt;strong&gt;In Alarm&lt;/strong&gt; state to the &lt;strong&gt;OK&lt;/strong&gt; state. As a result, the PagerDuty incident was automatically resolved and the engineer on-call went back to bed 😂&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thank you for taking this journey with me, and I hope it was beneficial to you. Oh, one last thing, now you can do your victory dance 😜 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Facegif.com%2Fwp-content%2Fgifs%2Fbanana-82.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Facegif.com%2Fwp-content%2Fgifs%2Fbanana-82.gif" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>GitLab Runner on Raspberry Pi 4 (Build, Push Docker images to Docker Hub using GitLab Runner on GitLab)</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Mon, 28 Feb 2022 04:21:48 +0000</pubDate>
      <link>https://dev.to/omarcloud20/gitlab-runner-on-raspberry-pi-4-build-push-docker-images-to-docker-hub-using-gitlab-runner-on-gitlab-7m3</link>
      <guid>https://dev.to/omarcloud20/gitlab-runner-on-raspberry-pi-4-build-push-docker-images-to-docker-hub-using-gitlab-runner-on-gitlab-7m3</guid>
      <description>&lt;h2&gt;
  
  
  What is GitLab Runner
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;GitLab Runner&lt;/code&gt; is an agent that runs &lt;code&gt;GitLab CI/CD (Continuous Integration/Continuous Deployment)&lt;/code&gt; jobs in a pipeline. It's heavily utilized in the world of DevOps to provision and configure infrastructure. The GitLab Runner can be installed as a binary on Linux, macOS or Windows. It can also be installed as a container.&lt;/p&gt;

&lt;p&gt;In this tutorial, I will walk through installing and configuring GitLab Runner as a container using a &lt;code&gt;Docker&lt;/code&gt; image on an RPI-4... yay! I will make it very swift to get you started, and you won't feel a thing. I will not bore you with details, but if there are useful links for further study, I will definitely throw them in. The goal is to get you started with GitLab Runners, and the rest is on you.&lt;/p&gt;

&lt;p&gt;As a bonus, we will run our first job of building docker images and push them to &lt;code&gt;Docker hub&lt;/code&gt;. Enough reading, let's get our hands dirty, I mean our keyboards 😉&lt;/p&gt;

&lt;p&gt;To learn more about GitLab Runners, refer to the &lt;a href="https://docs.gitlab.com/runner/" rel="noopener noreferrer"&gt;GitLab official documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Sensitive information will be pixelated or erased. This should not alter the quality of the tutorial.&lt;/p&gt;




&lt;h2&gt;
  
  
  Run a GitLab Runner Container on an RPI-4
&lt;/h2&gt;

&lt;p&gt;If you don't have Docker installed on your RPI-4, you may refer to my &lt;a href="https://omar2cloud.github.io/rasp/rpidock/" rel="noopener noreferrer"&gt;Docker on Ubuntu 20.04 Raspberry Pi 4&lt;/a&gt; tutorial.&lt;/p&gt;

&lt;p&gt;1- Let's create a persistent Docker volume for the runner. On your RPI-4 terminal, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create gitlab-runner-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; you can change the volume name &lt;code&gt;gitlab-runner-volume&lt;/code&gt; to any name of your choosing, but you should be consistent, as we will use the volume name to bind the container to the RPI-4 host.&lt;/p&gt;

&lt;p&gt;2- Run the below command to start the GitLab Runner container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name gitlab-runner --restart always --env TZ=US \
   -v  gitlab-runner-volume:/etc/gitlab-runner \
   -v /var/run/docker.sock:/var/run/docker.sock \
   gitlab/gitlab-runner:alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only parameters you have the option to change are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the name flag, &lt;code&gt;gitlab-runner&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;the volume name, &lt;code&gt;gitlab-runner-volume&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;the gitlab runner image, &lt;code&gt;gitlab/gitlab-runner:alpine&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;the env flag, &lt;code&gt;TZ=US&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
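&lt;p&gt;After starting the container, it's worth checking that it came up cleanly. A minimal sketch, assuming you kept the container name &lt;code&gt;gitlab-runner&lt;/code&gt; from the command above:&lt;/p&gt;

```shell
# Tail the runner container's logs for startup errors.
docker logs gitlab-runner

# Print the container's state; it should report "running".
docker inspect --format '{{.State.Status}}' gitlab-runner
```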

&lt;p&gt;For more information about the installation process, here is the link &lt;a href="https://docs.gitlab.com/runner/install/docker.html" rel="noopener noreferrer"&gt;GitLab's official documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;3- Capture the GitLab Runner token:&lt;/p&gt;

&lt;p&gt;We do need to head to our &lt;code&gt;GitLab&lt;/code&gt; account and grab the runner's token. If you don't have a GitLab account, you can create one for free. Here is the &lt;a href="https://about.gitlab.com/" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, we need to create a repository to host our project. From GitLab, let's click on &lt;code&gt;New project&lt;/code&gt;. On the Create new project page, select &lt;code&gt;Create blank project&lt;/code&gt;. Then, give the project a name and click &lt;code&gt;Create project&lt;/code&gt;. In my case, I named my repo &lt;code&gt;GitLab-Runner-RPI-4&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz20pmjuwsatu7xaoaly.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz20pmjuwsatu7xaoaly.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the repository, let's click on Settings, then CI/CD, and then click &lt;strong&gt;Expand&lt;/strong&gt; on the Runners section as shown below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksqmmmeo76siv59cricv.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksqmmmeo76siv59cricv.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Capture the token and save it in a notepad. We will need the token for the next step. &lt;/li&gt;
&lt;li&gt;Disable &lt;code&gt;Enable shared runners for this project&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r235b5sof2apo0a3z3m.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r235b5sof2apo0a3z3m.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Register the GitLab Runner:&lt;/p&gt;

&lt;p&gt;Replace the token placeholder after &lt;code&gt;registration-token&lt;/code&gt; with the one from our notepad, and then run the following command on your RPI-4 terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it -v gitlab-runner-volume:/etc/gitlab-runner gitlab/gitlab-runner:alpine register -n \
  --url https://gitlab.com/ \
  --registration-token GR1348941EDhyNWqfxPttukrGVKJd \
  --executor docker \
  --description "My Docker Runner" \
  --docker-image "docker:20.10.12-dind-alpine3.15" \
  --docker-privileged \
  --docker-volumes "/certs/client"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything went smoothly, you should see no errors, as shown below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdksylckynj26jllsn2fc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdksylckynj26jllsn2fc.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To confirm that the GitLab Runner container is running, run the below Docker command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo46q7swmo2690jns1clf.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo46q7swmo2690jns1clf.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alright, this is a great indication that we have successfully configured GitLab Runner on the RPI-4. Now, we are ready to see some action 💥&lt;/p&gt;




&lt;h2&gt;
  
  
  Build and Push Docker images to Docker Hub using GitLab Runner on GitLab
&lt;/h2&gt;

&lt;p&gt;1- First of all, we need to get a token from our &lt;code&gt;Docker hub&lt;/code&gt; account to avoid using the account password. The token allows GitLab Runner to authenticate and push Docker images to our Docker hub repository. If you don't know what &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker hub&lt;/a&gt; is, it is the world's largest library for container images. &lt;/p&gt;

&lt;p&gt;In your Docker hub account settings, click on &lt;code&gt;Security&lt;/code&gt;, then &lt;code&gt;New Access Token&lt;/code&gt;, and generate a Read, Write, Delete token. A free Docker hub account allows for one active token. Capture the token and save it in a notepad. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zhn5y2vjsuhf6oh2lr5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zhn5y2vjsuhf6oh2lr5.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- From the GitLab repo, which we created previously: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on &lt;code&gt;Clone&lt;/code&gt; and copy the &lt;code&gt;Clone with HTTPS&lt;/code&gt; link.&lt;/li&gt;
&lt;li&gt;On your RPI-4 terminal and in a folder of your choosing, run the below command. This is where the repo will be saved locally:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone `your-Clone-with-HTTPS-link`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; the git clone command will prompt you to enter your GitLab credentials for authentication. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg610wz7anqvmnezldlri.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg610wz7anqvmnezldlri.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- &lt;strong&gt;cd&lt;/strong&gt; into the repo and either use a text editor or the terminal to edit the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. The file should only contain the following code. Be very careful with the indentation. Lastly, save the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: docker:19.03.12

variables:
    IMAGE_NAME: "test:1.0.0"
    DOCKER_TLS_CERTDIR: "/certs"

services:
    - docker:19.03.12-dind
before_script:
  - docker info
build image:
    stage: build 
    script:    
        - docker build -t $REGISTRY_USER/$IMAGE_NAME .
        - docker login -u $REGISTRY_USER -p $DOCKER_HUB_TOKEN
        - docker push $REGISTRY_USER/$IMAGE_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbkxk1ak33xi57eo68x6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbkxk1ak33xi57eo68x6.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- We will create a simple &lt;code&gt;Dockerfile&lt;/code&gt;. The runner will use this Dockerfile to build a Docker image and push it to Docker hub. For now, we will keep the Dockerfile VERY simple. When the image is built from the Dockerfile, it will echo &lt;code&gt;Hello World&lt;/code&gt;. &lt;br&gt;
With your preferred text editor or terminal, create a file named Dockerfile, without any extension. Then save it in the same local repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine
RUN echo "Hello World"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In the local repo, we should have two files, &lt;code&gt;Dockerfile&lt;/code&gt; and &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;. A README file might be there as well. &lt;/p&gt;
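&lt;p&gt;Before handing the build to the runner, you can optionally sanity-check the Dockerfile locally on the RPI-4. The tag below is illustrative (the pipeline itself tags the image from the &lt;code&gt;IMAGE_NAME&lt;/code&gt; variable):&lt;/p&gt;

```shell
# Build the image locally from the repo folder; "Hello World" should
# appear in the build output when the RUN instruction executes.
docker build -t test:1.0.0 .
```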

&lt;p&gt;5- We will need to head back to GitLab repo to create two variables. On Settings, click on CI/CD, then expand &lt;code&gt;Variables&lt;/code&gt;. We will click on &lt;code&gt;Add variable&lt;/code&gt; to create two variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Key: REGISTRY_USER&lt;br&gt;
Value: &lt;code&gt;your Docker hub username NOT the email&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Key: DOCKER_HUB_TOKEN&lt;br&gt;
Value: &lt;code&gt;the token which we generated from Docker hub&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; for simplicity, leave the flags unchecked for both variables. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvrhm2dsb9cznqcdzsbl.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvrhm2dsb9cznqcdzsbl.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5aovq0iin1oooag5iusv.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5aovq0iin1oooag5iusv.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4no38k4rasgp4643hl5r.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4no38k4rasgp4643hl5r.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6- Now, we push the local repo to GitLab repo (remote repo) to start the GitLab Runner pipeline process. Yes, once the remote repo is updated, the runner will be triggered. Let's get back to our terminal on the RPI-4 and from within the local repo, run the following &lt;code&gt;git&lt;/code&gt; commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add -A
git commit -am "first test"
git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you head back to our GitLab repo and click on Pipelines under CI/CD, you will notice that the pipeline is &lt;code&gt;running&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0q2qxzurzz9365fay04w.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0q2qxzurzz9365fay04w.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Moreover, if you click on the running status, you will be directed to the current stage, which is &lt;code&gt;Build&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gwqvnoigx72i2cd42li.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gwqvnoigx72i2cd42li.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, click on &lt;code&gt;build image&lt;/code&gt; and you will see more details about the current build status. The goal is to see &lt;code&gt;Job succeeded&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fi07dc9m2rwae4bo2fx.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fi07dc9m2rwae4bo2fx.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you see &lt;strong&gt;Job succeeded&lt;/strong&gt; and &lt;strong&gt;passed&lt;/strong&gt; status in green, you can start doing your victory dance 👯 👯&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cnmz2xaud4wo1d1ykm0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cnmz2xaud4wo1d1ykm0.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we head back to Docker hub account, we should see that our Docker image &lt;code&gt;test:1.0.0&lt;/code&gt; has been successfully pushed to the repo. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuif7ojfrktink9py5wq0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuif7ojfrktink9py5wq0.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;By the end of this tutorial, we have successfully configured a GitLab Runner on an RPI-4, created a GitLab repo and registered the GitLab Runner to it. Finally, we created a Hello World Docker image from a Dockerfile and had the runner build the Docker image and push it to our Docker hub account. &lt;br&gt;
Now, off you go; the sky is the limit.   &lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>gitlab</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>A Free Cloudflare Tunnel running on a Raspberry Pi</title>
      <dc:creator>Omar Omar</dc:creator>
      <pubDate>Tue, 11 May 2021 04:29:34 +0000</pubDate>
      <link>https://dev.to/omarcloud20/a-free-cloudflare-tunnel-running-on-a-raspberry-pi-1jid</link>
      <guid>https://dev.to/omarcloud20/a-free-cloudflare-tunnel-running-on-a-raspberry-pi-1jid</guid>
      <description>&lt;p&gt;Cloudflare is a global network designed to make everything you connect to the Internet secure, private, fast, and reliable. Cloudflare offers a suite of services and &lt;strong&gt;Zero Trust Services&lt;/strong&gt; are the services we will utilize in the following tutorials. Zero Trust Services consist of Teams, Access, Gateway and Browser Isolation. &lt;/p&gt;

&lt;p&gt;Our main goal is to obtain a free domain from &lt;strong&gt;Freenom&lt;/strong&gt; and connect our hosted applications on a Ubuntu 20.04 LTS Raspberry Pi 4 within our local home network via a &lt;strong&gt;Cloudflare Tunnel&lt;/strong&gt; to the world wide web securely without any port-forwarding complications or altering firewall. &lt;/p&gt;

&lt;h3&gt;
  
  
  Tutorial Scenario:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Sign up for a free Cloudflare for Teams.&lt;/li&gt;
&lt;li&gt; Install and authenticate cloudflared on a Raspberry Pi 4. &lt;/li&gt;
&lt;li&gt; Create a Cloudflare Tunnel. &lt;/li&gt;
&lt;li&gt; Configure the Tunnel details.&lt;/li&gt;
&lt;li&gt; Create DNS records to route traffic to the Tunnel.&lt;/li&gt;
&lt;li&gt; Run and manage the Tunnel.&lt;/li&gt;
&lt;li&gt; Add a Zero Trust policy.&lt;/li&gt;
&lt;li&gt; Run Tunnel as a service.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Sign up for a free Cloudflare for Teams:
&lt;/h2&gt;

&lt;p&gt;Navigate to this &lt;a href="https://dash.cloudflare.com/sign-up" rel="noopener noreferrer"&gt;link&lt;/a&gt; and sign up for a free account. Cloudflare has a well documented &lt;a href="https://developers.cloudflare.com/cloudflare-one/setup" rel="noopener noreferrer"&gt;Get started&lt;/a&gt; site to walk you through the setup process. For this step, you don't need to go beyond signing up. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Install and authenticate Cloudflared on a Raspberry Pi 4:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; First of all, if you’d like to check your device’s architecture, run the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uname -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
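&lt;p&gt;&lt;code&gt;uname -a&lt;/code&gt; prints the full kernel string; if you only want the field that decides which package to download, the machine name alone is enough:&lt;/p&gt;

```shell
# Print just the machine hardware name:
#   aarch64 -> arm64 package, armv7l -> armhf package, x86_64 -> amd64 package
uname -m
```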



&lt;ol&gt;
&lt;li&gt; Navigate to &lt;a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/installation" rel="noopener noreferrer"&gt;link&lt;/a&gt; site to download the proper package for your architecture. In my case, I will install the Cloudflared daemon on my RPI-4, which is an &lt;strong&gt;arm64&lt;/strong&gt; architecture.&lt;/li&gt;
&lt;/ol&gt;

&lt;h5&gt;
  
  
  arm64 architecture (64-bit Raspberry Pi 4):
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -O cloudflared https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64
sudo mv cloudflared /usr/local/bin
sudo chmod +x /usr/local/bin/cloudflared
cloudflared -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  AMD64 architecture (Debian/Ubuntu):
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
sudo apt-get install ./cloudflared-stable-linux-amd64.deb
cloudflared -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  armhf architecture (32-bit Raspberry Pi):
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz
tar -xvzf cloudflared-stable-linux-arm.tgz
sudo cp ./cloudflared /usr/local/bin
sudo chmod +x /usr/local/bin/cloudflared
cloudflared -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Once we have installed Cloudflared successfully, we will run the following command to authenticate the cloudflared daemon to our Cloudflare account.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cloudflared login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the above command will launch the default browser window and prompt you to log in to your Cloudflare account. Then, you will be prompted to select a hostname site, which we created previously in the preceding part: &lt;a href="https://omar2cloud.github.io/cloudflare/domain/" rel="noopener noreferrer"&gt;link&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;As soon as you have chosen your hostname, Cloudflare will download a certificate file to authenticate &lt;strong&gt;Cloudflared&lt;/strong&gt; with Cloudflare's network.&lt;/p&gt;

&lt;p&gt;Notice:&lt;br&gt;
The cert.pem file gives Cloudflared the capability to create Tunnels and modify DNS records in the account. Once you have created a named Tunnel, you no longer need the cert.pem file to run that Tunnel and connect it to Cloudflare’s network. However, the cert.pem file is still required to create additional Tunnels, list existing Tunnels, manage DNS records, or delete Tunnels.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r7gjapm5ulrlek9fctd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r7gjapm5ulrlek9fctd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqphn6fga3fzrz2peo8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqphn6fga3fzrz2peo8b.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13zj39r5fnsoh2lf7tcp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13zj39r5fnsoh2lf7tcp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1irpxfo2nwfigox9xbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1irpxfo2nwfigox9xbq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Once authorization is completed successfully, your cert.pem will be downloaded to the default directory as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6b4tkigglqghfwb0w46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6b4tkigglqghfwb0w46.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're running a headless server (no monitor or keyboard), you can copy the authentication URL and paste it into a browser manually.&lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
The credentials file contains a secret scoped to the specific Tunnel UUID which establishes a connection from cloudflared to Cloudflare’s network. cloudflared operates like a client and establishes a TLS connection from your infrastructure to Cloudflare’s edge.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Create a Cloudflare Tunnel:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Now, we are ready to create a &lt;strong&gt;Cloudflare Tunnel&lt;/strong&gt; that will connect &lt;strong&gt;Cloudflared&lt;/strong&gt; to Cloudflare's edge. The following command creates a Tunnel with the name you provide and generates a credentials file for it. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Prior to creating the Tunnel, you may need to exit the command line session. Next, let's create the Tunnel.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: replace &amp;lt;NAME&amp;gt; with any name of your choosing for the Tunnel.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cloudflared tunnel create &amp;lt;NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the Tunnel is created, a credentials file is generated: a JSON file containing the Universally Unique Identifier (UUID) assigned to the Tunnel.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: although the Tunnel is created, the connection is not established yet.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhcuw8ya0tibu3qbvy5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhcuw8ya0tibu3qbvy5z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Configure the Tunnel details:
&lt;/h2&gt;

&lt;p&gt;Although we can run the Tunnel in an ad hoc mode, we will go over configuring the Tunnel so that it can run automatically as a service. &lt;/p&gt;

&lt;p&gt;Cloudflare utilizes a &lt;strong&gt;configuration file&lt;/strong&gt; to determine how to route traffic. The configuration file contains keys and values written in &lt;strong&gt;YAML&lt;/strong&gt; syntax. You may need to modify the following keys and values to meet your configuration file requirements:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Keys&lt;/th&gt;
&lt;th&gt;Values&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;tunnel&lt;/td&gt;
&lt;td&gt;Tunnel name or Tunnel UUID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;credentials-file&lt;/td&gt;
&lt;td&gt;location of the credentials file (JSON)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hostname&lt;/td&gt;
&lt;td&gt;subdomain.hostname.xxx&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;service&lt;/td&gt;
&lt;td&gt;url - &lt;a href="http://localhost:8000" rel="noopener noreferrer"&gt;http://localhost:8000&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;service&lt;/td&gt;
&lt;td&gt;http_status:404&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;port of your app&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;By default, on Linux systems, Tunnel expects to find the configuration file in &lt;em&gt;~/.cloudflared&lt;/em&gt;, &lt;em&gt;/etc/cloudflared&lt;/em&gt;, and &lt;em&gt;/usr/local/etc/cloudflared&lt;/em&gt;, in that order. &lt;br&gt;
Let's create our config file and save it in the default expected directory for this tutorial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano ~/.cloudflared/config.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano home/&amp;lt;username&amp;gt;/.cloudflared/config.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3iwznf1xgq9rtt13hbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3iwznf1xgq9rtt13hbp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, we will paste our keys and values as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tunnel: 1082b601-bce9-45e4-b6ae-f19020e7d071
credentials-file: /root/.cloudflared/1082b601-bce9-45e4-b6ae-f19020e7d071.json

ingress:
  - hostname: test.mytunnel.ml
    service: http://localhost:80
  - service: http_status:404
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
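&lt;p&gt;One detail worth noting: cloudflared requires the last ingress rule to be a catch-all with no hostname (here, the http_status:404 rule). A minimal sanity-check sketch, writing a copy of the config above to a scratch path (/tmp/config.yml is just for illustration):&lt;/p&gt;

```shell
# Write the sample config to a temporary file for inspection
# (/tmp/config.yml is an illustrative scratch path, not the real config location).
printf '%s\n' \
  'tunnel: 1082b601-bce9-45e4-b6ae-f19020e7d071' \
  'credentials-file: /root/.cloudflared/1082b601-bce9-45e4-b6ae-f19020e7d071.json' \
  '' \
  'ingress:' \
  '  - hostname: test.mytunnel.ml' \
  '    service: http://localhost:80' \
  '  - service: http_status:404' > /tmp/config.yml

# The final rule must omit hostname so it catches all unmatched traffic.
tail -n 1 /tmp/config.yml | grep -q 'http_status:404' && echo "catch-all present"
```

&lt;p&gt;cloudflared performs the same validation itself and will refuse to start if the last rule is not a catch-all.&lt;/p&gt;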



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhchj3ydirj3ulji0fwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhchj3ydirj3ulji0fwq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
If you don't have an application ready for testing the Tunnel, I'd suggest installing the NGINX web server and mapping it to port 80 as I've done in the configuration file.&lt;/p&gt;

&lt;p&gt;How to install NGINX web server on RPI-4:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install nginx 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrgkfzms4ivog5pb8n21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrgkfzms4ivog5pb8n21.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the installation is completed, open a browser and type in: &lt;em&gt;localhost:80&lt;/em&gt;. If the NGINX web server is installed properly, you should see it running with its default index.html as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjavdab75onvff1p3t64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjavdab75onvff1p3t64.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's make sure that we have all files in this directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls -al
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we have configured all required files to run the Tunnel in the default directory. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdquek4r9bbr8fniy7nq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdquek4r9bbr8fniy7nq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: if you'd like to save the config.yml file in a different location (we will refrain from using this method for this tutorial), you will have to point to that directory in the &lt;strong&gt;run&lt;/strong&gt; command as follows:&lt;br&gt;
&lt;em&gt;cloudflared tunnel --config path/config.yml run UUID or Tunnel Name&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It's very important to specify &lt;em&gt;--config&lt;/em&gt; to change the default directory for the config file. For more information, see the configuration file &lt;a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/config" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 5: Create DNS records to route traffic to the Tunnel:
&lt;/h2&gt;

&lt;p&gt;Cloudflare can route traffic to our Tunnel connection using a DNS record or a load balancer. We will configure a DNS CNAME record to point to our Tunnel subdomain. There are two ways to achieve this:&lt;/p&gt;

&lt;p&gt;A.  &lt;strong&gt;Manually:&lt;/strong&gt; navigate to the DNS tab on Cloudflare Dashboard, create a new CNAME record and add your subdomain of your Tunnel as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Type: CNAME&lt;/li&gt;
&lt;li&gt;Name: any subdomain name of your choosing.&lt;/li&gt;
&lt;li&gt;Target: the Tunnel &lt;em&gt;UUID&lt;/em&gt; followed by &lt;em&gt;cfargotunnel.com&lt;/em&gt;, such as &lt;strong&gt;&amp;lt;&lt;em&gt;UUID&lt;/em&gt;&amp;gt;.&lt;em&gt;cfargotunnel.com&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;B. &lt;strong&gt;Programmatically:&lt;/strong&gt; run the following command from the command line. This command will generate a CNAME record that points to the subdomain of a specific Tunnel. The result is the same as creating a CNAME record from the dashboard as shown in step A.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cloudflared tunnel route dns &amp;lt;UUID or NAME&amp;gt; test.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn77fqrqk8rx2mz5z0g1i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn77fqrqk8rx2mz5z0g1i.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: unlike the previous Argo Tunnel architecture, this DNS record will not be deleted if the Tunnel disconnects.&lt;/em&gt;&lt;/p&gt;
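&lt;p&gt;Conceptually, the record created by either method is an ordinary proxied CNAME (the subdomain and UUID below are placeholders):&lt;/p&gt;

```text
test.example.com  CNAME  1082b601-bce9-45e4-b6ae-f19020e7d071.cfargotunnel.com  (proxied)
```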

&lt;h2&gt;
  
  
  Step 6: Run and manage the Tunnel:
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;run&lt;/strong&gt; command will connect cloudflared to Cloudflare's edge network using the configuration created in step 4. We will not specify a configuration file location so Cloudflared retrieves it from the default location, which is &lt;em&gt;~/.cloudflared/config.yml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cloudflared tunnel run &amp;lt;UUID&amp;gt; or &amp;lt;Tunnel Name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszi9axu0f2zk06h227hk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszi9axu0f2zk06h227hk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the config.yml file is not placed in the default directory, we need to point to its location to run the Tunnel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cloudflared tunnel --config path/config.yml run &amp;lt;NAME&amp;gt; or &amp;lt;UUID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can review the list of Tunnels we have created, and manage them, with the commands below:&lt;/p&gt;

&lt;h4&gt;
  
  
  Cloudflared Commands:
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Functions&lt;/th&gt;
&lt;th&gt;Commands&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Create a Tunnel&lt;/td&gt;
&lt;td&gt;cloudflared tunnel create &amp;lt;&lt;em&gt;NAME&lt;/em&gt;&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;List Tunnels&lt;/td&gt;
&lt;td&gt;cloudflared tunnel list&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stop Tunnel&lt;/td&gt;
&lt;td&gt;cloudflared tunnel stop &amp;lt;&lt;em&gt;NAME&lt;/em&gt;&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Restart Tunnel&lt;/td&gt;
&lt;td&gt;cloudflared tunnel restart &amp;lt;&lt;em&gt;NAME&lt;/em&gt;&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delete Tunnel&lt;/td&gt;
&lt;td&gt;cloudflared tunnel delete &amp;lt;&lt;em&gt;NAME&lt;/em&gt;&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Force Delete Tunnel&lt;/td&gt;
&lt;td&gt;cloudflared tunnel delete -f &amp;lt;&lt;em&gt;NAME&lt;/em&gt;&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Show Tunnel info&lt;/td&gt;
&lt;td&gt;cloudflared tunnel info &amp;lt;&lt;em&gt;NAME&lt;/em&gt;&amp;gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: stopping Cloudflared will not delete the Tunnel or the DNS record created. Although Tunnel deletes DNS records after 24-48 hours of a Tunnel being unregistered, it does not delete TLS certificates on your behalf once the Tunnel is shut down. If you want to clean up a Tunnel you’ve shut down, you can delete DNS records in the DNS editor and revoke TLS certificates in the Origin Certificates section of the SSL/TLS tab of the Cloudflare dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To update Cloudflared:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cloudflared update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To uninstall Cloudflared&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cloudflared service uninstall 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Add a Zero Trust policy:
&lt;/h2&gt;

&lt;p&gt;Now, we are ready to head back to the Teams dashboard to configure our application and create a Zero Trust policy.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; On Teams dashboard, navigate to the Application tab and click on Add an application.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4mrzej8ntj3vw1i9rmu.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4mrzej8ntj3vw1i9rmu.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Select Self-hosted.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6wakxwqgv7qmh9zb7pf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6wakxwqgv7qmh9zb7pf.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Choose an application name, Session Duration, subdomain and Application domain. Then, click on Next. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Notice that the Session Duration ranges from 15 minutes to 1 month.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujgx8d047ovxfol8gln0.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujgx8d047ovxfol8gln0.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Add a name to the rule and select &lt;strong&gt;Bypass&lt;/strong&gt; as a Rule action. On Configure a rule, include Everyone. This rule allows everyone to view our NGINX site at &lt;strong&gt;test.mytunnel.ml&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j52bvn1pneavosa6co8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j52bvn1pneavosa6co8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; In the Advanced settings, enable automatic cloudflared authentication and browser rendering.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhh2kz15vgpglnxp2k1ia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhh2kz15vgpglnxp2k1ia.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqjko80vpbl62fqv3s94.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqjko80vpbl62fqv3s94.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, our application is now available in Cloudflare Access and is part of our Application list. We can navigate to a browser and type in our url &lt;strong&gt;test.MyTunnel.ml&lt;/strong&gt; and if our Tunnel is established correctly, we shall see our NGINX web server running as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffia71frydgoxjh2axzt3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffia71frydgoxjh2axzt3.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8: Run Tunnel as a service:
&lt;/h2&gt;

&lt;p&gt;By running the following command, the Tunnel can be installed as a system service, which allows it to run automatically at boot as a daemon. By default, the Tunnel expects to find the configuration file in &lt;em&gt;~/.cloudflared/config.yml&lt;/em&gt;, but to run the Tunnel as a service, we need to move the config.yml file to &lt;em&gt;/etc/cloudflared/&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We can employ the move (&lt;strong&gt;mv&lt;/strong&gt;) command to do the job: &lt;em&gt;mv path/config.yml /etc/cloudflared/&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The command below shows how I moved the config file to /etc/cloudflared/ on my RPI-4:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv /home/p2/.cloudflared/config.yml /etc/cloudflared/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we are ready to run Tunnel as a service utilizing the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cloudflared service install 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
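&lt;p&gt;On systemd-based Linux distributions such as Raspberry Pi OS, the service install command registers a systemd unit named cloudflared, so we can verify the daemon with the usual systemd tooling:&lt;/p&gt;

```shell
# Confirm the service is running and enabled at boot.
sudo systemctl status cloudflared
sudo systemctl enable cloudflared

# Follow the Tunnel logs if anything looks off.
sudo journalctl -u cloudflared -f
```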



&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;We have successfully established a secure Cloudflare Tunnel that links our locally hosted NGINX web server to Cloudflare's network without requiring a public IP address, port forwarding, or punching a hole through the firewall. We have also configured the Tunnel as a service that starts at boot, and our NGINX web server is now accessible via our domain name, test.MyTunnel.ml.&lt;/p&gt;

&lt;p&gt;Best of luck with your future projects. Cheers!&lt;/p&gt;

</description>
      <category>cloudflare</category>
      <category>raspberrypi</category>
      <category>ubuntu</category>
      <category>tunnel</category>
    </item>
  </channel>
</rss>
