<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mariano Ramborger</title>
    <description>The latest articles on DEV Community by Mariano Ramborger (@marianoramborger).</description>
    <link>https://dev.to/marianoramborger</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F452562%2F3d73d8a5-3775-4d5d-bd81-f8b40f6b026d.jpg</url>
      <title>DEV Community: Mariano Ramborger</title>
      <link>https://dev.to/marianoramborger</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/marianoramborger"/>
    <language>en</language>
    <item>
      <title>Serverless (AWS) - How to trigger a Lambda function from a S3 bucket, and use it to update a DDB.</title>
      <dc:creator>Mariano Ramborger</dc:creator>
      <pubDate>Mon, 14 Jun 2021 20:32:50 +0000</pubDate>
      <link>https://dev.to/marianoramborger/serverless-aws-how-to-trigger-a-lambda-function-from-a-s3-bucket-and-use-it-to-update-a-ddb-2m0a</link>
      <guid>https://dev.to/marianoramborger/serverless-aws-how-to-trigger-a-lambda-function-from-a-s3-bucket-and-use-it-to-update-a-ddb-2m0a</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is a Lambda on AWS?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are a developer or have some programming background, you may be familiar with Lambda functions. &lt;br&gt;
They are a useful feature in many of the most popular modern languages, and they basically are functions that lack an identifier, which is why they are also called anonymous functions. This last name may not be appropriate for certain languages, where not all anonymous functions are necessary Lambda functions, but that’s a discussion for another day.&lt;br&gt;
Aws Lambda takes inspiration from this concept, but it’s fundamentally different. For starters, AWS Lambda is a service that lets you run code without having to provision servers, or even EC2 instances. As such, it is one of the cornerstones of Amazon’s serverless services, alongside Api-Gateway, DynamoDB and S3, to name a few.&lt;br&gt;
Aws Lambda functions are event-driven architecture, and as such they can be triggered and executed by a wide variety of events. On this article, we will create a Lambda function and configure it to trigger based whenever an object is put inside of an S3 bucket. Then we’ll use it to update a DynamoDB table.&lt;/p&gt;
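&lt;p&gt;To make the naming distinction concrete, here is a plain JavaScript anonymous (lambda) function, which shares nothing with AWS Lambda apart from the name:&lt;/p&gt;

```javascript
// A named function has an identifier...
function double(x) {
    return x * 2;
}

// ...while an anonymous (lambda) function does not.
// Here we pass one directly to Array.prototype.map.
const doubled = [1, 2, 3].map((x) => x * 2);

console.log(double(2));
console.log(doubled);
```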

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kmnqetk6901sepx3no3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kmnqetk6901sepx3no3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’ll start on the AWS console. For this little project, we’ll need an S3 bucket, so let’s go ahead and create it. &lt;br&gt;
In case you haven’t heard of it, S3 stands for Simple Storage Service, and allows us to store data as objects inside of structures named “Buckets”. &lt;br&gt;
So we’ll search for S3 on the AWS console and create a bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ns7vbpuw45aq5u39yr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ns7vbpuw45aq5u39yr1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today we don’t need anything fancy, so let’s just give it a unique name.&lt;br&gt;
We will also need a DynamoDB table. DynamoDB is Amazon’s serverless NoSQL database solution, and we will use it to keep track of the files uploaded to our bucket.&lt;br&gt;
We’ll go into “Create Table”, then give it a name and a partition key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4ep5diqf7vyt14if3y6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4ep5diqf7vyt14if3y6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We could also give it a sort key, such as the bucket where the file is stored. That way, a query would return its results ordered by bucket. However, that is not our focus today, and since we are using just the one bucket, we’ll leave it unchecked.&lt;/p&gt;
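&lt;p&gt;For reference, if we did add a sort key, a DocumentClient query for one file’s entries could be sketched like this (the table and attribute names are just this tutorial’s assumptions):&lt;/p&gt;

```javascript
// Sketch: query params for a table with partition key "file_name" and a
// hypothetical sort key "bucket_name". Results within the partition come
// back sorted by the sort key (ascending by default).
function buildQueryParams(fileName) {
    return {
        TableName: "files",
        KeyConditionExpression: "file_name = :f",
        ExpressionAttributeValues: { ":f": fileName },
        ScanIndexForward: true // ascending order on the sort key
    };
}

// These params would be passed to a DocumentClient query(...) call.
const queryParams = buildQueryParams("report.csv");
console.log(queryParams.KeyConditionExpression);
```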

&lt;p&gt;Next, we’ll search for Lambda on the console. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6r47ma9y9b177paqr2lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6r47ma9y9b177paqr2lh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can choose to build a Lambda from scratch, or from one of the many great templates provided by Amazon. We can also source it from a container image or a repository. This time we’ll start from scratch, using the latest version of Node.js.&lt;br&gt;
We will be greeted by a function similar to this one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z3lgb0p02zjzm8tn1jk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z3lgb0p02zjzm8tn1jk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s just a simple function that returns an “OK” whenever it’s triggered. Those of you familiar with Node may be wondering about that “handler” on the first line.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anatomy of a Lambda function.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The handler is the entry point for your Lambda. When the function is invoked, the handler method is executed. The handler receives two parameters: event and context.&lt;br&gt;
Event is a JSON object which provides event-related information, which your Lambda can use to perform tasks. Context, on the other hand, contains methods and properties that give you more information about your function and its invocation.&lt;br&gt;
If at any point you want to test what your Lambda function is doing, you can press the “Test” button and try its functionality with parameters of your choosing.&lt;br&gt;
We know we want our Lambda to be triggered each time somebody uploads an object to a specific S3 bucket. We could configure the bucket, then upload a file to see what kind of data our function receives. Fortunately, AWS already provides a test event that mimics an S3 PUT trigger.&lt;br&gt;
We just need to open the dropdown on the “Test” button and select the “Amazon S3 Put” template.&lt;br&gt;
&lt;/p&gt;
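&lt;p&gt;The relevant part of that test event looks roughly like this (trimmed down to the fields our function will actually navigate; names like the bucket are sample values):&lt;/p&gt;

```javascript
// A trimmed sketch of the "Amazon S3 Put" test event shape.
const sampleEvent = {
    Records: [
        {
            eventName: "ObjectCreated:Put",
            s3: {
                bucket: {
                    name: "example-bucket",
                    arn: "arn:aws:s3:::example-bucket"
                },
                object: {
                    key: "test%2Fkey", // keys arrive URL-encoded
                    size: 1024
                }
            }
        }
    ]
};

// Pulling out the fields our handler will need:
const record = sampleEvent.Records[0].s3;
const key = decodeURIComponent(record.object.key.replace(/\+/g, " "));
console.log(key);                // "test/key"
console.log(record.bucket.name); // "example-bucket"
```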

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var AWS = require("aws-sdk");

exports.handler = async (event) =&amp;gt; {

        //Get event info.
        let bucket = event.Records[0].s3
        let obj = bucket.object
        let params = {
            TableName: "files",
            Item : {
                    file_name : obj.key,
                    bucket_name: bucket.bucket.name,
                    bucket_arn: bucket.bucket.arn
            }
        }
        //Define our target DB.
          let newDoc = new AWS.DynamoDB.DocumentClient(
            {
                region: "us-east-1"});
        //Send the request and return the response.
        try {
             let result = await newDoc.put(params).promise()

             let res = {
                 statusCode : result.statusCode,
                 body : JSON.stringify(result)
             }
             return res
        }
        catch (error) {

             let res = {
                 statusCode : error.statusCode,
                 body : JSON.stringify(error)
             }
             return res
        }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple code will capture the event info and write it into our DynamoDB table.&lt;br&gt;
However, if you go ahead and execute it, you will probably receive an error stating that our Lambda function doesn’t have the permissions required for this operation.&lt;br&gt;
So we’ll just head to IAM. Hopefully we’ll be security-conscious and create a role with the minimum permissions our Lambda needs.&lt;/p&gt;

&lt;p&gt;Just kidding, for this tutorial full DynamoDB access will do!&lt;/p&gt;
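&lt;p&gt;(For the record, a least-privilege policy for this function would only allow the single put our code performs. A sketch, where the table name, region and account id are placeholders:)&lt;/p&gt;

```javascript
// Sketch of a least-privilege policy for our Lambda's role.
// The table ARN uses placeholder region/account values.
const leastPrivilegePolicy = {
    Version: "2012-10-17",
    Statement: [
        {
            Effect: "Allow",
            Action: ["dynamodb:PutItem"],
            Resource: ["arn:aws:dynamodb:us-east-1:123456789012:table/files"]
        }
    ]
};

console.log(JSON.stringify(leastPrivilegePolicy, null, 2));
```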

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdgpw2s7ak2fojbyp675.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdgpw2s7ak2fojbyp675.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now everything should work. Let’s publish our Lambda and upload a file to our S3 bucket and test it out!&lt;/p&gt;
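&lt;p&gt;(As an aside, the S3 trigger we set up through the console can also be expressed as a bucket notification configuration. A sketch of that object, with a placeholder function ARN, since the console normally wires this up for us:)&lt;/p&gt;

```javascript
// Sketch: the notification configuration that connects our bucket to the
// Lambda, as it could be passed to the S3 API. The ARN is a placeholder.
function buildNotificationConfig(bucketName, lambdaArn) {
    return {
        Bucket: bucketName,
        NotificationConfiguration: {
            LambdaFunctionConfigurations: [
                {
                    Events: ["s3:ObjectCreated:Put"],
                    LambdaFunctionArn: lambdaArn
                }
            ]
        }
    };
}

const notifConfig = buildNotificationConfig(
    "example-bucket",
    "arn:aws:lambda:us-east-1:123456789012:function:s3-to-ddb"
);
console.log(notifConfig.NotificationConfiguration.LambdaFunctionConfigurations[0].Events[0]);
```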

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wpi4220kjz0bhbxjcek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wpi4220kjz0bhbxjcek.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything is working as it should!&lt;br&gt;
Now, what we did was pretty basic, but this workflow (perhaps fronted by API Gateway or orchestrated with Step Functions) can form the basis of a fairly complex serverless app.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>AWS - Load Balancing</title>
      <dc:creator>Mariano Ramborger</dc:creator>
      <pubDate>Mon, 03 May 2021 16:32:13 +0000</pubDate>
      <link>https://dev.to/marianoramborger/aws-load-balancing-2e2c</link>
      <guid>https://dev.to/marianoramborger/aws-load-balancing-2e2c</guid>
      <description>&lt;p&gt;One of the key features of cloud computing is scalability – the capability to scale a system according to demand.&lt;/p&gt;

&lt;p&gt;The ability to grow (or shrink) the &lt;a href="https://dev.to/marianoramborger/aws-dev-associate-ec2-part-1-5flg"&gt;EC2 instances&lt;/a&gt; that contain and serve our applications is a powerful tool, but it can’t reach its full potential unless there is a way to distribute the incoming requests among them. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hbdONWm8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mgm5zyize7qp7p5x1lxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hbdONWm8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mgm5zyize7qp7p5x1lxg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No sense in having 10 web servers if all the traffic ends up in server #1. Introducing a “Load Balancer” will help us avoid this scenario.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elastic Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Load balancing is, simply put, the process of distributing the workload among your resources, to avoid overworking your infrastructure, and to maximize its efficiency.&lt;br&gt;
AWS offers us Elastic Load Balancer (ELB), a managed service that can automatically distribute traffic among our resources, even in different availability zones. By positioning ELB as a single point of contact for clients, it can sort their requests and route them to the appropriate targets. ELB can even monitor the health of our resources, and avoid routing to unhealthy ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bHKR-nT_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1kognrmkaxyjvj98yun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bHKR-nT_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1kognrmkaxyjvj98yun.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS gives us, however, a few options when it comes to setting up our Load Balancer. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This Load Balancer is used to manage HTTP &amp;amp; HTTPS traffic. It works at layer 7 of the OSI model, which means it’s application-aware. Thanks to this, it supports advanced request routing, letting us target web servers based on attributes of each request, such as its path or headers.&lt;br&gt;
But how does it work?&lt;br&gt;
An Application Load Balancer is composed of “Listeners” and “Target Groups”. &lt;br&gt;
Listeners, well… listen for requests from clients on certain ports, which you can define. You can configure a series of &lt;em&gt;rules&lt;/em&gt; for the listeners, which define how they route those requests. Each rule is composed of a priority, conditions, and actions to be performed when those conditions are met.&lt;br&gt;
Target groups are collections of registered possible destinations (EC2 instances, for example) to which requests can be routed. Targets can be dynamically added or removed without interrupting the workflow. Health checks can be configured at the group level.&lt;/p&gt;
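&lt;p&gt;The priority/conditions/actions structure of a listener rule can be sketched as the parameter object you would hand to the ELBv2 API (the listener and target group ARNs below are placeholders):&lt;/p&gt;

```javascript
// Sketch: params for an ALB listener rule, in the shape used by the
// ELBv2 createRule API. ARNs are placeholders for illustration.
function buildListenerRule(listenerArn, targetGroupArn) {
    return {
        ListenerArn: listenerArn,
        Priority: 10, // lower numbers are evaluated first
        Conditions: [
            { Field: "path-pattern", Values: ["/api/*"] }
        ],
        Actions: [
            { Type: "forward", TargetGroupArn: targetGroupArn }
        ]
    };
}

const rule = buildListenerRule(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo/abc/def",
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/123"
);
console.log(rule.Conditions[0].Field); // "path-pattern"
```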

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oeDZoEfR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fnchr4fq0ux7neglmwcg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oeDZoEfR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fnchr4fq0ux7neglmwcg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Load Balancer&lt;/strong&gt;&lt;br&gt;
Working at the 4th OSI layer (Transport), this balancer is a high-performance option for TCP and UDP connections.&lt;br&gt;
The general workflow is similar to the Application Load Balancer’s: listeners can be configured to check client requests and forward them to target groups, even across different availability zones. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8hHELnju--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yh9adqlag6pxbd907boj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8hHELnju--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yh9adqlag6pxbd907boj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Classic Load Balancer&lt;/strong&gt;&lt;br&gt;
Neither the fastest nor the fanciest choice. In truth, it’s a legacy option, and one AWS advises you to move away from if possible. It supports some layer-7-specific features, notably the X-Forwarded-For header (which identifies the IP of the original client, rather than the Load Balancer’s) and sticky sessions. It also supports layer-4 load balancing.&lt;/p&gt;

&lt;p&gt;In a future post, we’ll go over the steps required to implement an Application Load Balancer, and how to maximize the advantages it can bring us. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS IAM &amp; Policies</title>
      <dc:creator>Mariano Ramborger</dc:creator>
      <pubDate>Thu, 29 Apr 2021 12:35:06 +0000</pubDate>
      <link>https://dev.to/marianoramborger/aws-iam-policies-3ldj</link>
      <guid>https://dev.to/marianoramborger/aws-iam-policies-3ldj</guid>
      <description>&lt;p&gt;&lt;strong&gt;IAM&lt;/strong&gt; stands for Identity Access Management, and it’s a service which helps you to manage the access to AWS resources in the account.&lt;/p&gt;

&lt;p&gt;Upon creating your AWS account, you’ll sign in as a super-user (root), with no restrictions when it comes to creating or using AWS resources. However, using this super-user for everything is not considered good practice. Instead, it is recommended that you use it only to create your very first user with admin rights, and afterwards touch the root user only when it’s unavoidable.&lt;/p&gt;

&lt;p&gt;Fortunately, IAM has a robust set of features that lets us configure fine-grained access to AWS services for users, groups of users, and even for AWS services themselves.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw5tujlkyja1sgn7grwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw5tujlkyja1sgn7grwc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So how does it work?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Simply put, a &lt;strong&gt;Policy&lt;/strong&gt; is a document that defines a set of permissions. It can allow or deny access to specific AWS resources, and it can be attached to a user, group or role.&lt;br&gt;
Policy documents are in JSON format, and can be categorized as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed Policies – IAM policies created and managed by AWS. These policies cover some of the more frequent use-cases, like granting read-only access to a storage service. You can attach them to as many entities as you want. You cannot modify Managed Policies, but you can use them as a starting point to create new policies.&lt;/li&gt;
&lt;li&gt;Customer Policies – IAM policies created and managed by the user. You can assign them to as many entities as you want, but only within the account that created them.&lt;/li&gt;
&lt;li&gt;Inline Policies – IAM policies that are intrinsically embedded into a user. This type of policy can only be attached to the user, group, role or application in which it is defined. What’s more, should you delete its bearer, the policy will also be deleted. This type of policy is useful when you want to maintain a strict one-to-one relationship between a policy and the entity it applies to.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One important thing to note: an entity could be under the influence of two different policies that contradict each other regarding the use of a resource. &lt;/p&gt;

&lt;p&gt;Let’s say, for example, that Policy A allows an entity to read the contents of all S3 buckets. Policy B, on the other hand, explicitly denies access to one specific S3 bucket.&lt;br&gt;
If a user were under the effect of both policies (for example, getting Policy A by belonging to a user group, with Policy B directly attached to the user), then the user would be able to access all S3 buckets except the one explicitly denied in Policy B. &lt;/p&gt;
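&lt;p&gt;We can sketch that evaluation rule in a few lines of code. The simplified policies and the tiny evaluator below are illustrative only, not how IAM is actually implemented:&lt;/p&gt;

```javascript
// Illustrative only: two simplified policies and a toy evaluator that
// applies IAM's rule that an explicit Deny always beats an Allow.
const policyA = { Effect: "Allow", Action: "s3:GetObject", Resource: "*" };
const policyB = { Effect: "Deny", Action: "s3:GetObject", Resource: "arn:aws:s3:::secret-bucket/*" };

// Match a resource against a pattern ("*" or a trailing-wildcard prefix).
function matches(resourcePattern, resource) {
    if (resourcePattern === "*") return true;
    if (resourcePattern.endsWith("*")) {
        return resource.startsWith(resourcePattern.slice(0, -1));
    }
    return resourcePattern === resource;
}

function isAllowed(policies, resource) {
    let allowed = false;
    for (const p of policies) {
        if (matches(p.Resource, resource)) {
            if (p.Effect === "Deny") return false; // explicit deny wins
            if (p.Effect === "Allow") allowed = true;
        }
    }
    return allowed; // no matching Allow means implicit deny
}

console.log(isAllowed([policyA, policyB], "arn:aws:s3:::public-bucket/file.txt")); // true
console.log(isAllowed([policyA, policyB], "arn:aws:s3:::secret-bucket/file.txt")); // false
```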

&lt;p&gt;This is pretty much a rule on AWS’s IAM: &lt;strong&gt;when a user is both allowed and explicitly denied access to a resource by different policies, the explicit denial will always prevail&lt;/strong&gt;.&lt;br&gt;
So how do we create a Policy?&lt;br&gt;
We’ll start from the AWS console, search for “IAM” and then select Policies. After that, we’ll select “Create Policy”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq4ovf80s0c5k79bttsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq4ovf80s0c5k79bttsf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are two ways to create policies in the console. As we have already mentioned, policies are essentially JSON documents, so we can simply write one by hand.&lt;br&gt;
Alternatively, we can use the console’s excellent visual policy editor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhyygyo2x2ixb5eyk01d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhyygyo2x2ixb5eyk01d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can select:&lt;br&gt;
• Service: The services covered by the policy. You can specify multiple services.&lt;/p&gt;

&lt;p&gt;• Access Level: Here we can fine-tune which actions the policy will allow (or deny!) for the services in question. In this example, the policy will only grant the ability to list the contents of specific S3 buckets.   &lt;/p&gt;

&lt;p&gt;• Resources: Here you can select which specific resources (or all of them) of the selected services are covered by the policy. In this case, the policy will pertain to a specific S3 bucket which I’ve created beforehand.&lt;/p&gt;

&lt;p&gt;• Request conditions: Here you can add an extra security layer by requiring the entity that requests the resource to comply with additional conditions, such as having authenticated with MFA.&lt;/p&gt;

&lt;p&gt;If we move to the JSON tab, we can see how the editor translates our instructions to a JSON document.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsex2gq261h64iy6oa51t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsex2gq261h64iy6oa51t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
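
&lt;p&gt;The generated document will look roughly like this (the bucket name will be whatever you selected in the Resources section):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "VisualEditor0",
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::[your-bucket-name]"
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;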

&lt;p&gt;After this step, we can add tags to make it easier to find our policy, and give it a name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd7l4k6bsmbw2cathm2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd7l4k6bsmbw2cathm2k.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we have our policy. Now we just need to attach it to an entity. How do we do that?&lt;br&gt;
We just need to select a user, in this case a DummyUser I created, and choose “Add permissions”. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhfcyl0dq79e3elkaxyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhfcyl0dq79e3elkaxyv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that we just need to select “Attach existing policies directly” and continue. With that, the policy is attached to our user, and from now on, this identity will be able to list the contents of the bucket we’ve specified.&lt;/p&gt;
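
&lt;p&gt;As a side note, the same attachment can be done from the AWS CLI. A sketch (the policy ARN is hypothetical; use the one shown on your policy’s summary page):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam attach-user-policy \
    --user-name DummyUser \
    --policy-arn arn:aws:iam::123456789012:policy/[your-policy-name]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;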

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g58851ettnknp760wzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g58851ettnknp760wzu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, the use of policies is not limited to users. In future posts we'll delve deeper into IAM's potential to configure fine-grained access to our AWS resources, bolstered by the use of groups and roles.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS - Dev associate: EC2 (Part 2).</title>
      <dc:creator>Mariano Ramborger</dc:creator>
      <pubDate>Wed, 28 Apr 2021 00:19:20 +0000</pubDate>
      <link>https://dev.to/marianoramborger/aws-dev-associate-ec2-part-2-5d7j</link>
      <guid>https://dev.to/marianoramborger/aws-dev-associate-ec2-part-2-5d7j</guid>
      <description>&lt;p&gt;So in part 1 we got our EC2 instance up and running. Now it’s time to connect to it from our local machine.&lt;/p&gt;

&lt;p&gt;EC2 offers multiple ways to connect to an EC2 instance, but I think SSH provides the best experience. So let’s open up a console!&lt;/p&gt;

&lt;p&gt;Remember the key-pair we downloaded at the end of part one? &lt;br&gt;
I hope you still have it because we are going to use it.&lt;/p&gt;

&lt;p&gt;BUT! SSH will refuse to use the key if it’s publicly viewable. So we’ll run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;400 &lt;span class="o"&gt;[&lt;/span&gt;keypair.pem]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which will set the permissions for that file so that only its owner can read it. Following that we’ll grab the instance public ip from the AWS console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hAD8I7em--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iojx9l4fqredee29gv04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hAD8I7em--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iojx9l4fqredee29gv04.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’ll now SSH into the instance with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; “[keypair.pem]” &lt;span class="o"&gt;[&lt;/span&gt;our user]@[public IPv4 DNS] 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, since we have picked the default options, we’ll log in as “ec2-user”, the default user of the AMI we have chosen. &lt;br&gt;
You will probably receive a message stating that the authenticity of the host could not be established, along with a prompt asking whether you still want to connect. Answer yes. &lt;br&gt;
With that, we are in. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Msfeh-F7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24ygb98i4dr2vhuhasz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Msfeh-F7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24ygb98i4dr2vhuhasz4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, before doing anything else, we’ll run the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;su
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To escalate our privileges, so we don’t have to include sudo in the following commands. &lt;br&gt;
By now, the command line interface has probably prompted you to run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum update-y 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to update packages, so let's just do that.&lt;/p&gt;

&lt;p&gt;After it's done, let's install the httpd package, which will let us run an Apache web server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;httpd –y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it's finished, let's start it with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start httpd 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, our Apache server is up and running. But just for fun, let’s add some content to it before checking it out. So we'll enter&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /var/www/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once inside, we’ll create an index.html file. I don’t want anything fancy, so let’s just&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nano index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we'll write some very basic html into it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
 &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;
   Hello Cloud! 
  &lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
 &lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, back on the AWS console, we can grab our instance’s public IP and paste it into our browser.&lt;/p&gt;
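
&lt;p&gt;Alternatively, while still inside the SSH session, we can check that Apache is serving our page without leaving the terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Which should print back the html we just wrote.&lt;/p&gt;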

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bFNGmB16--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ptfux2uredyagcd9vkaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bFNGmB16--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ptfux2uredyagcd9vkaa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And with that, our web server is up and displaying our html code. While rather simple, this two-parter will be the basis of many future posts. If all of this seemed too easy, don’t worry: it’ll only become more complex from now on.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS -  Dev associate: EC2 (Part 1).</title>
      <dc:creator>Mariano Ramborger</dc:creator>
      <pubDate>Wed, 28 Apr 2021 00:18:06 +0000</pubDate>
      <link>https://dev.to/marianoramborger/aws-dev-associate-ec2-part-1-5flg</link>
      <guid>https://dev.to/marianoramborger/aws-dev-associate-ec2-part-1-5flg</guid>
      <description>&lt;p&gt;For the past few months I've been studying to take the AWS developer associate certification. &lt;br&gt;
I had a great time learning the many services offered by the platform, but as the date of the exam crept ever closer, I realized that my notes (almost 250 pages!) were really aching for some order.&lt;br&gt;
Of course I could just revise them. Trim the least-important bits. Highlight the key parts. Fight against the never-ending tide of permanently bolded text that reigns over the document from page 36 onwards.&lt;br&gt;
But maybe a better course of action would be to share what I have learnt. Perhaps the feeling of being observed will help me to avoid bolding and highlighting every single line of text. I guess time will tell.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So what is Amazon Web Services?&lt;/strong&gt;&lt;br&gt;
Chances are, if you are reading this post, you are aware of what AWS is. But for completeness’ sake, AWS is Amazon’s cloud platform, offering over 200 on-demand services with affordable pay-as-you-go pricing. These services include (but are not limited to): computing, storage, databases, networking and data analytics.&lt;br&gt;
I chose to start this series with a computing service (EC2) because, despite Amazon’s excellent serverless technologies, EC2 is still one of the most comfortable and intuitive ways to run code on the cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elastic Compute Cloud&lt;/strong&gt;&lt;br&gt;
Elastic Compute Cloud (EC2) is a service that provides computing capacity on the cloud, which can automatically shrink or grow on demand.&lt;br&gt;
Think of it as a virtual machine hosted by AWS that you can set up in a matter of minutes. &lt;/p&gt;

&lt;p&gt;So we’ll do that!&lt;br&gt;
In this post we will launch an EC2 instance, and in part 2 we’ll use it to set up a web server.&lt;/p&gt;

&lt;p&gt;We start by logging into the AWS console. AWS has an impressive free tier that lets us play with lots of its services without paying a single dime. &lt;/p&gt;

&lt;p&gt;We’ll go straight to EC2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B6CYROVd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttc4fylx3jgk5q9xomj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B6CYROVd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttc4fylx3jgk5q9xomj0.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will lead us into the EC2 dashboard, where we can create and manage our EC2 instances, as well as other features that are outside the scope of today’s post.&lt;br&gt;
From here we can select “Launch instance"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JL60Fqll--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6zeybvycekdg6s94mr3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JL60Fqll--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6zeybvycekdg6s94mr3w.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;&lt;br&gt;
Here we can pick an AMI. An AMI, or Amazon Machine Image, is simply a template containing the software configuration required to launch an EC2 instance. &lt;br&gt;
Amazon offers a nice default selection, but you can also visit the AWS Marketplace, use Community AMIs or even create your own.&lt;/p&gt;

&lt;p&gt;We’ll select Amazon Linux 2 AMI, a lightweight, battle-tested AMI which is suitable for most of your demo and educational needs. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;&lt;br&gt;
In this section we have to choose an instance type, and we get our pick from a rather long list with varying levels of performance. Since this is a rather simple demonstration we’ll go with the t2.micro, and press next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;&lt;br&gt;
For step 3, we can leave all the defaults this time around, but do make sure that “Auto-Assign public IP” is set to “enabled”, since we’ll want to log onto this instance from our local computer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_QiXDFf8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ltj961fakwuockkiqe7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_QiXDFf8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ltj961fakwuockkiqe7b.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt; &lt;br&gt;
Here we can configure Elastic Block Store (EBS) volumes. EBS volumes are scalable, highly available volumes that can be attached to EC2 instances. They come in different varieties, including:&lt;/p&gt;

&lt;p&gt;• The General Purpose SSD family (gp 2-3): Good price/performance balance. Great for development and operations where latency is not paramount.&lt;/p&gt;

&lt;p&gt;• The Provisioned IOPS SSD family (io 1-2): Higher durability and more input/output operations per second than the gp family, albeit a bit more expensive.&lt;/p&gt;

&lt;p&gt;For your root EBS volume you’ll only be able to pick between the gp &amp;amp; io SSD families. However, for subsequent volumes, you can also pick HDD options:&lt;/p&gt;

&lt;p&gt;• Throughput Optimized HDD (st1): A pretty low-cost HDD option, great for huge amounts of frequently accessed data.&lt;/p&gt;

&lt;p&gt;• Cold HDD (sc1): A super low-cost option which is great for rarely-accessed data.&lt;/p&gt;

&lt;p&gt;The default gp2 will suffice for our example, so we’ll press next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;&lt;br&gt;
On Step 5 we can add Tags to our instance. Tags are key-value pairs that we can apply to our instances or EBSs, to make it easy to refer to them from other services. However, this time around we’ll add none and continue onto the next step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt;&lt;br&gt;
This step lets us configure a Security Group, which acts as a virtual firewall defining which ports will accept traffic. Since the aim of part 2 is to use this instance as a web server, we’ll add port 80 to enable HTTP traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vz0buoMl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flsav27blmz4pdeavfly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vz0buoMl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flsav27blmz4pdeavfly.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, we can review and launch our instance. But before the final push, we’ll have to configure a key pair for this instance. This will allow us to connect to our newly-created instance via SSH. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Jm36bOQ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1i0kdy4m23w64ih9tn1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jm36bOQ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1i0kdy4m23w64ih9tn1g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll have to download a .pem file containing the keys. Store it well because you &lt;strong&gt;won’t be able to download it again&lt;/strong&gt;.  After doing that, we’ll have our EC2 up and running.&lt;/p&gt;
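
&lt;p&gt;As a side note, the whole wizard can be condensed into a single AWS CLI command. A sketch (the AMI ID varies by region, and the key pair and security group must already exist):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 run-instances \
    --image-id [ami-id] \
    --instance-type t2.micro \
    --key-name [your-key-pair] \
    --security-group-ids [sg-id] \
    --associate-public-ip-address
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;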

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZBhOWnIJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/divpz9wp827joszr6fdf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZBhOWnIJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/divpz9wp827joszr6fdf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In part 2 we’ll use this instance to set up a very simple web server.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
