<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohamed Latfalla</title>
    <description>The latest articles on DEV Community by Mohamed Latfalla (@imohd23).</description>
    <link>https://dev.to/imohd23</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F94751%2F9635c966-9860-4ac1-99ff-0598c52ee67f.JPG</url>
      <title>DEV Community: Mohamed Latfalla</title>
      <link>https://dev.to/imohd23</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/imohd23"/>
    <language>en</language>
    <item>
      <title>How to Handle Asynchronous Requests in AWS, Part 1</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Wed, 19 Oct 2022 05:58:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-handle-asynchronies-requests-in-aws-part-1-3jam</link>
      <guid>https://dev.to/aws-builders/how-to-handle-asynchronies-requests-in-aws-part-1-3jam</guid>
      <description>&lt;p&gt;One of the most interesting architectures nowadays is event-driven. It consists of an event producer and action. So, if we imagined that I uploaded an excel sheet to S3, I want a set of actions to be performed agains the records that are in this file. Pretty simple, event-driven architecture will solve the issue. &lt;/p&gt;

&lt;p&gt;But what makes this flow somewhat hard to maintain is that once it starts, it keeps moving. Imagine the Excel file holds 500k+ records and an unhandled error hits record number 200,000. What would you do? Reload the file and waste all the computing power that already processed 199,999 records? That sounds expensive and frustrating.&lt;/p&gt;

&lt;p&gt;Luckily, there are a number of ways to handle this issue. Allow me to present “a way” to solve it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m6jv227H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mqw5m2ljfnjb65q5z88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m6jv227H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mqw5m2ljfnjb65q5z88.png" alt="Image description" width="696" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Does it seem simple and repetitive? Yes, it is. &lt;/p&gt;

&lt;p&gt;Let’s assume you uploaded an Excel file to S3, and you want all the rows cleaned of extra spaces, some data adjustments performed, and the records inserted into a database.&lt;/p&gt;

&lt;p&gt;So let me walk you through the record lifecycle, and then explain why it was designed this way.&lt;/p&gt;

&lt;p&gt;1- The file gets uploaded, and the S3 event triggers the Data Generator function. &lt;br&gt;
2- The Data Generator function passes all the records into Queue 1. Once the visibility timeout and delivery delay are reached, Function 1 starts pulling batches from the queue. &lt;br&gt;
3- If a record hits an unhandled error, the whole batch is sent to DLQ 1. Why is that? Because the function pulled a whole batch, not a single record, and the batch stays in the queue until Lambda tells the queue it is done and the batch can be deleted. Otherwise, the batch is delivered to Queue 2 and deleted from Queue 1 after Function 1 returns success.&lt;/p&gt;

&lt;p&gt;4- The same scenario repeats with Queue 2, Function 2, and DLQ 2.&lt;/p&gt;
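&lt;p&gt;As a rough sketch of steps 2 and 3, here is what Function 1 could look like. The queue URL and record fields are placeholders of my own, not from the original project:&lt;/p&gt;

```python
import json

QUEUE_2_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/queue-2"  # placeholder

def clean_record(record: dict) -> dict:
    """Trim surrounding whitespace from every string value in a row."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def handler(event, context):
    import boto3  # imported lazily so the sketch can be read without AWS installed
    sqs = boto3.client("sqs")
    for message in event["Records"]:  # the SQS event source delivers a batch
        row = clean_record(json.loads(message["body"]))
        # An unhandled exception here fails the whole batch: it stays in
        # Queue 1 and, after maxReceiveCount is exceeded, lands in DLQ 1.
        sqs.send_message(QueueUrl=QUEUE_2_URL, MessageBody=json.dumps(row))
    # Returning normally reports success, so SQS deletes the batch from Queue 1.
    return {"status": "ok"}
```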

&lt;p&gt;Now let me tell you why this design achieves the goal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1- Event-Driven:&lt;/strong&gt;&lt;br&gt;
As you can see, no action is performed unless the trigger requirements are satisfied. Uploading the file to S3 is the flow generator; without it, the whole flow stays idle. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2- A smaller batch to retry:&lt;/strong&gt;&lt;br&gt;
SQS FIFO (First In, First Out) supports up to 300 messages per second, or 3,000 per second when messages are sent in batches. Several factors shape the batch size (the interval at which the Data Generator passes messages, the visibility timeout, and the delivery delay). Handling 300 messages is far more manageable than reprocessing the whole file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3- DLQ is the saviour:&lt;/strong&gt;&lt;br&gt;
Thanks to the DLQ, unhandled records can be isolated and dealt with separately. Some records might contain special characters, a different format, or even empty strings, which can toast all the processed data if it is not held somewhere safe (like a cache, or already inserted into the DB). Once records reach the DLQ, you have the freedom to do whatever you want with that batch: reprocess it, for example, if you exceeded the Lambda concurrency limit or hit an unhandled error.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4- Price-wise? Justifiable:&lt;/strong&gt;&lt;br&gt;
The price of 1 million requests ($0.50) is nothing compared with the benefits we get from it: it is what we pay to isolate the records that clearly show us the gaps and code mistakes in how we handle our data. Without it, you will pay far more for Lambda on unnecessary reruns, and A LOT MORE for CloudWatch PutLogEvents calls, which, by the way, get expensive if you don't use them wisely. &lt;/p&gt;

&lt;p&gt;To conclude this short article, I highly encourage anyone who works with serverless to invest more in this part of the architecture, because the money and time you spend handling these small yet expensive errors is totally worth it. I will bring Part 2 to show you how to make it more dynamic with EventBridge. Until then, be safe.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>eventdriven</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How I Created an EC2 Instance by Voice</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Wed, 08 Jun 2022 01:55:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-did-i-created-ec2-by-voice-2p54</link>
      <guid>https://dev.to/aws-builders/how-did-i-created-ec2-by-voice-2p54</guid>
      <description>&lt;p&gt;AWS is amazing, the services and the interconnection parts makes it fun and really enjoyable to work with, even for fun projects like this.&lt;br&gt;
Do you want to try it? &lt;a href="https://github.com/imohd23/voice-to-infra"&gt;Click here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, I was bored and decided: I want to launch an EC2 instance by voice! Pretty simple. The diagram shows how:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CxzSEzhN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n0077zugbj3rm9o3aq39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CxzSEzhN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n0077zugbj3rm9o3aq39.png" alt="Image description" width="880" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram shows the whole story, but let's describe it a bit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1- The record:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first step is to record a unified statement that you want the transcription job to turn into text, which the function will then process.&lt;br&gt;
For this example, I used a website to record my voice and download it as an MP3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2- Upload the file:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once I had the recording, I uploaded it to S3 and let it work. I configured my bucket to send the S3 event payload to a Lambda function I created to handle whatever this bucket action passes along.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ua_zipXg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vwgu2mhkknb3mdj53bin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ua_zipXg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vwgu2mhkknb3mdj53bin.png" alt="Image description" width="880" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then it's Lambda and Transcribe time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3- Lambda time:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the payload triggers the function, I extract what I need from it and pass it to Transcribe. These are straightforward steps.&lt;/p&gt;

&lt;p&gt;I let Transcribe identify the language because I didn't know which accent the recorded voice would have. If I had recorded it myself, I believe it would have had a hard time understanding the Arabic accent of my English.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;transcribeJob = transcribe.start_transcription_job(
                TranscriptionJobName=transcribeJobName,
                Media={
                     'MediaFileUri': 's3://'+bucketName+'/'+fileName
                       },
                IdentifyLanguage= True
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I wait for Transcribe to finish its job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4- Transcribe:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Transcribe is a fast service; granted, my recording is short, but the response comes back relatively fast.&lt;br&gt;
So I poll the job and look for &lt;code&gt;TranscriptionJobStatus&lt;/code&gt;. Once I get &lt;code&gt;Completed&lt;/code&gt;, I get the link that holds the text for my job.&lt;/p&gt;
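&lt;p&gt;The waiting step can be sketched roughly like this; the polling interval and helper names are my own, and the status values come from the Transcribe API:&lt;/p&gt;

```python
import time

def extract_transcript(job_result: dict) -> str:
    """Pull the plain text out of the result JSON that Transcribe produces."""
    return job_result["results"]["transcripts"][0]["transcript"]

def wait_for_job(job_name: str, delay: int = 5, attempts: int = 24) -> str:
    """Poll until the job finishes, then return the transcript file URI."""
    import boto3  # lazy import so the sketch is readable without AWS installed
    transcribe = boto3.client("transcribe")
    for _ in range(attempts):
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        status = job["TranscriptionJob"]["TranscriptionJobStatus"]
        if status == "COMPLETED":
            return job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
        if status == "FAILED":
            raise RuntimeError(job["TranscriptionJob"].get("FailureReason", "failed"))
        time.sleep(delay)
    raise TimeoutError(f"{job_name} did not finish in time")
```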

&lt;p&gt;This is an example from the console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6MPNtPKq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u28jsaix44bhy7zq4w1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6MPNtPKq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u28jsaix44bhy7zq4w1n.png" alt="Image description" width="880" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I do some basic verification on the words, since I set the function up to look for keywords like Create and Option to identify the action I want to perform.&lt;/p&gt;

&lt;p&gt;The next step is to check which resource to create. I hardcoded it, since I did the minimum for this example. But, as always, the sky is the limit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5- Creation time:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once all the previous steps are done, I loop over the pre-configured list I made and fetch the parameters to pass into the command. In my case, that command creates an EC2 instance.&lt;/p&gt;
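&lt;p&gt;A minimal sketch of that loop, assuming a hypothetical pre-config list (the AMI ID and option names are placeholders, not the project's real values):&lt;/p&gt;

```python
# Hypothetical pre-config list; the AMI ID and option names are placeholders.
PRESETS = {
    "option one": {"ImageId": "ami-12345678", "InstanceType": "t3.micro"},
}

def build_run_args(option: str) -> dict:
    """Translate a spoken option into run_instances keyword arguments."""
    preset = PRESETS[option.lower()]
    return {**preset, "MinCount": 1, "MaxCount": 1}

def launch(option: str) -> str:
    """Create the instance and return its ID."""
    import boto3  # lazy import so the sketch is readable without AWS installed
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(**build_run_args(option))
    return resp["Instances"][0]["InstanceId"]
```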

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--86RDI8in--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/or70ss5g3xrw95xk7mgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--86RDI8in--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/or70ss5g3xrw95xk7mgo.png" alt="Image description" width="880" height="30"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And there you go: an instance was created from a voice recording.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To conclude:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are many things you can do with event-driven architecture. This is a small, fun project, a waste of time for some, but great evidence of how powerful this kind of architecture is. I triggered Lambda executions via SMS before, and it was fun too. Check it out if you're interested in this kind of unusual use of AWS Lambda.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Is AWS Lambda Good for ETL Jobs?</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Sat, 07 May 2022 02:04:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/does-aws-lambda-good-for-etl-jobs-10i8</link>
      <guid>https://dev.to/aws-builders/does-aws-lambda-good-for-etl-jobs-10i8</guid>
      <description>&lt;p&gt;ETL stands for: “Extract, Transform, Load”, which is the process of dealing with a series of data in a computing unit. &lt;br&gt;
Since AWS Lambda is a computing unit, cheap to use, and versatile with what you can achieve or combine with, makes it an appealing option for this kind of jobs. But does it?&lt;/p&gt;

&lt;h2&gt;
  
  
  1- What do ETL jobs need to finish successfully?
&lt;/h2&gt;

&lt;p&gt;Let’s assume you have an Excel sheet with a huge number of rows and a respectable number of columns, and you need to clean, transform, and store these rows in a specific order or shape. You need certain components to achieve this task. &lt;/p&gt;

&lt;p&gt;AWS Lambda provides these pieces by connecting with storage services like S3 and Glacier. You can also use EventBridge to create a series of actions that fit the custom schemas you’ve configured. DynamoDB offers a NoSQL option with pretty fast connectivity. SQS queues the data into the function and catches unprocessed data in the DLQ for later adjustment. &lt;/p&gt;

&lt;p&gt;So we can be sure the underlying architecture and connectivity are there for Lambda to do the job. &lt;/p&gt;

&lt;h2&gt;
  
  
  2- Can only Lambda do the job?
&lt;/h2&gt;

&lt;p&gt;Of course not. AWS has another service called Glue, a data-integration service built for exactly these types of operations. It has its own set of features and options, but it cannot be compared to Lambda when it comes to connectivity; Lambda has more options. Yes, Lambda is a computing unit and Glue is declared a data-integration service. But what we’re comparing here is the use case itself, not the limitations.&lt;/p&gt;

&lt;p&gt;What can be done in Lambda can be done on any EC2 family; you just need to find the right family for the job. But keep in mind that EC2 is customer-managed, so you have to take care of it yourself.&lt;br&gt;
One small comment here: if your ETL consumes a lot of CPU, do not use the T family.&lt;/p&gt;

&lt;p&gt;Another point I want to clarify: with what Lambda can utilize, it can recover the job when an issue hits the code, such as an unexpected data type or a difference in datetime format. Lambda can redirect these rows to a DLQ, which works as a net that catches the uncommon data rows. This connectivity isn’t as clearly present in other services, which might make you rerun the whole file after fixing the code manually. Trust me, I’ve been there. &lt;/p&gt;
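&lt;p&gt;One way to get this row-level isolation is Lambda's partial batch response, assuming the SQS event source mapping has ReportBatchItemFailures enabled (the transform itself is a placeholder of mine):&lt;/p&gt;

```python
import json

def transform_row(body: str) -> dict:
    """Placeholder transform; raises on rows the code cannot handle yet."""
    return json.loads(body)

def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            transform_row(record["body"])
        except Exception:
            # Report only the bad row; SQS retries it alone and eventually
            # moves it to the DLQ, instead of forcing a rerun of the file.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```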

&lt;h2&gt;
  
  
  3- Advantages vs Disadvantages:
&lt;/h2&gt;

&lt;p&gt;Let’s start with the disadvantages:&lt;/p&gt;

&lt;p&gt;Time: you cannot run a job for more than 15 minutes, and it takes more than that to split the file, run many functions, and maintain the integrity of the data passed along.&lt;br&gt;
Power: what you can allocate is very limited, and it is hard to know the right amount of memory (which also determines CPU allocation).&lt;br&gt;
Misconfiguration: since it's a computing unit, your code itself might be the misconfiguration. Using the wrong function in your code can lead to many issues that affect memory allocation and create bottlenecks. &lt;/p&gt;

&lt;p&gt;Let’s lift the spirit a bit: &lt;/p&gt;

&lt;p&gt;Less configuration: since it's a managed service, you don’t need to think about service availability or whether another process will use the computing power your job needs. It’s all taken care of.&lt;br&gt;
Ready runtimes: if you need Python, just select it and all the needed tools and services are ready to use. As simple as that. &lt;br&gt;
Sidekick services: you can use a lot of services to help you achieve the goal easily, thanks to the SDK. &lt;/p&gt;

&lt;h2&gt;
  
  
  So, the question is: does it work for ETL?
&lt;/h2&gt;

&lt;p&gt;The answer is: &lt;strong&gt;it depends&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;It depends on what you are doing, what the expected outcome is, and how you utilize it. It can be very costly if you’re using a lot of surrounding services, or if your code takes a long time to run. All these factors can influence your decision.&lt;/p&gt;

&lt;p&gt;There are no perfect services for ETL; there are services that can do the job. Do you know what kind of job needs to be accomplished? Can you picture the process? Then you can answer this question. &lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>programming</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Create Lambda Layers in AWS Lambda</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Tue, 28 Dec 2021 22:51:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-lambda-layers-in-aws-lambda-5h1g</link>
      <guid>https://dev.to/aws-builders/create-lambda-layers-in-aws-lambda-5h1g</guid>
      <description>&lt;p&gt;For a while, I struggled when it comes to make layers for my functions. I used to download them locally, zip them, upload into S3 when their sizes are big and then create a version. This process takes a long time and the chances you make a defected layer is high when you're using Windows or Mac because of some layers that compile binaries when you download them.&lt;/p&gt;

&lt;p&gt;I wrote an article back in 2019 on how to do that with the help of Docker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/r/?url=https%3A%2F%2Faws.plainenglish.io%2Fhow-to-compile-resources-for-aws-lambda-f46fadc03290" rel="noopener noreferrer"&gt;How to compile resources for AWS Lambda&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since then, it still gets reads, which triggered the need for a new, simple, and extremely fast way to use this amazing feature: Lambda Layers.&lt;/p&gt;

&lt;p&gt;I will walk you through what it does and how you can set it up in your account. Please note that this process currently only supports Python 3.8.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does it do?
&lt;/h2&gt;

&lt;p&gt;This script covers 3 main actions: creating a new layer, updating an existing one, and reading what's inside your latest layer version.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a new layer:
&lt;/h3&gt;

&lt;p&gt;Because of the struggle I mentioned at the start, this process is time-consuming when done the manual, traditional way. So, given some key information, the script will create the directory structure, install the libraries with pip, calculate the directory size to avoid exceeding the layer limit, zip it, upload it to a newly created S3 bucket (or an existing one if you have it), and finally publish the new layer.&lt;/p&gt;

&lt;p&gt;These steps are the minimum needed to create a new layer. Please note that some values are hardcoded; they could easily be made dynamic, but that's out of scope (currently).&lt;/p&gt;
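&lt;p&gt;The create flow could be sketched like this; the paths, S3 key, and helper names are my assumptions, not the exact script:&lt;/p&gt;

```python
import os
import shutil
import subprocess

MAX_UNZIPPED = 250 * 1024 * 1024  # Lambda's unzipped deployment size limit

def under_limit(total_bytes: int) -> bool:
    return total_bytes <= MAX_UNZIPPED

def dir_size(path: str) -> int:
    return sum(
        os.path.getsize(os.path.join(root, f))
        for root, _, files in os.walk(path)
        for f in files
    )

def build_layer(name: str, bucket: str, libraries: list) -> dict:
    """pip-install into python/, size-check, zip, upload to S3, publish."""
    import boto3  # lazy import so the sketch is readable without AWS installed
    workdir = f"/tmp/{name}/python"  # layers expect a top-level python/ folder
    subprocess.run(["pip", "install", "--target", workdir, *libraries], check=True)
    if not under_limit(dir_size(f"/tmp/{name}")):
        raise RuntimeError("layer would exceed the unzipped size limit")
    archive = shutil.make_archive(f"/tmp/{name}", "zip", f"/tmp/{name}")
    key = f"layers/{name}.zip"
    boto3.client("s3").upload_file(archive, bucket, key)
    return boto3.client("lambda").publish_layer_version(
        LayerName=name,
        Content={"S3Bucket": bucket, "S3Key": key},
        CompatibleRuntimes=["python3.8"],
    )
```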

&lt;h3&gt;
  
  
  Update existing layer:
&lt;/h3&gt;

&lt;p&gt;Managing an existing layer can be hard, as you need many steps to keep the existing libraries and add new ones. This script will fetch the zip file referenced by the latest version of your layer, download it, add to it, upload it to S3 again, and publish a new layer version. &lt;/p&gt;

&lt;h3&gt;
  
  
  Read Layer content:
&lt;/h3&gt;

&lt;p&gt;Because of the trouble log4j caused a couple of weeks ago, and the threat affected resources can pose, you need to monitor your resources and update them accordingly. This action will get the latest layer version's zip file, extract it, and use pip to check what's inside and which versions you have. This can also help you keep library versions supported.&lt;/p&gt;

&lt;p&gt;Enough talking, let's dive into it:&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparation and execution:
&lt;/h2&gt;

&lt;p&gt;We will go through the steps in order:&lt;/p&gt;

&lt;h3&gt;
  
  
  S3 steps (if you don't have a bucket already):
&lt;/h3&gt;

&lt;p&gt;1- Login into your AWS account and go to S3.&lt;/p&gt;

&lt;p&gt;2- Create a new S3 bucket, keep it in the same region you work in.&lt;/p&gt;

&lt;p&gt;3- Set it up as you wish; there are no red lines here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmz8t97xsm8sz618qmkd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmz8t97xsm8sz618qmkd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Lambda steps:
&lt;/h3&gt;

&lt;p&gt;1- Go to lambda console and create a new function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp32ljvl1f0amegi22yl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp32ljvl1f0amegi22yl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- Open the function -&amp;gt; Click on Configuration -&amp;gt; Click on Permissions -&amp;gt; click on Role Name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbejg48ji2zcih2rfbe3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbejg48ji2zcih2rfbe3l.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Click on Policy Name -&amp;gt; Edit policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4avurwsr8ph78rpn035g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4avurwsr8ph78rpn035g.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Paste this policy (edit resource as you wish) -&amp;gt; save it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "lambda:ListFunctions",
                "lambda:ListLayerVersions",
                "lambda:ListLayers"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "lambda:GetLayerVersion",
                "lambda:DeleteLayerVersion",
                "lambda:AddLayerVersionPermission"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "logs:CreateLogStream",
                "lambda:PublishLayerVersion",
                "s3:ListBucket",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5- Go to GitHub and get the code -&amp;gt; copy the whole code in &lt;code&gt;code.py&lt;/code&gt; and paste it into your function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2Fimohd23%2Flambda_layer_maker_python3.8" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh69yjajqpwt239lvt5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh69yjajqpwt239lvt5e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6- Click on Test and paste this JSON (all fields are required)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "layer_name": "LAYER NAME",
  "s3_bucket": "YOUR S3 BUCKET",
  "libraries": ["LIBRARIES"],
  "action": "create_new"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;NOTE: the action key has 3 valid values: create_new, update, read_only&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;7- Testing the function is what triggers it, so give it a try.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsizudqprke6cnpuyaqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsizudqprke6cnpuyaqg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is proof of the script's executions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgi0anu5kpynyyi1cywxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgi0anu5kpynyyi1cywxl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to adjust the code as you wish, and let me know if you have any issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;This is one way of doing this procedure; I'm pretty sure you have your own way too. Let me know: is it worth it? Do you know how to code in Node.js or Java and wish to see this script take another turn and support other languages? I'd be thrilled if that happened.&lt;/p&gt;

&lt;p&gt;Stay Safe.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>automation</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Secure Your AWS Serverless Application?</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Thu, 08 Jul 2021 05:15:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-secure-your-aws-serverless-application-5h03</link>
      <guid>https://dev.to/aws-builders/how-to-secure-your-aws-serverless-application-5h03</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffei64lnui76xp81c1ql5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffei64lnui76xp81c1ql5.jpg" alt="security"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Security is no joke; it is one of the key things you have to visit whenever you’re architecting a new serverless application.&lt;/p&gt;

&lt;p&gt;A serverless application is basically an idea, like any other application: you have a logic. You need computing power to execute that logic, object storage in case you need it, a place to store your information, and a way to tell your logic: go, go, go!&lt;/p&gt;

&lt;p&gt;Pretty simple!&lt;/p&gt;

&lt;p&gt;The beauty of serverless is that we don’t care where or why our logic is stored the way it is, although information security teams would get really mad reading that statement. Anyway, what really matters for us as cloud developers, as I like to call us, is that our logic gets executed and customers are happy. A win-win situation!&lt;br&gt;
But I wish it were that way. We all need to care about our applications' security. Some people would say: Security!! Hack! We got exposed! Well, sometimes these things happen. But you, as a cloud developer, can help prevent them from happening. &lt;/p&gt;

&lt;p&gt;Let’s dive into it together, shall we?&lt;/p&gt;

&lt;h2&gt;
  
  
  Standard Services:
&lt;/h2&gt;

&lt;p&gt;Most basic Serverless applications would need the following services:&lt;/p&gt;

&lt;p&gt;1- Execution endpoint -&amp;gt; API GW&lt;/p&gt;

&lt;p&gt;2- Object storage -&amp;gt; S3&lt;/p&gt;

&lt;p&gt;3- Computing unit -&amp;gt; Lambda&lt;/p&gt;

&lt;p&gt;4- Records storage -&amp;gt; DynamoDB&lt;/p&gt;

&lt;p&gt;Of course you can have more than this, but let’s keep focusing on the main parts that shape the basic Serverless application.&lt;/p&gt;

&lt;p&gt;All these services are managed by AWS: patching, updating or deprecating runtimes, and scaling services based on our usage. All the major tasks are handled by them, which is the soul of this architecture: focusing on developing and leaving operational tasks to them. I’m really fine with that. But they can’t do the job without us, the cloud developers.&lt;/p&gt;

&lt;p&gt;We have to care about our applications. We must secure them, which, in fact, is pretty simple.&lt;/p&gt;

&lt;p&gt;Securing your serverless application starts at the beginning: best practices.&lt;/p&gt;

&lt;p&gt;Wait, Mohamed, are you telling me that following the best practices per service would secure my application? YESSS.&lt;/p&gt;

&lt;p&gt;Let’s take a closer look at some of these steps, service by service:&lt;/p&gt;

&lt;h3&gt;
  
  
  API Gateway (API GW):
&lt;/h3&gt;

&lt;h4&gt;
  
  
  A- Use WAF:
&lt;/h4&gt;

&lt;p&gt;AWS provides a service called WAF, a Web Application Firewall, a self-explanatory name that tells you why to use it. It protects against harmful usage of your resources that might affect the availability of your application and consume its resources, interrupting the application's usability.&lt;/p&gt;

&lt;p&gt;Use WAF to monitor your API GW executions and act based on specific metrics. Using pre-defined rules saves you time planning and testing techniques against common online threats.&lt;/p&gt;

&lt;h4&gt;
  
  
  B- Configure Endpoint Throttling:
&lt;/h4&gt;

&lt;p&gt;You know your application and your customers: if an endpoint is getting unusual execution volumes, something is wrong. Setting a throttling threshold stops the exhaustion of your resources, especially if the endpoint executes Lambda, which we will visit in a minute.&lt;/p&gt;

&lt;h4&gt;
  
  
  C- Validate Your API GW Execution:
&lt;/h4&gt;

&lt;p&gt;One of the major risks in Serverless applications is broken authentication. If your endpoints are being called and consumed by an unauthorized actor, you’re in big trouble.&lt;/p&gt;

&lt;p&gt;Use services like AWS Cognito to authenticate the caller. This way, you have a better understanding of your customers and you ensure that no one executes a function they’re not supposed to.&lt;/p&gt;

&lt;h3&gt;
  
  
  S3:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  A- Block Public Access (in some cases):
&lt;/h4&gt;

&lt;p&gt;If you’re hosting sensitive data like identity cards, passports or secret reports, you have to block public access to the bucket. This will prevent any attempt to expose your files.&lt;/p&gt;

&lt;p&gt;But this recommendation won’t work if you’re hosting publicly available data in your bucket, as that needs public access. Yes, there are options to share files publicly while the bucket itself is blocked. But why the hassle? If you’ve reached that point, you need help with your S3 strategy.&lt;/p&gt;

&lt;h4&gt;
  
  
  B- Encrypt At-Rest And On-Transit Files:
&lt;/h4&gt;

&lt;p&gt;The 3 main operations against S3 are putting an object, storing it, and eventually retrieving it. To do these 3 operations securely, consider the following:&lt;/p&gt;

&lt;p&gt;To upload securely into S3, use a presigned URL. The reason is that you, as a cloud developer, allow the PUT action into a specific bucket for a specific file. This way, you give the API the authority to do just a single action on a single file. Less privilege, fewer concerns.&lt;/p&gt;

&lt;p&gt;To store the file, enable server-side encryption. This recommendation comes from AWS themselves, as they advise it to enhance security and privacy. Also, it comes at no additional cost. Cool, right?&lt;/p&gt;

&lt;p&gt;To share the file, again consider a presigned URL, the same concept as uploading. When you generate a presigned link for sharing a file, you get a URL that consists of the object path and an access token that is valid to retrieve only that specific file for a period of time. Once it expires, it is unusable.&lt;/p&gt;

&lt;h4&gt;
  
  
  C- IAM Role Access:
&lt;/h4&gt;

&lt;p&gt;Creating an IAM role is really one of the best “best practice” actions that you, as a cloud developer, need to get right. In the S3 case, creating one with specific actions limits the level of exposure of the bucket and its contents.&lt;/p&gt;

&lt;p&gt;Create a role that grants the specific action on the specific bucket. Why? Because this ensures that, even if the access is compromised, it won’t cause great harm to your data.&lt;/p&gt;
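&lt;p&gt;Such a least-privilege policy is just plain data — one action, one bucket prefix. The bucket name and prefix below are hypothetical:&lt;/p&gt;

```python
import json

# A least-privilege IAM policy document as plain data: one action on one
# bucket prefix. The ARN is a hypothetical example.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-uploads-bucket/incoming/*",
    }],
}
print(json.dumps(policy, indent=2))
```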

&lt;h4&gt;
  
  
  D- Bucket Policy:
&lt;/h4&gt;

&lt;p&gt;A bucket policy is a list of rules that guards the bucket from misuse, intentional or unintentional. I had a hard time understanding this concept, and some examples need whole articles to describe, but let’s summarize it with the next example:&lt;/p&gt;

&lt;p&gt;You have a bucket that should accept files generated only from resources deployed in “Core-Production-VPC”. Usually we guard our bucket with an IAM role, but in this case that’s not suitable, as anyone who has an IAM role allowing actions against this bucket would be able to perform them. So, we create an S3 bucket policy that accepts PutObject only from “Core-Production-VPC”.&lt;/p&gt;

&lt;p&gt;Who wins in this case, the IAM role or the bucket policy? Well, it’s a combination of both. If your IAM role has access to this bucket, the call will pass through. But if you haven’t fulfilled the policy requirements, then sorry, you can’t PutObject into it.&lt;/p&gt;

&lt;p&gt;A really cool way to make sure you don’t have misconfiguration or misuse of IAM roles.&lt;/p&gt;
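&lt;p&gt;One way the “Core-Production-VPC” example above could look, sketched as a plain policy document. This assumes the bucket is reached through an S3 VPC endpoint (which is what makes the aws:SourceVpc condition key available); the bucket name and VPC id are placeholders:&lt;/p&gt;

```python
# A bucket policy (plain data) denying PutObject unless the request comes
# from a given VPC via its S3 VPC endpoint. IDs/names are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPutOutsideCoreProductionVpc",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-uploads-bucket/*",
        "Condition": {"StringNotEquals": {"aws:SourceVpc": "vpc-0123456789abcdef0"}},
    }],
}
```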

&lt;h3&gt;
  
  
  Lambda:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  A- One IAM Role Per Function:
&lt;/h4&gt;

&lt;p&gt;Lambda has access to many services, and when you use the SDK, oh man, you have them all! So, should you be happy? NO, because the risks are countless.&lt;/p&gt;

&lt;p&gt;So, let’s focus on this, as it’s a really important point. Less privilege, fewer concerns. AWS advises that you limit each function with unique, well-defined IAM role privileges. This way, you can be sure the function won’t cause unexpected harm, as its boundary is well defined and known. One of the newer security risks is Event Injection. Did you test your function well? Think twice.&lt;/p&gt;
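&lt;p&gt;The “one role per function” idea can also be sketched as plain policy data — a hypothetical execution role that may only write its own logs and put items into a single table (the account id and table name are made up):&lt;/p&gt;

```python
# A narrowly-scoped execution policy for ONE function: CloudWatch logging
# plus a single DynamoDB action on a single table. ARNs are hypothetical.
function_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:*:*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        },
    ],
}
```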

&lt;h4&gt;
  
  
  B- Temporary Credentials For AWS Access:
&lt;/h4&gt;

&lt;p&gt;As we said in the previous point, Lambda can access almost all your AWS resources. Tell this to your Information Security team and run away! So, how do we set its access boundaries? In the API GW section we talked about authorization, and yes, it applies here. Say you have an application where one of your customers stores data into DynamoDB. You generate a short-lived token for this customer, scoped to the specific target. Once the time comes, the access is revoked. Simple and effective.&lt;/p&gt;
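&lt;p&gt;As a hedged sketch, that short-lived access maps onto an STS AssumeRole call. Below the call’s parameters are shown as plain data; the role ARN, session name and duration are placeholders for illustration only:&lt;/p&gt;

```python
# Parameters for an STS AssumeRole call, as plain data. The ARN, session
# name and duration are placeholders, not real values.
assume_role_params = {
    "RoleArn": "arn:aws:iam::111122223333:role/CustomerDynamoWriter",
    "RoleSessionName": "customer-42-upload",
    "DurationSeconds": 900,  # short-lived: 15 minutes
}
# With boto3 (not run here):
# creds = boto3.client("sts").assume_role(**assume_role_params)["Credentials"]
# The returned AccessKeyId/SecretAccessKey/SessionToken expire automatically.
```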

&lt;h4&gt;
  
  
  C- Limit Execution Duration
&lt;/h4&gt;

&lt;p&gt;When it comes to execution duration, it comes down to what you actually pay AWS. As Lambda is pay-as-you-use, you need to know what you’re paying for.&lt;/p&gt;

&lt;p&gt;One concern is that if your functions are under a DDoS attack (which is not good), they will be running all the time. This has financial and usability impacts. A way to reduce the effect of this kind of attack is to limit the execution duration to what your tests show. Test the function, learn how long it usually needs, and set that as a hard limit for your team. The function will time out and free up the resources. But wouldn’t this enable higher attack rates? Not necessarily, as you might face a smaller load, which can reduce the number of affected customers. How?&lt;/p&gt;

&lt;p&gt;OK, if you’re under attack, your resources are being executed at a high call rate, which could make them busy. To mitigate that, to a certain degree, limit the execution duration. This releases resources quickly and frees up room for “some” customer calls. It may not seem like a great option, but if the attacker has reached the functionality level, you don’t have that many options. Do you?&lt;/p&gt;
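&lt;p&gt;Lambda’s hard timeout itself is set in the function configuration. As a complementary sketch (my own assumption, not part of the article’s setup), a handler can also enforce its own internal time budget, so slow work fails fast before the platform timeout. This uses SIGALRM, so it is Unix-only:&lt;/p&gt;

```python
import signal


class TimeBudgetExceeded(Exception):
    """Raised when work runs past our self-imposed deadline."""


def _on_alarm(signum, frame):
    raise TimeBudgetExceeded()


def run_with_budget(seconds, fn, *args):
    """Run fn, aborting if it exceeds `seconds` (Unix only; a sketch)."""
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)  # schedule the deadline
    try:
        return fn(*args)
    finally:
        signal.alarm(0)  # always clear the pending alarm
```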

&lt;h4&gt;
  
  
  D- Coding Best Practice:
&lt;/h4&gt;

&lt;p&gt;Your coding practices can have a great impact on your security. How? Let’s find out.&lt;/p&gt;

&lt;p&gt;Let’s assume you have a wrong IAM role attached to your function, and you’re using a 3rd party library that has vulnerabilities. Guess what can happen? Always choose trusted 3rd party libraries.&lt;/p&gt;

&lt;p&gt;Or say you have a gigantic function that receives an enormous payload, the payload carries an event injection script, and you didn’t validate it. Man, you’re doomed. Always validate your payload.&lt;/p&gt;
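&lt;p&gt;A minimal payload validator, as a sketch: check types and whitelist fields before any record touches the business logic. The field names here are hypothetical:&lt;/p&gt;

```python
# Whitelist-and-type-check validation: reject unknown fields and wrong
# types before doing any work. Field names are hypothetical examples.
REQUIRED = {"record_id": str, "amount": float}


def validate(payload):
    if not isinstance(payload, dict):
        raise ValueError("payload must be an object")
    unknown = set(payload) - set(REQUIRED)
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    for field, kind in REQUIRED.items():
        if not isinstance(payload.get(field), kind):
            raise ValueError(f"missing or invalid field: {field}")
    return payload
```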

&lt;p&gt;I could keep writing about this section forever, but I believe the message is received. Best practices can save you from threats.&lt;/p&gt;

&lt;h3&gt;
  
  
  DynamoDB:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  A- Encrypt At-Rest Data:
&lt;/h4&gt;

&lt;p&gt;So, when you have important data that you need to store in your table, like sensitive information about clients or your customers’ business secrets, you definitely need to encrypt it in the table. The reason is to protect this data from unauthorized access via your API, or even from your own developers! Yes, sometimes the risk comes from inside.&lt;/p&gt;

&lt;p&gt;How do you store your keys? Well, you have many options, like Systems Manager Parameter Store, Secrets Manager, or your Lambda environment variables.&lt;/p&gt;

&lt;h4&gt;
  
  
  B- IAM Roles to Access DynamoDB:
&lt;/h4&gt;

&lt;p&gt;This point is not new: you have to lock access to your tables with an IAM role. This way, as we described earlier, you can ensure no function does anything it is not supposed to, which decreases the surface of potential harm from functions that could fall victim to an Event Injection attack.&lt;/p&gt;

&lt;h4&gt;
  
  
  C- VPC endpoint to access DynamoDB:
&lt;/h4&gt;

&lt;p&gt;If you have functions that get deployed within a specific VPC, and only they should be able to access a specific table, you can set up a policy that allows access only from the wanted VPC. Cool! This also decreases the attack surface: if one of your functions belonging to another source gets compromised, you’re good to some extent, as not everything in DynamoDB is exposed to the attacker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary:
&lt;/h3&gt;

&lt;p&gt;So, to summarize, following these steps will enhance your Serverless application security, as they cover most of the main points. As you can see, these are really not hard steps. They are well documented and they will raise your team’s standards. It is a win-win situation.&lt;/p&gt;

&lt;p&gt;Stay Safe!&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>serverless</category>
      <category>aws</category>
      <category>codequality</category>
    </item>
    <item>
      <title>How did I process half a million transactions in AWS Lambda within minutes?</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Wed, 23 Jun 2021 06:45:50 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-did-i-processed-half-a-million-transactions-in-aws-lambda-within-minutes-ch2</link>
      <guid>https://dev.to/aws-builders/how-did-i-processed-half-a-million-transactions-in-aws-lambda-within-minutes-ch2</guid>
      <description>&lt;p&gt;Processing data could be an intensive task, especially on the computing units as the read and write operations takes a lot of resources. Luckily, if you have the right tools, you can achieve it easily. But, is it worthy? Let’s find out.&lt;/p&gt;

&lt;p&gt;In this article I will share my experience on how I achieved that. It is really simple and complicated at the same time. Why? Because of how Lambda works and what you have to think of when you “code”, because that really makes a difference.&lt;/p&gt;

&lt;p&gt;Why did I think about it?&lt;/p&gt;

&lt;p&gt;A few years ago, my manager asked me to think about an architecture to process BIG volumes of records with fairly light operations: around 800k rows of data with 16 columns, where the amount of work to be done on each row isn’t complicated. So, Event-Driven Architecture!!&lt;/p&gt;

&lt;p&gt;I went through a lot of issues on how to deal with the limited resources in Lambda and how to deal with records dropping because of timeouts and OS errors. S3 was another story, too, learning how to tune it for my use-case. A dear friend who works as a senior consultant in AWS Bahrain helped me get some tools in place to achieve this promising idea. It was one of the best experiences I ever had dealing with AWS resources.&lt;/p&gt;

&lt;p&gt;Enough talking, let’s get a diagram in place.&lt;/p&gt;

&lt;p&gt;Solution Diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup13okhe5haad4opreit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup13okhe5haad4opreit.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Diagram looks scary? Trust me, it isn’t.&lt;/p&gt;

&lt;p&gt;Let me break it down for you in some steps:&lt;/p&gt;

&lt;p&gt;1- Initiate the process:&lt;/p&gt;

&lt;p&gt;Because I’m adopting a Serverless architecture, everything is event-driven: when something happens, other things act on it, and the results trigger further actions, until the end of the process.&lt;/p&gt;

&lt;p&gt;So, in our case it’s an S3 Put request. When we upload the file into S3, it stores the file inside a bucket, and once the file is fully uploaded, Lambda gets triggered with the S3 payload. Our first step is finished. What’s next?&lt;/p&gt;
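&lt;p&gt;For illustration, this is roughly what the triggered handler sees: the bucket and key are nested inside the event’s Records list. The bucket/key values below are made up:&lt;/p&gt;

```python
# Extract (bucket, key) pairs from an S3 "ObjectCreated" trigger event.
def extract_uploads(event):
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
        if r.get("eventName", "").startswith("ObjectCreated")
    ]


# A trimmed-down sample of the payload Lambda receives from S3:
sample_event = {"Records": [{
    "eventName": "ObjectCreated:Put",
    "s3": {"bucket": {"name": "ingest-bucket"}, "object": {"key": "batch.csv"}},
}]}
```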

&lt;p&gt;2- Clean data:&lt;/p&gt;

&lt;p&gt;Because we got a CSV file, some columns and rows could contain spaces or special characters that MIGHT break your code. So, clean it up.&lt;/p&gt;

&lt;p&gt;Cleaning these records prepares them for insertion. But, since we have a lot of records and the function might fail, how do we track what was added and what is left over?&lt;/p&gt;
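&lt;p&gt;The cleaning step can be as simple as this sketch — strip whitespace and drop characters that might break downstream code (exactly which characters to drop depends on your data; the NUL byte here is just an example):&lt;/p&gt;

```python
import csv
import io


def clean_rows(csv_text):
    """Strip whitespace per cell and drop problematic characters."""
    rows = []
    for row in csv.reader(io.StringIO(csv_text)):
        rows.append([cell.strip().replace("\x00", "") for cell in row])
    return rows
```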

&lt;p&gt;3- Add clean data into a queue:&lt;/p&gt;

&lt;p&gt;We will add the cleaned records into a queue. The reason is to track what has been added and what did not get added. Basically, SQS acts as an organizer: it sends small batches into Lambda, Lambda adds them into DynamoDB, then returns a success message to SQS to remove them from the queue.&lt;/p&gt;

&lt;p&gt;In case of a failed record, SQS will retry the insert operation 3 times, as per my configuration. If these 3 tries fail, it moves the record into a Dead Letter Queue (DLQ), which is another SQS queue that holds the failed records. Then, you can debug why these records never made it into DynamoDB, and they can be processed again or rejected.&lt;/p&gt;
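&lt;p&gt;A sketch of what the SQS-triggered handler could look like with per-record failure reporting, so only the failed messages are retried (and eventually land in the DLQ). This pattern assumes “ReportBatchItemFailures” is enabled on the event source mapping; the insert function is a stand-in for the real DynamoDB write:&lt;/p&gt;

```python
# SQS batch handler that reports which records failed, so SQS retries only
# those. `insert_into_dynamodb` is injected to keep the sketch testable.
def handler(event, context, insert_into_dynamodb):
    failures = []
    for record in event["Records"]:
        try:
            insert_into_dynamodb(record["body"])
        except Exception:
            # Report this message id back; SQS will retry it (then DLQ it).
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```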

&lt;p&gt;4- DynamoDB:&lt;/p&gt;

&lt;p&gt;Because we are trying to process massive chunks of data, we need some sort of database that can handle the extreme load of records. DynamoDB solves this. There were a lot of experiments on how to handle the number of records and how to behave with the limited read/write throughput, as a write unit can handle 1kb of data. DynamoDB on-demand solves the issue.&lt;/p&gt;

&lt;p&gt;As per the AWS documentation, on-demand DynamoDB throughput is the option when you cannot predict your workload, because it prepares the max throughput just in case it’s needed.&lt;/p&gt;

&lt;p&gt;We moved the records from csv to DynamoDB, then what?&lt;/p&gt;

&lt;p&gt;5- Stream records to SQS:&lt;/p&gt;

&lt;p&gt;DynamoDB is a really good event source for Lambda. When you enable Streams, you can specify a Lambda function that reacts to the payloads passed from it. The catch is that you need to act based on the type of record; here we are dealing with newly added records. So, when we verify the tag, we add the record into another SQS queue.&lt;/p&gt;

&lt;p&gt;The reason for this queue is the consistency of record delivery. You add the record once, capture it, and add it into the queue so you can deal with it. Otherwise, you would have to scan the table to get unprocessed records and process them. Why the hassle? Let the queue deal with it.&lt;/p&gt;
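&lt;p&gt;Sketched in code, the stream handler just filters for INSERT events and forwards each new image to the processing queue. The send function is a stand-in for the real SQS send, and the sample event is trimmed down:&lt;/p&gt;

```python
# Forward only INSERT events from a DynamoDB stream to the next queue.
# `send_to_queue` stands in for the real SQS send_message call.
def forward_new_records(event, send_to_queue):
    sent = 0
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":
            send_to_queue(record["dynamodb"]["NewImage"])
            sent += 1
    return sent
```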

&lt;p&gt;6- Process data:&lt;/p&gt;

&lt;p&gt;We have reached the last stages of the record lifecycle in this architecture. When records reach the Process Queue, it passes them to Lambda in batches to be processed and then passed to another queue. As I clarified earlier, for consistency purposes.&lt;/p&gt;

&lt;p&gt;7- Update the processed record:&lt;/p&gt;

&lt;p&gt;Finally, the record will be grabbed from the Finished queue and passed into the Lambda function that will update the record with the processed information. If records failed to be passed, DLQ will gather them for your further debugging and actions.&lt;/p&gt;

&lt;p&gt;Challenges:&lt;/p&gt;

&lt;p&gt;These points seem like a straight-forward scenario, BUT, it’s not. Let me walk you through some problems and how they got solved.&lt;/p&gt;

&lt;p&gt;1- Lambda Lambda Lambda:&lt;/p&gt;

&lt;p&gt;Lambda is the key player here, and we have limited time to execute the logic in the code. How can you ensure the records were read from the file, cleaned, and added into the queue? It’s hard, and what you need is speed. I code in Python, and I used the multiprocessing library’s Process class to use every single available processing unit in Lambda. This made my process clean (in some tests) 558k transactions in 1:30 min! It was really fast. Again, it is not a straight-forward scenario: Lambda can handle around 500 processes when you allocate the max memory, and any further process will raise “OS Error 38: too many files open”. Why did I face this issue? Because I joined all running processes and was not closing finished ones. So, I run half the batch at a time and loop over the running processes; if one has finished, I force it to close. Problem solved…&lt;/p&gt;
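&lt;p&gt;The wave pattern described above can be sketched like this — a simplification of the article’s approach, assuming a placeholder cleaning function and small wave sizes for demonstration (in Lambda the wave size would be in the hundreds):&lt;/p&gt;

```python
from multiprocessing import Process, Queue


def clean_chunk(rows, out):
    # Stand-in for the real per-chunk cleaning work.
    out.put(len(rows))


def run_in_waves(chunks, wave_size=4):
    """Start worker processes in waves, joining and closing each wave so
    finished processes release their OS resources (avoids 'too many files
    open')."""
    out = Queue()
    running = []
    for rows in chunks:
        p = Process(target=clean_chunk, args=(rows, out))
        p.start()
        running.append(p)
        if len(running) >= wave_size:
            for p in running:
                p.join()   # wait for the wave to finish
                p.close()  # release the process's resources
            running = []
    for p in running:  # drain the final partial wave
        p.join()
        p.close()
    return sum(out.get() for _ in chunks)
```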

&lt;p&gt;2- Keep an eye on CloudWatch:&lt;/p&gt;

&lt;p&gt;I made a big mistake by passing the event variable into CloudWatch even when I was running the big batches. This resulted in writing 6.6TB of data over my many tests. The CloudWatch put-log action is expensive, so use it wisely.&lt;/p&gt;

&lt;p&gt;3- DynamoDB on-demand is the keyword:&lt;/p&gt;

&lt;p&gt;I started my configuration with provisioned DynamoDB capacity, which is meant for predictable workloads. I set it to 5 read and write units, and that was an issue: out of 558k records, only 1k were inserted into my table. I raised it to 100 units and still at least 60% of the file got lost and was never added! Then I reread the documentation and noticed my issue: DynamoDB on-demand is the solution for unpredictable load. With it, all 558k+ records were added within 5 min! Pretty FAST!&lt;/p&gt;

&lt;p&gt;4- SQS can be tricky:&lt;/p&gt;

&lt;p&gt;SQS is a great service and has a lot of options and opportunities. But, you need to know the size of the batch you’re passing every time and the predicted time for your batch to finish. The reason is that when you tell SQS to wait x seconds before making a batch visible again, it might get processed multiple times. Know your code and your data, test test test, and then configure it for heavy workload.&lt;/p&gt;

&lt;p&gt;These points were my top concerns. S3 was interesting but not as complicated as I expected. But the main question: is it worth it?&lt;/p&gt;

&lt;p&gt;Everything in this life depends on conditions. If you don’t want to manage instances, or you want minimal effort, then yes, this scenario is valid for you. Keep in mind that debugging these use-cases can be intense because of how connected the steps are and how one mistake in one step can affect the following ones.&lt;/p&gt;

&lt;p&gt;Stay Safe.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>How to deploy a full environment for WordPress site via docker by using Terraform?</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Mon, 28 Dec 2020 19:52:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-deploy-a-full-environment-for-wordpress-site-via-docker-by-using-terraform-3pk3</link>
      <guid>https://dev.to/aws-builders/how-to-deploy-a-full-environment-for-wordpress-site-via-docker-by-using-terraform-3pk3</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fthh3ye42aaae2oqqokm7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fthh3ye42aaae2oqqokm7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For many users, preparing an environment in AWS to host their website/system can be tough, especially if they’re new to the Cloud and what that actually means.&lt;/p&gt;

&lt;p&gt;Also, after the preparation is done, the creation of resources might take a while too! But, what if you could do the following within a 20-minute window? Check the list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a VPC and all essentials for public and private subnets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an EFS and prepare an access point to this storage option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a launch template to replicate your configuration into multiple EC2s in different subnets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prepare your website as Docker image to have consistent replica of your main concern (your website and its DB).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It feels weird, right? Actually no, because this is the beauty of IaC and Terraform. Why don’t we go deeper? Prepare your swimming suit.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: This is a POC script and got tested, yes it can be better but for the article purpose, I guess this is a good starting point.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What you will get?
&lt;/h2&gt;

&lt;p&gt;Check the image below, do not get tricked yet, wait for the clarifications:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnpu612plzpu2ct9gpaj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnpu612plzpu2ct9gpaj3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you run the code “as it is”, you will get a load balancer placed in one of the public subnets, one EC2 machine deployed by an Auto Scaling launch template, and EFS.&lt;/p&gt;

&lt;p&gt;These things are enough to run your website, at least for this use case that I am talking about.&lt;/p&gt;

&lt;p&gt;But what actually happens when you run the script is the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;For VPC, the script will make a VPC, subnets (public and private), routing tables and internet gateway.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Auto Scaling, the script will make a launch template with what is needed to replicate the configuration into multiple instances as required, and will make sure that the minimum required instances always meet the configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Load Balancer, the script will prepare an endpoint associated with health check metrics.&lt;br&gt;
For EC2, the script will launch an instance from the configuration script that got attached to the Auto Scaling launching template, this will install and setup Docker inside the machine and will pull the project from GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For EFS, the script will create all needed resources like security group, access point and access point attachment for this storage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Is that too much? I guess not; some other systems need a lot more. But, let us agree on one thing: doing all these things manually in 20 min is exhausting.&lt;/p&gt;

&lt;h2&gt;
  
  
  The steps of deployment:
&lt;/h2&gt;

&lt;p&gt;It is really easy and needs a few clicks. Let’s start, shall we?&lt;/p&gt;

&lt;p&gt;Before we start, clone this &lt;a href="https://github.com/imohd23/Terraws-script" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  1- Do you have Terraform installed?
&lt;/h3&gt;

&lt;p&gt;First things first, check that you have Terraform installed on your machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frvl9li1uzls2gnc9btqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frvl9li1uzls2gnc9btqb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2- Do you have your AWS credentials?
&lt;/h3&gt;

&lt;p&gt;Because we are going to deploy these instructions to AWS, make sure one of these two options is available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS CLI is configured. You should only need to know which profile to use in case multiple configurations are in place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS access and secret keys, because of course we need credentials for access.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: Check &lt;code&gt;provider.tf&lt;/code&gt; once you decide which connection to use.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  3- Some necessary commands.
&lt;/h3&gt;

&lt;p&gt;Open a terminal window and navigate into the project.&lt;/p&gt;

&lt;p&gt;Because we are going to create new instances and we might need to access them, run this command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f $PWD/id_rsa 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;NOTE: Do not change the key name unless you go to &lt;code&gt;vars.tf&lt;/code&gt; and change it there also.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then, run this command to get all needed modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw1qejt8bpgsafkujudmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw1qejt8bpgsafkujudmj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4- Trigger the script.
&lt;/h3&gt;

&lt;p&gt;There is only one step left, maybe two if you want to check what will happen.&lt;/p&gt;

&lt;p&gt;To check what are the resources that will be created, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To bring all these planned resources to the life, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After all resources get aligned in the correct order, a message will ask you to approve these changes; type &lt;code&gt;yes&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgjnmxdex2hkgpmj63gs0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgjnmxdex2hkgpmj63gs0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now you can relax till the whole process is finished.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl64dibtxqv4tqkf3hvzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl64dibtxqv4tqkf3hvzi.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: After the Terraform script is done, please wait 5–10 min before checking the website. The reason is that the infrastructure is done but the instance script is still in progress; EFS mounting needs some time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To clean everything, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Important points:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Why EFS in this simple use case? Because I am using Docker to host the website and DB all together, and you have an Auto Scaling launch template. EFS ensures the consistency of your DB because the DB has a volume pointed at EFS; any new instance will use EFS as its volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At the end of the script execution, Terraform will return an ELB link, and it will not work directly. Why?? Because WordPress’s first initial page is going to be /admin to finish all needed configuration. So, go to &lt;code&gt;aws console -&amp;gt; EC2 -&amp;gt; your created&lt;/code&gt; instance and visit its public IP to finish the setup. Then all is going to be good.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Summary:
&lt;/h2&gt;

&lt;p&gt;I have no idea if this process will be helpful for anyone, but since I know how to make it, why not share it?&lt;/p&gt;

&lt;p&gt;Creating a full basic environment in AWS can take funny turns, especially for newcomers to this platform. This script gives them a few important points to think about from now on: using Docker is a really good option, using Terraform is not hard, and finally, IaC is something they need to work on.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to use EFS with AWS Lambda?</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Mon, 22 Jun 2020 18:26:33 +0000</pubDate>
      <link>https://dev.to/imohd23/how-to-use-efs-with-aws-lambda-2057</link>
      <guid>https://dev.to/imohd23/how-to-use-efs-with-aws-lambda-2057</guid>
      <description>&lt;p&gt;AWS recently launched a new feature that lets the customer make an EFS (Elastic File System) that works with Lambda. This is really cool! But, Why EFS?&lt;/p&gt;

&lt;p&gt;EFS is a storage unit that lives independently of any machine. It can be attached to other services like EC2 and can be accessed from multiple instances at once. Files inside this storage unit are accessible from any connected instance or Lambda.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Why do we need something like this to work with Lambda? An extra complicated step?&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Actually, this feature is amazing if you looked at it from different angles, let’s start with some of them:&lt;/p&gt;

&lt;p&gt;&lt;i&gt;1- Consistency:&lt;/i&gt;&lt;/p&gt;

&lt;p&gt;If you need multiple Lambdas to use (read and write) BIG files, you’ll need them to be in a place that doesn’t delay the function getting the resources, which leads to less computing power and time -&amp;gt; less money.&lt;/p&gt;

&lt;p&gt;&lt;i&gt;2- More space:&lt;/i&gt;&lt;/p&gt;

&lt;p&gt;When you’re working with files from S3 inside Lambda, you’re limited to the max local storage size of 512 MiB, which is not enough in some cases. Plus, you might need to work with the file in different processing stages, like cleaning, segmenting, processing, and exporting readable reports/formats of it. Imagine the amount of code involved in this scenario.&lt;/p&gt;

&lt;p&gt;&lt;i&gt;3- More space 2:&lt;/i&gt;&lt;/p&gt;

&lt;p&gt;Using Layers shares resources between functions, but Layers sometimes can’t handle the size of the resources and binaries you need to run the function. Using EFS gives you more room to store these resources and call them when needed.&lt;/p&gt;
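&lt;p&gt;From the function’s point of view, an attached EFS access point is just a local directory. In Lambda it appears under a mount path you choose (such as /mnt/data); in this sketch the path is read from an environment variable so it runs anywhere, and the file names are made up:&lt;/p&gt;

```python
import os

# In Lambda, the EFS access point shows up under the mount path you
# configure (e.g. /mnt/data). Overridable here so the sketch runs locally.
MOUNT_PATH = os.environ.get("EFS_MOUNT_PATH", "/tmp/efs-demo")


def save_report(name, body):
    """Write a file under the (pretend) EFS mount and return its path."""
    os.makedirs(MOUNT_PATH, exist_ok=True)
    path = os.path.join(MOUNT_PATH, name)
    with open(path, "w") as f:
        f.write(body)
    return path


def load_report(name):
    """Read a file back -- any Lambda sharing the mount sees the same file."""
    with open(os.path.join(MOUNT_PATH, name)) as f:
        return f.read()
```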

&lt;p&gt;I could list more points, but you get the idea. Let’s dive into how to use it with Lambda.&lt;/p&gt;

&lt;h2&gt;Creating EFS:&lt;/h2&gt;

&lt;p&gt;1- Open AWS console and search for “EFS”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sk01-yPi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q46ue5o8cpo47vv04lml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sk01-yPi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q46ue5o8cpo47vv04lml.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- Click on “Create file system”.&lt;/p&gt;

&lt;p&gt;3- In step 1, select your VPC. If your Lambda is configured within a VPC, choose that one; if not, remember which VPC you’ve selected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RJrk48EV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iglb8ndvjps79dt9wufn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RJrk48EV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iglb8ndvjps79dt9wufn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- In step 2, add a descriptive name for your file system. Then click Next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1FR52e5T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/or1anxwxk2oivon8fg92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1FR52e5T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/or1anxwxk2oivon8fg92.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5- In step 3, scroll down and click on “Add access point”. Then fill it in as shown in the image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S2d_XUb7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qllnu91fpj1p7asjk1x6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S2d_XUb7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qllnu91fpj1p7asjk1x6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6- Review the configuration, then click “Create file system”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T9bhiHII--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jdychu1xxehhsuvyu2x3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T9bhiHII--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jdychu1xxehhsuvyu2x3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7- Done! Wait a few seconds and your EFS will be active.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pvjq2hAp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pf70meusvf7y1c7btj7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pvjq2hAp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pf70meusvf7y1c7btj7i.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Connect it with Lambda:&lt;/h2&gt;

&lt;p&gt;1- Click on Service and search for “Lambda”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0gRnkywq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u6at0ljuni0z9hilubrn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0gRnkywq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u6at0ljuni0z9hilubrn.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- Create a new function and choose your preferred runtime language. In this article, I’ll use Python.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KZ2g6eBg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qkna43zotpx65c6j5u19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KZ2g6eBg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qkna43zotpx65c6j5u19.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Go down a little bit and you’ll see a section called “VPC”, click on “Edit”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eywUWTU7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/reph7gh5a85g1y6ahm90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eywUWTU7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/reph7gh5a85g1y6ahm90.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4- Select your VPC, and choose the Subnets and the security group. Then save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2QAh_NuL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fmjemhf8vdv3pwp0jz7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2QAh_NuL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fmjemhf8vdv3pwp0jz7n.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5- Under the VPC section, click “Add file system” in the “File system” section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M4A9na5t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kik0da12yh3y19pzm8xk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M4A9na5t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kik0da12yh3y19pzm8xk.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6- Select the EFS file system we made (remember, we gave it a descriptive name). Then choose the access point associated with it and, finally, give it a path.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;NOTE: This path needs to start with &lt;i&gt;‘/mnt/’&lt;/i&gt;. You can keep it as it is, or you can definitely use a custom folder if you want.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A_8pZIYj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d60ocs881f7rc35jmvkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A_8pZIYj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d60ocs881f7rc35jmvkg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7- A small piece of code to test if the file system is attached to the function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--plHmGlko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hz8r8ivjlwyo821qumos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--plHmGlko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hz8r8ivjlwyo821qumos.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8- Bingo!! We made it!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sdq-nWaL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1qqu57sziw0lfyz3wn6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sdq-nWaL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1qqu57sziw0lfyz3wn6g.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion:&lt;/h2&gt;

&lt;p&gt;Adding EFS to Lambda is a huge milestone for serverless architecture. Use cases that were a nightmare to accomplish before are now doable. Easy steps and a cheap price make Lambda an option that can battle EC2 in some modern use cases.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>serverless</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to trigger AWS Lambda by SMS?</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Sat, 18 Apr 2020 21:39:30 +0000</pubDate>
      <link>https://dev.to/imohd23/how-to-trigger-aws-lambda-by-sms-41aj</link>
      <guid>https://dev.to/imohd23/how-to-trigger-aws-lambda-by-sms-41aj</guid>
      <description>&lt;p&gt;In my last article, I have placed a teaser that whether you can trigger Lambda by SMS. Today, We are going to do that! I am really excited to share this with you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: This article's resources are not fully covered in the free tier. Also, you cannot do everything directly: for some resources you will have to raise a support ticket to obtain them and adjust service limits.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, I have been waiting for over a month to finish this article. I postponed it because of the workload we faced while working remotely, and because I was waiting for the ticket to be resolved by the AWS support team. Who, by the way, were really helpful even on the free support plan.&lt;/p&gt;

&lt;h1&gt;
  
  
  What are the involved parts?
&lt;/h1&gt;

&lt;p&gt;There are three main parts involved in this flow: Customer Engagement, Application Integration, and a Compute service to make it all happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customer Engagement:
&lt;/h2&gt;

&lt;p&gt;Since we want to trigger a function via SMS, you will need a service or tool to get information from the user. Otherwise, how would the function get triggered?&lt;/p&gt;

&lt;p&gt;In this part, we will use AWS Pinpoint. This service enables you to engage with customers through different channels like email and transactional SMS. It can even validate whether phone numbers are real!&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Integration:
&lt;/h2&gt;

&lt;p&gt;We are working with SMS, which leads us to SNS. SNS is a service that enables you to organize and manage the SMS process. It can also trigger Lambda. See where this is going?&lt;/p&gt;

&lt;h2&gt;
  
  
  Compute:
&lt;/h2&gt;

&lt;p&gt;Since we want to run some processing on the SMS content, we need a compute unit. The best and cheapest option is Lambda, which is the reason for this article.&lt;/p&gt;

&lt;h1&gt;
  
  
  Diagram:
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j4ZsIriA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9ca9fvckx4iemawbjejf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j4ZsIriA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9ca9fvckx4iemawbjejf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A straightforward scenario: we will contact Pinpoint through SMS, and the message will be passed to SNS, which will be responsible for triggering Lambda. No rocket science here.&lt;/p&gt;

&lt;p&gt;I am here to show you the how-to, not to describe the theory behind it. So, shall we begin?&lt;/p&gt;

&lt;h2&gt;
  
  
  1- Request a long/short code from Pinpoint:
&lt;/h2&gt;

&lt;p&gt;Since we want to send an SMS to Pinpoint, it is required to have a code. To obtain one, please follow the steps from the documentation &lt;a href="https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-sms-awssupport-long-code.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One point I want to bring attention to: some countries have both short and long codes. But as it happened for me, living in the Kingdom of Bahrain, we only have long codes, so far.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: It took a while for me to get the code, as I was on the basic support plan and there is no default, fast way to obtain a code in my region. Apply for it in advance.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2- Create SNS topic to handle Pinpoint messages:
&lt;/h2&gt;

&lt;p&gt;As we mentioned in the beginning, there is no way to invoke Lambda directly from Pinpoint, so creating an SNS topic is a must for this purpose.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From Services, look for SNS and click on it.&lt;/li&gt;
&lt;li&gt;Open SNS console and from the left panel, select topics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iCQNUQZq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2yijxvig9axt4muxij2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iCQNUQZq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2yijxvig9axt4muxij2v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on create topic.&lt;/li&gt;
&lt;li&gt;Fill in a name for the topic and keep the default values the same.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uYbODdue--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r3mmxetfjk54n5uz9e71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uYbODdue--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r3mmxetfjk54n5uz9e71.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We are done with SNS!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3- Prepare IAM role for Lambda:
&lt;/h2&gt;

&lt;p&gt;We will work with Lambda, and for that reason, let us make a proper role to consume SNS messages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open IAM console and click on Roles.&lt;/li&gt;
&lt;li&gt;Click on Create Role.&lt;/li&gt;
&lt;li&gt;Select Lambda from the use cases list and click next.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9i3JZ3Ye--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/od9a13wx6ohzw5czrd0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9i3JZ3Ye--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/od9a13wx6ohzw5czrd0e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In attach permission policies, search for SNS.&lt;/li&gt;
&lt;li&gt;Choose Read-Only Access from the list.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aioMDP-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7139kq1otbjc8eb4daro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aioMDP-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7139kq1otbjc8eb4daro.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finish the steps by giving the role a descriptive name.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4- Lambda Time!!:
&lt;/h2&gt;

&lt;p&gt;Now, we can prepare the function that will consume the received SMS. Let's start:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From Services, click on Lambda&lt;/li&gt;
&lt;li&gt;In Dashboard, click on Create Function button.&lt;/li&gt;
&lt;li&gt;Fill in the needed information and do not forget to select an existing role: the one we created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qM9hO0qG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0wcg5o7i805uvp0mi55h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qM9hO0qG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0wcg5o7i805uvp0mi55h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In designer part, click on Add Trigger.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PrCFSwZR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5zz55cb5jpla26h5kgbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PrCFSwZR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5zz55cb5jpla26h5kgbl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In trigger configuration, search for SNS and then look for the SNS topic we created earlier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jFJljlrm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tunpc35dl5n63zp21w75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jFJljlrm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tunpc35dl5n63zp21w75.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Done! This function will be triggered whenever SNS topic receives messages.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5- Pinpoint Configuration:
&lt;/h2&gt;

&lt;p&gt;So, the last step is here. Since we have prepared all the resources, we need to configure the part that will trigger all the previous configurations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From Services, click on Pinpoint.&lt;/li&gt;
&lt;li&gt;In Pinpoint main page, click on Manage Projects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8zDpBqMn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xj5gx6jgoedtkug6kb6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8zDpBqMn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xj5gx6jgoedtkug6kb6x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new project.&lt;/li&gt;
&lt;li&gt;Skip Project Features for now.&lt;/li&gt;
&lt;li&gt;In Project Dashboard, click on Settings -&amp;gt; SMS and Voice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O7DQmr-R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n7ueyazx85eo652mnong.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O7DQmr-R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n7ueyazx85eo652mnong.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In SMS and voice page, under Number settings, click on the long/short code you have.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;NOTE: This is the code you will get after asking for it in the first step.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2xk4wTne--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u6eqtl5r3i27nw8i1ig5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2xk4wTne--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u6eqtl5r3i27nw8i1ig5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go all the way down until you see Two-way SMS. Click on it.&lt;/li&gt;
&lt;li&gt;First thing, enable it.&lt;/li&gt;
&lt;li&gt;In incoming messages destination, choose an existing SNS topic, and select the one we created earlier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CZQmO7sR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0epqpydorkrufud5jpvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CZQmO7sR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0epqpydorkrufud5jpvp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Done! easy right?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to test?
&lt;/h2&gt;

&lt;p&gt;So, you have everything in place and you need to trigger the function. Just send an SMS to the long/short code you have.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y3EVb0ox--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qo61j8l8og08fib0bbkl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y3EVb0ox--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qo61j8l8og08fib0bbkl.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To validate that the function got triggered, check Lambda logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jf1TDVhD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qc1amqytiy5qrrgnsr6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jf1TDVhD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qc1amqytiy5qrrgnsr6t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Any useful use-case for this event?
&lt;/h2&gt;

&lt;p&gt;I was wondering what benefit I would get from triggering Lambda this way, and I think I have a pretty good reason.&lt;/p&gt;

&lt;p&gt;Imagine you have a marketing dashboard on an EC2 instance that gets turned off after working hours. One day, a colleague from the marketing department calls and asks you to turn it on for a few hours because he has something urgent. Imagine opening your laptop (if you even have it with you), connecting to Wi-Fi, opening the console, logging in (don't forget you enabled MFA), etc… Why not have a Lambda function that does the job for you when you send a short SMS? Of course, you can validate that the number is yours when it gets triggered.&lt;/p&gt;
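&lt;p&gt;That use case could be sketched roughly like this. The phone number, instance ID, and keyword are all hypothetical placeholders; the EC2 call needs boto3 (available in the Lambda runtime) and the &lt;i&gt;ec2:StartInstances&lt;/i&gt; permission.&lt;/p&gt;

```python
import json

ALLOWED_NUMBERS = {"+97300000000"}   # hypothetical trusted number
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical dashboard instance

def lambda_handler(event, context):
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    sender = message.get("originationNumber")
    body = message.get("messageBody", "").strip().lower()

    # Ignore SMS from anyone who is not on the allowlist.
    if sender not in ALLOWED_NUMBERS:
        return {"action": "ignored"}

    if body == "start dashboard":
        import boto3  # deferred import: only needed on this branch
        boto3.client("ec2").start_instances(InstanceIds=[INSTANCE_ID])
        return {"action": "started"}
    return {"action": "no-op"}
```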

&lt;h1&gt;
  
  
  Summary:
&lt;/h1&gt;

&lt;p&gt;AWS Lambda is a really great innovation. It amazes me every time I try something new with it. Triggering a compute unit via SMS opens a whole new world of options and possibilities for sysadmins, businesses, and people like me who like fun projects.&lt;/p&gt;

&lt;p&gt;Until next time, don't forget to wash your hands and stay safe..&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>aws</category>
      <category>sms</category>
      <category>serverless</category>
    </item>
    <item>
      <title>AWS Lambda002: Triggers and Destinations</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Mon, 16 Mar 2020 10:36:43 +0000</pubDate>
      <link>https://dev.to/imohd23/aws-lambda002-triggers-and-destinations-4iha</link>
      <guid>https://dev.to/imohd23/aws-lambda002-triggers-and-destinations-4iha</guid>
      <description>&lt;p&gt;AWS Lambda is a great invention that saw the light in 2014. It follows the pattern of event-driven, which means that if an action happens, a response will be processed and presented for the entity that triggered this event, either a person or a machine.&lt;br&gt;
This leads me to the point of this article, triggers, and destinations that AWS Lambda works with. I will make another article to describe the ‘doing’ part for these triggers and destinations. So, shall we begin?&lt;/p&gt;

&lt;h1&gt;
  
  
  Triggers:
&lt;/h1&gt;

&lt;p&gt;Triggers are the events that invoke Lambda. A straightforward description, correct?&lt;br&gt;
But what are the types of triggers that invoke Lambda? Let’s explore the options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda Reads Events:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lambda can read messages from certain services. These services maintain queues or streams whose messages are passed to the function, in both execution modes, synchronous and asynchronous.&lt;/p&gt;

&lt;p&gt;When the function is triggered synchronously, the message-provider service waits for a reply from Lambda. In case of a failure, you have the option to activate retry on error, which retries executing the function with the same message. The reason for the retry is that at some point you might face a timeout from the function, a DB connection, or any other service.&lt;/p&gt;

&lt;p&gt;This execution type is great for data-streaming functions, because the services that serve it, Kinesis, DynamoDB, and SQS, are all built to batch data at a steady rate. Also, as you might notice, reading events can execute the function in both ways, an advantage that lets the invocation mode serve the function’s main task.&lt;/p&gt;
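&lt;p&gt;As a small illustration (not from the original article), this is roughly what a handler consuming one of those batches looks like, using the SQS event shape:&lt;/p&gt;

```python
def lambda_handler(event, context):
    # Lambda polls the queue and hands the function a batch of records;
    # for SQS events the message payload lives in each record's "body".
    bodies = [record["body"] for record in event["Records"]]
    # Process each message here. An unhandled exception makes the batch
    # visible again for retry, the "retry on error" behavior above.
    return {"batchSize": len(bodies)}
```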

&lt;p&gt;&lt;strong&gt;Lambda and synchronous execution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, as you know by now, Lambda can be triggered synchronously. How does that happen?&lt;/p&gt;

&lt;p&gt;Imagine you’re in a bakery, and when you reach the baker, you ask him for bread. You wait until the baker returns with the bread, then you pay and leave. That is a synchronous action.&lt;br&gt;
Lambda performs synchronous execution for requests from the following services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elastic Load Balancer.&lt;/li&gt;
&lt;li&gt;Cognito&lt;/li&gt;
&lt;li&gt;Lex&lt;/li&gt;
&lt;li&gt;Alexa&lt;/li&gt;
&lt;li&gt;API Gateway&lt;/li&gt;
&lt;li&gt;CloudFront&lt;/li&gt;
&lt;li&gt;Kinesis Data Firehose&lt;/li&gt;
&lt;li&gt;Step Functions&lt;/li&gt;
&lt;li&gt;S3 Batch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s talk more about API Gateway since it’s one of the most common use cases in this list.&lt;/p&gt;

&lt;p&gt;You, as a user, call an endpoint. This endpoint has a defined inner destination, which in our situation is Lambda. API Gateway waits for Lambda to finish executing and return a status code, with or without a body, to pass back to the API Gateway consumer.&lt;br&gt;
Note:&lt;br&gt;
API Gateway’s integration timeout is 29 seconds. So, if you have functions that take longer than that, consider the option in the next paragraph.&lt;/p&gt;
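&lt;p&gt;A minimal sketch of a function behind API Gateway’s proxy integration. The &lt;i&gt;statusCode&lt;/i&gt;/&lt;i&gt;body&lt;/i&gt; shape is what API Gateway expects back; the &lt;i&gt;name&lt;/i&gt; query parameter is just an invented example.&lt;/p&gt;

```python
import json

def lambda_handler(event, context):
    # API Gateway blocks until this returns, then relays the response.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```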

&lt;p&gt;&lt;strong&gt;Lambda and Asynchronous execution:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We talked in brief about the synchronous execution. Now, let’s have a word about the other execution type, Asynchronous execution.&lt;/p&gt;

&lt;p&gt;A real-world example is texting: when you text someone on WhatsApp, you’re technically not blocked. The reply might take seconds, minutes, or even hours. The action in this situation is sending the text message, and the reply is asynchronous. I hope you got the point.&lt;/p&gt;

&lt;p&gt;What are the services that can trigger Lambda Asynchronously?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3&lt;/li&gt;
&lt;li&gt;SNS&lt;/li&gt;
&lt;li&gt;SES&lt;/li&gt;
&lt;li&gt;CloudFormation&lt;/li&gt;
&lt;li&gt;CloudWatch (logs / events)&lt;/li&gt;
&lt;li&gt;CodeCommit&lt;/li&gt;
&lt;li&gt;Config&lt;/li&gt;
&lt;li&gt;IoT / IoT Events&lt;/li&gt;
&lt;li&gt;CodePipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let’s talk about an example driven by a service, S3. Imagine you’ve set up a notification trigger so that when a new PUT request is performed on bucket X, it triggers Lambda to get the metadata and create a thumbnail if the uploaded file is an image. This happens in the background; thumbnail creation doesn’t block you while you’re waiting for the file’s link.&lt;/p&gt;
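&lt;p&gt;Sketched as a handler, the S3 notification carries the bucket and key, and the uploader never waits for any of this. The thumbnail logic itself is omitted; the &lt;i&gt;thumbnails/&lt;/i&gt; prefix is an invented convention.&lt;/p&gt;

```python
def lambda_handler(event, context):
    # S3 invokes this asynchronously for each notification.
    created = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Only images get a thumbnail in this sketch.
        if key.lower().endswith((".png", ".jpg", ".jpeg")):
            created.append(f"{bucket}/thumbnails/{key}")
    return {"created": created}
```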

&lt;h1&gt;
  
  
  Destinations:
&lt;/h1&gt;

&lt;p&gt;A destination is functionality you can set up if you need extra steps in your function. These steps are triggered based on a defined status, for example: in case of success, trigger this Lambda function to perform action X.&lt;/p&gt;

&lt;p&gt;Lambda Destinations are a way for Lambda to perform extra steps, giving you more visibility and simplifying event-driven processes.&lt;/p&gt;

&lt;p&gt;To make things clearer, a Lambda destination is an extra, optional step that you define if needed. Also, make sure you understand that destinations fire for asynchronous invocations only; a function triggered synchronously will not invoke its destination.&lt;/p&gt;

&lt;p&gt;At the time this article was written, these are the services that a destination can trigger:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda&lt;/li&gt;
&lt;li&gt;SNS&lt;/li&gt;
&lt;li&gt;SQS&lt;/li&gt;
&lt;li&gt;EventBridge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why don’t we explain a real-world example?&lt;/p&gt;

&lt;p&gt;Imagine you’re triggering Lambda asynchronously and expecting either a success or a failure. You can add this extra step in both cases: if the function succeeds, send the result to another Lambda to finish processing, or send it to an SQS queue in case of failure.&lt;/p&gt;
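&lt;p&gt;Configuring that pair of destinations boils down to one API call. The ARNs below are placeholders, and the actual boto3 call is left commented out since it needs credentials; only the request shape is exercised here.&lt;/p&gt;

```python
# Hypothetical target ARNs.
ON_SUCCESS_ARN = "arn:aws:lambda:us-east-1:123456789012:function:post-process"
ON_FAILURE_ARN = "arn:aws:sqs:us-east-1:123456789012:failed-events"

def destination_config(on_success, on_failure):
    # Shape expected by Lambda's put_function_event_invoke_config API.
    return {
        "OnSuccess": {"Destination": on_success},
        "OnFailure": {"Destination": on_failure},
    }

# Applying it (requires boto3 and AWS credentials):
# import boto3
# boto3.client("lambda").put_function_event_invoke_config(
#     FunctionName="my-function",
#     DestinationConfig=destination_config(ON_SUCCESS_ARN, ON_FAILURE_ARN),
# )
```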

&lt;p&gt;Hope that makes sense for you.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary:
&lt;/h1&gt;

&lt;p&gt;AWS Lambda has a lot of ways to get triggered that can serve an application's needs. You can trigger it for short processes that wait for a reply, or trigger it and leave it to work for a longer period (max 15 minutes per function). As we discussed, there are three ways to trigger Lambda, and each one serves a need or follows an application pattern you decide. Also, destinations are an add-on processing point that you can configure for asynchronous invocations to perform status-based actions.&lt;/p&gt;

&lt;p&gt;In the next article, we will have some code examples for some use cases. Stay tuned for that.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>How to process media in AWS Lambda? #Serverless</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Wed, 19 Feb 2020 23:37:00 +0000</pubDate>
      <link>https://dev.to/imohd23/how-to-process-media-in-aws-lambda-serverless-f93</link>
      <guid>https://dev.to/imohd23/how-to-process-media-in-aws-lambda-serverless-f93</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wCpGEXd1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fvzdo59goz1grq45n919.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wCpGEXd1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fvzdo59goz1grq45n919.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Last week, I got a new task: make thumbnails for images and videos. It’s quite an easy task with Python. But as a Solutions Architect, I noticed that fetching the full videos every time they are shown, with no extra benefit from those calls, made me worried about the billing (S3 requests, data transfer, and other small things). Yes, you can cache links in the browser or with CloudFront, if you’re using one. But these are APIs following a Serverless structure. Hmm, there are other scenarios that can reduce the cost of retrieving this media. One of them is making a shorter version of the uploaded media if it’s a video! COOL. But HOW?&lt;/p&gt;

&lt;p&gt;I used FFMPEG a lot previously; I liked it and benefited from it a great deal. I used it in a basic way to make smaller versions of uploaded videos. FFMPEG is a binary-based tool, which means it needs to be compiled the right way so the OS of the hosting server understands how to run it.&lt;/p&gt;

&lt;p&gt;All of us know Lambda’s limitations and how painful they can be. So, we will use Layers!&lt;/p&gt;

&lt;p&gt;I know that I talked a lot, let’s start:&lt;/p&gt;

&lt;h2&gt;
  
  
  Compile Resources:
&lt;/h2&gt;

&lt;p&gt;First thing first, compile your resources.&lt;/p&gt;

&lt;p&gt;If you’re not familiar with this process, please jump to my previous article and have a &lt;a href="https://medium.com/@mohd.lutfalla/how-to-compile-resources-for-aws-lambda-f46fadc03290"&gt;look&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’ll repeat some of the steps here so this article makes sense on its own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k7r3hFJG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lxjve6a670dyctuofw93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k7r3hFJG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lxjve6a670dyctuofw93.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect to a Docker image that replicates the Lambda environment. It’s almost a clone of its OS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rWQoDskK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/je0t9zjh0wfhbqmhqror.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rWQoDskK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/je0t9zjh0wfhbqmhqror.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on the image, let me describe what I did:&lt;/p&gt;

&lt;p&gt;1- I accessed the Docker image.&lt;/p&gt;

&lt;p&gt;2- I made a directory and called it python (the name AWS recommends for a Python layer).&lt;/p&gt;

&lt;p&gt;3- I entered the python directory.&lt;/p&gt;

&lt;p&gt;4- I downloaded ffmpeg as an executable version. You can find it on this link.&lt;/p&gt;

&lt;p&gt;5- I untarred the file and moved ffmpeg and ffprobe to the main directory (the python root).&lt;/p&gt;

&lt;p&gt;6- I installed the needed resources.&lt;/p&gt;

&lt;p&gt;As you can see, I also compiled Pillow (the Python Imaging Library), which you can use to process images. The reason is to show that all of a function’s resources should be compiled for Lambda: some dependencies are binary-based and could otherwise cause an error in the function.&lt;/p&gt;

&lt;p&gt;After doing these steps, you should have the following files:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zlBzTg-E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0dor7lgan9n6metwn0zl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zlBzTg-E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0dor7lgan9n6metwn0zl.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Zip the directory and be prepared for the next step, which is creating the layer.&lt;/p&gt;
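&lt;p&gt;If you’d rather script the zipping step, here is a small sketch in Python; the directory and archive names are just examples:&lt;/p&gt;

```python
import os
import zipfile

def zip_layer(source_dir: str, zip_path: str) -> None:
    """Zip source_dir so every entry sits under a top-level 'python/'
    folder, the layout Lambda expects for a Python layer."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                full = os.path.join(root, name)
                # Archive paths look like python/ffmpeg, python/PIL/Image.py
                arcname = os.path.join("python", os.path.relpath(full, source_dir))
                zf.write(full, arcname)

# Example: zip_layer("python", "python.zip")
```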

&lt;h2&gt;
  
  
  Start preparing resources on the cloud:
&lt;/h2&gt;

&lt;p&gt;Access AWS Console and navigate to S3:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0mgjHPSR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b9lx3b226zr0w2iuctfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0mgjHPSR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b9lx3b226zr0w2iuctfy.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should have a bucket so you can upload the layer.&lt;/p&gt;

&lt;p&gt;NOTES: &lt;br&gt;
&lt;em&gt;Any layer bigger than 50 MB must be uploaded to S3. Copy the link; we will need it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Any layer must be less than 250 MB when unzipped; otherwise, the layer won’t be created.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s create a bucket:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bzuEOMge--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e5hvz1fbggbamq25km6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bzuEOMge--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e5hvz1fbggbamq25km6d.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next step, Upload the layer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U2AdMo2J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7x9cw48x0x29iilrotm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U2AdMo2J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7x9cw48x0x29iilrotm4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NOTES:&lt;br&gt;
&lt;em&gt;The file should not be public. If the IAM user has permission to access these tools (S3, Lambda), then you’re in a good spot.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the file is uploaded, copy the link.&lt;/p&gt;

&lt;p&gt;Navigate to Lambda:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fdvQh_wR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mj8liztyex9nmwpfz675.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fdvQh_wR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mj8liztyex9nmwpfz675.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the left of the screen, click on layers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PDfA-CDQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lwtd91pfu41jx47w4dbi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PDfA-CDQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lwtd91pfu41jx47w4dbi.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on create a new layer and fill the following form:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LC0lsoeD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d257hxio6c9zjhcuabuc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LC0lsoeD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d257hxio6c9zjhcuabuc.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1- Give it a name&lt;/p&gt;

&lt;p&gt;2- It is highly recommended to write a description for your layer, because you can have different versions of the same layer.&lt;/p&gt;

&lt;p&gt;3- Click on the second option and paste the link.&lt;/p&gt;

&lt;p&gt;4- Specify the compatible runtime, as it’s important for tracking versions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gTNT16iK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3c0nc8otgerlkk69bm3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gTNT16iK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3c0nc8otgerlkk69bm3l.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
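&lt;p&gt;The same form can also be driven from code with boto3’s publish_layer_version. A hedged sketch; the bucket, key, and layer name are placeholders:&lt;/p&gt;

```python
def layer_request(name, description, bucket, key, runtimes):
    """Build the publish_layer_version request for a layer stored in S3."""
    return {
        "LayerName": name,
        "Description": description,
        "Content": {"S3Bucket": bucket, "S3Key": key},
        "CompatibleRuntimes": runtimes,
    }

# Placeholder names for illustration only.
req = layer_request(
    "ffmpeg-pillow",
    "ffmpeg/ffprobe static builds + Pillow, compiled for Lambda",
    "my-layers-bucket",
    "python.zip",
    ["python3.6"],
)

# Needs AWS credentials, so left commented:
# import boto3
# boto3.client("lambda").publish_layer_version(**req)
```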

&lt;p&gt;Cool, We’re ready to start coding:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer:&lt;/em&gt;&lt;br&gt;
&lt;em&gt;I know the following code isn’t perfect and people might argue about it; it’s only here to showcase that the process is valid and working.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s go to Lambda functions and start coding:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XS4VMT-U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/p1qidu3rbhkv7a03ix9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XS4VMT-U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/p1qidu3rbhkv7a03ix9a.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve created the function and chosen its runtime; I use Python 3.6.&lt;/p&gt;

&lt;p&gt;NOTE: &lt;br&gt;
&lt;em&gt;When you’re processing media (and files in general), you need to make sure your function has the right set of permissions. I updated this function’s role to allow reading from and writing to S3.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first thing I did, as you’ll notice in the image below, was link the layer to the function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SsSZTCIJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i6d7tjrhj68y1p96whsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SsSZTCIJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i6d7tjrhj68y1p96whsb.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the code snippet I made to download a video from S3, take a frame at the 7th second, save it as a PNG, and upload it back to S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VAd43gPv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9wgm4lwnc7wt6wl05l1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VAd43gPv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9wgm4lwnc7wt6wl05l1b.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
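&lt;p&gt;Since the snippet above is a screenshot, here is a minimal sketch of the same flow; the bucket and key names are hypothetical, and the ffmpeg path assumes the layer layout from earlier:&lt;/p&gt;

```python
import subprocess

FFMPEG = "/opt/python/ffmpeg"  # where the layer's binary lands on Lambda

def frame_command(video_path: str, out_path: str, second: int = 7):
    """ffmpeg arguments to extract a single frame at the given second."""
    return [FFMPEG, "-ss", str(second), "-i", video_path,
            "-frames:v", "1", "-y", out_path]

def handler(event, context):
    # Hypothetical bucket/key names, for illustration only.
    # import boto3
    # s3 = boto3.client("s3")
    # s3.download_file("media-bucket", "uploads/video.mp4", "/tmp/video.mp4")
    subprocess.check_call(frame_command("/tmp/video.mp4", "/tmp/thumb.png"))
    # s3.upload_file("/tmp/thumb.png", "media-bucket", "thumbs/video.png")
    return {"statusCode": 200}
```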

&lt;p&gt;NOTE: &lt;br&gt;
&lt;em&gt;If you’re new to Lambda, /tmp is the writable directory in Lambda.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As you’ve guessed, yes it worked!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pd0uhZN7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vb95flt935mdjj8vxfza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pd0uhZN7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vb95flt935mdjj8vxfza.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion and some tips:
&lt;/h2&gt;

&lt;p&gt;This approach worked perfectly for me. Yes, you can enhance it, but it’s a good starting point for newcomers to Lambda and the serverless world.&lt;/p&gt;

&lt;p&gt;I have some notes and recommendations for you:&lt;/p&gt;

&lt;p&gt;1- Give the function all the resources available (timeout and memory), test it on your scenarios, and then optimize. The reason is to see how much the function actually used to process your media. Different scenarios mean different timeout and CPU/memory/network usage.&lt;/p&gt;

&lt;p&gt;2- If you’re like me and you use libraries that rely on ffmpeg, like imageio, you have to declare the ffmpeg path in your code; otherwise, Lambda won’t see it. In this case, it’s in &lt;em&gt;/opt/python&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;3- Also, if you use these libraries, make sure ffmpeg and ffprobe are marked executable before you zip and upload them. I faced this issue.&lt;/p&gt;

&lt;p&gt;4- Use S3 to download/upload media and try not to return it as a file; data transfer between services within the same region costs zero dollars.&lt;/p&gt;
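&lt;p&gt;For tip 2, one way to point imageio at the layer’s binary is an environment variable set before the import. To the best of my knowledge this is the variable imageio’s ffmpeg plugin reads, so treat it as an assumption to verify:&lt;/p&gt;

```python
import os

# Must be set before imageio (or imageio-ffmpeg) is imported; otherwise the
# library searches PATH for ffmpeg and fails on Lambda, where it isn't there.
os.environ["IMAGEIO_FFMPEG_EXE"] = "/opt/python/ffmpeg"

# import imageio
# reader = imageio.get_reader("/tmp/video.mp4")
```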

&lt;p&gt;Happy Serverless processing!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Lambda001: Layers</title>
      <dc:creator>Mohamed Latfalla</dc:creator>
      <pubDate>Wed, 19 Feb 2020 14:38:17 +0000</pubDate>
      <link>https://dev.to/imohd23/aws-lambda001-layers-2fl7</link>
      <guid>https://dev.to/imohd23/aws-lambda001-layers-2fl7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5J_vGGif--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/79dihwa5ikwoezkubu8q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5J_vGGif--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/79dihwa5ikwoezkubu8q.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once upon a time, we used to have Lambda functions whose deployment packages reached 50 MB. Those were dark days, when we tended to test locally, zip all the dependencies, and wait for the upload to finish before we could test. Bad internet made it even worse. Fortunately, now we have Layers!!&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Lambda Layers?
&lt;/h2&gt;

&lt;p&gt;In a short sentence, Lambda Layers are an approach that makes appending extra code to a Lambda function easier. They keep the deployment package smaller and easier to maintain.&lt;br&gt;
Also, this extra code can be shared by different functions: build it once, and link it to as many functions as you need.&lt;br&gt;
Didn’t get the idea? No worries, I was in your shoes before. Check the sketch:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1zip0VWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kx9jhcys1vg4g00mnhwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1zip0VWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kx9jhcys1vg4g00mnhwx.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, as shown, a layer is a container that compresses all your dependencies into a single file that can be consumed by multiple functions.&lt;/p&gt;

&lt;p&gt;It is a convenient way to keep your deployment package as small as possible, and it helps ensure dependencies stay at the correct version. Think of it like a version-controlled package: you update it when needed.&lt;/p&gt;
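&lt;p&gt;Concretely, for Python functions Lambda unpacks a layer under /opt and puts /opt/python on the import path, so layer code is imported like any other module. A small local simulation of that mechanism:&lt;/p&gt;

```python
import sys
import tempfile
import pathlib

# Simulate a layer: a "python/" directory holding shared helper code.
layer_root = pathlib.Path(tempfile.mkdtemp()) / "python"
layer_root.mkdir()
(layer_root / "validators.py").write_text(
    "def is_email(s):\n    return '@' in s\n"
)

# On Lambda, /opt/python is already on sys.path; locally we add our stand-in.
sys.path.insert(0, str(layer_root))

import validators  # resolved from the simulated layer
print(validators.is_email("dev@example.com"))  # prints True
```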

&lt;p&gt;Now we know roughly what a Lambda layer is. Why should we use it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Layers benefits:
&lt;/h2&gt;

&lt;p&gt;Let me show you the benefits with examples; that makes more sense.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1:- One function, huge deployment package.
&lt;/h3&gt;

&lt;p&gt;I used to work with huge datasets, and my aim was to make the system I was working on Serverless, because Serverless is awesome. One of my development issues was the OS, which is not the same as the Lambda environment (I use OS X and Lambda uses Linux). So I’d code the function and test it locally. Did it work? Cool, zip it and upload it. But wait, I forgot to remove a comma. The old approach was to fix that locally and do all the steps again, because the dependencies were over 50 MB, the maximum size at which Lambda lets you edit code online.&lt;/p&gt;

&lt;p&gt;Imagine the loss of effort and time in this scenario. Thankfully, you can have the dependencies (for example, Pandas) in a layer and just reference it. Job done!&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 2:- One function, different dependencies.
&lt;/h3&gt;

&lt;p&gt;I tend to organize my layers by type: one layer for data manipulation, one for media manipulation/conversion, and one for static helpers (like input validation, function response messages, and DB connection functions). Having one big layer is harder to maintain and puts you at risk of hitting the size limit.&lt;/p&gt;

&lt;p&gt;So, the best way is to split dependencies into different layers and link what is needed.&lt;/p&gt;

&lt;p&gt;The same function from the first example needs to validate user inputs (layer 3) and then work with the data (layer 1); this way, I can manage what gets executed when the function is triggered. Another function makes thumbnails (layer 2) and then updates the account table in the database (layer 3). You can see the benefits, right?&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 3:- Version control.
&lt;/h3&gt;

&lt;p&gt;At some stage in development, some of your dependencies get an update, and that update might deprecate some functions. That’s not cool when you’re relying on them for certain actions. You have a deadline, and the rules state that a code upgrade can’t happen right now. Having your dependencies in a layer ensures these libraries won’t be updated until you decide to do so.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layers downside:
&lt;/h2&gt;

&lt;p&gt;Sure, almost everything has downsides. They might not affect you, but keeping them in mind won’t hurt.&lt;/p&gt;

&lt;p&gt;I can summarize my struggles with layers in two points: creating and updating them.&lt;/p&gt;

&lt;p&gt;The reason Layers are great is the same reason they are time-consuming and hard to maintain.&lt;/p&gt;

&lt;p&gt;I use AWS SAM every day at my job. I push new functions and fix others. Sometimes a new function needs new libraries, and to use them I either create a new layer or update an existing one. All of this is a manual job.&lt;/p&gt;

&lt;p&gt;Compile the resources (since I’m working on OS X), zip them, upload them, get the link, create a new layer, reference it in the template, and push it. That’s A LOT!&lt;/p&gt;

&lt;p&gt;Another downside is understanding layer versions. You can have a layer with different versions and reference any version you want. But what happens if you delete a specific version?&lt;/p&gt;

&lt;p&gt;When you reference a layer in your function, saving the function creates a sort of container that holds the function and its dependencies as an independent entity. Delete the referenced version, then add an empty line to your main function and save the change, and you’ll find the dependencies are gone: the container is recreated, and the declared version no longer exists. You’ll have to either re-upload the same layer version from a backup or update the function accordingly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layers limits:
&lt;/h2&gt;

&lt;p&gt;There are two main limitations you need to keep in mind: size, and layers per function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Size:
&lt;/h3&gt;

&lt;p&gt;You should know that a layer must not exceed 250 MB unzipped. Go beyond that and the function simply won’t work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layers per function:
&lt;/h3&gt;

&lt;p&gt;You can’t reference more than five layers per function. The size limitation applies here too: all the layers together, unzipped, must follow the 250 MB rule.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary:
&lt;/h2&gt;

&lt;p&gt;AWS Lambda Layers are a great addition to the Serverless architecture. We can now code, test, and deploy heavy functions (in terms of purpose) in small, easy pieces thanks to the dependency sharing Layers provide. But, and it’s a big but, use them wisely. Understand your application well and organize your layers by purpose. Lambda Layers is a managed service from AWS; expect hard limits and be flexible.&lt;/p&gt;

&lt;p&gt;Note: if you want to know how to deploy your first Layer, please click &lt;a href="https://medium.com/@mohd.lutfalla/how-to-process-media-in-aws-lambda-ffmpeg-f53491cf8768"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>layers</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
  </channel>
</rss>
