<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Fatima Medlij</title>
    <description>The latest articles on DEV Community by Fatima Medlij (@medlij).</description>
    <link>https://dev.to/medlij</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1140189%2Fba3e3e04-c2d0-406c-b3b1-ad03174df27e.png</url>
      <title>DEV Community: Fatima Medlij</title>
      <link>https://dev.to/medlij</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/medlij"/>
    <language>en</language>
    <item>
      <title>Level Up Your Lambda Deployments with Serverless Framework</title>
      <dc:creator>Fatima Medlij</dc:creator>
      <pubDate>Tue, 03 Oct 2023 08:53:28 +0000</pubDate>
      <link>https://dev.to/medlij/level-up-your-lambda-deployments-with-serverless-framework-55fe</link>
      <guid>https://dev.to/medlij/level-up-your-lambda-deployments-with-serverless-framework-55fe</guid>
      <description>&lt;p&gt;Serverless Framework enables the creation of project stages for deployment. These stages serve as valuable tools for establishing separate environments dedicated to testing and development purposes. Typically, a staging environment is generated as a distinct replica of the production setup, facilitating comprehensive testing and verification of the code before final deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oVmGuL8J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6wiq00kpuknp0ezi0dr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oVmGuL8J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6wiq00kpuknp0ezi0dr.png" alt="Image description" width="786" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this piece, we’ll focus on setting up multiple stages, “live” and “dev” environments, for deployment. The “dev” stage is typically used for testing and development purposes. The “live” stage is used for deploying the application to the live/production AWS account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b0wi_xeW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9awolbz2ju39f9snuaop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b0wi_xeW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9awolbz2ju39f9snuaop.png" alt="Image description" width="645" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The development process for deploying Lambda functions can be outlined as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Development and Local Testing&lt;/strong&gt;&lt;br&gt;
During this stage, developers write the Lambda function code and perform thorough testing locally to ensure its correctness and efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment to “dev” Testing Environment&lt;/strong&gt;&lt;br&gt;
Once the code passes local testing, it is deployed to the “dev” stage within the testing environment. Here, Quality Assurance (QA) teams can thoroughly test the Lambda function to identify any potential issues or bugs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confirmation and Deployment to “live” Stage&lt;/strong&gt;&lt;br&gt;
After successful testing, the Lambda function is deployed to the “live” stage. At this point, it becomes accessible to end-users.&lt;/p&gt;

&lt;p&gt;The Serverless Framework simplifies the process of deploying Lambda functions to different stages. There are two viable options to achieve this:&lt;/p&gt;
&lt;h2&gt;Single AWS Account with Naming Convention&lt;/h2&gt;

&lt;p&gt;In this approach, a single AWS account is used for both development/testing and production environments. The differentiation between the two stages is primarily achieved through the naming convention of lambda functions. Specifically, functions intended for the “dev” stage are named accordingly, while those intended for the “live” stage have different naming conventions.&lt;/p&gt;
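
&lt;p&gt;As a quick sketch of that convention: by default, the Serverless Framework names each deployed Lambda &lt;code&gt;service-stage-function&lt;/code&gt;, so the stage is baked into the function name and the two environments never collide.&lt;/p&gt;

```javascript
// Sketch of the default Serverless Framework naming convention:
// deployed Lambdas are named "service-stage-function".
function deployedFunctionName(service, stage, funcName) {
  return `${service}-${stage}-${funcName}`;
}

console.log(deployedFunctionName("testing-lambda", "dev", "func_name"));
// testing-lambda-dev-func_name
```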

&lt;p&gt;In your Lambda serverless.yml, add the following (the default stage is “dev”):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: testing-lambda

provider:
  name: aws
  runtime: nodejs16.x
  region: your-region
  stage: ${opt:stage, 'dev'}
  environment:
    host: ${self:custom.database.${self:provider.stage}.host}
    user: ${self:custom.database.${self:provider.stage}.username}
    password: ${self:custom.database.${self:provider.stage}.password}

functions:
  func_name:
    handler: handler.func_name
    role: ${self:custom.lambdaRole.${self:provider.stage}}

custom:
  lambdaRole:
    dev: arn:aws:iam::123456789:role/dev-lambdaRole
    live: arn:aws:iam::123456789:role/live-lambdaRole

  database:
    dev:
      host: "testing-cluster.your-region.rds.amazonaws.com"
      username: "username-dev"
      password: "pwd-dev"
    live:
      host: "live-cluster.your-region.rds.amazonaws.com"
      username: "username-live"
      password: "pwd-live"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To target the “live” stage, add the --stage live option:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;serverless invoke local -f func_name --stage live&lt;br&gt;
serverless deploy --stage live&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Separate AWS Accounts for Testing and Production&lt;/h2&gt;

&lt;p&gt;Alternatively, you can opt for having two separate AWS accounts — one dedicated to testing and the other for production. In this setup, “dev” or testing Lambda functions are deployed to the designated AWS testing account. Once they pass all necessary tests and validations, they are then published on the live/production AWS account, making them accessible to end-users.&lt;/p&gt;

&lt;p&gt;To use stages across different accounts you have to:&lt;/p&gt;

&lt;p&gt;Set up the testing account by running aws configure with the --profile option and a name for the new profile, then enter the required details.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws configure --profile testing&lt;/code&gt;&lt;/p&gt;
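
&lt;p&gt;For reference, this writes a new named profile alongside the default one in your AWS CLI credentials file (the key values below are placeholders, not real credentials):&lt;/p&gt;

```ini
# ~/.aws/credentials after running: aws configure --profile testing
[default]
aws_access_key_id     = ...
aws_secret_access_key = ...

[testing]
aws_access_key_id     = ...
aws_secret_access_key = ...
```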

&lt;p&gt;In your Lambda serverless.yml, use the following template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: testing-lambda

provider:
  name: aws
  runtime: nodejs16.x
  region: your-region
  stage: ${opt:stage, 'dev'}
  # new: pick the AWS CLI profile for the current stage
  profile: ${self:custom.awsProfile.${self:provider.stage}}
  environment:
    host: ${self:custom.database.${self:provider.stage}.host}
    user: ${self:custom.database.${self:provider.stage}.username}
    password: ${self:custom.database.${self:provider.stage}.password}

functions:
  func_name:
    handler: handler.func_name
    role: ${self:custom.lambdaRole.${self:provider.stage}}

custom:
  lambdaRole:
    dev: arn:aws:iam::123456789:role/dev-lambdaRole
    live: arn:aws:iam::123456789:role/live-lambdaRole

  database:
    dev:
      host: "testing-cluster.your-region.rds.amazonaws.com"
      username: "username-dev"
      password: "pwd-dev"
    live:
      host: "live-cluster.your-region.rds.amazonaws.com"
      username: "username-live"
      password: "pwd-live"

  # new: map each stage to an AWS CLI profile
  awsProfile:
    dev: testing
    live: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the Serverless Framework will resolve the profile for the current stage and deploy with that account, so your “live” and “dev” Lambdas end up in different accounts.&lt;/p&gt;

&lt;p&gt;It is essential to carefully consider the specific needs and requirements of the project to choose the most appropriate approach for deploying Lambda functions to different stages effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
The Serverless Framework offers efficient solutions for deploying Lambda functions to distinct stages, facilitating testing and development. Whether using a single AWS account with naming conventions or separate accounts for testing and production, developers can streamline the deployment process while maintaining high-quality applications.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>devops</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Automating Post-SQS Tasks with EventBridge</title>
      <dc:creator>Fatima Medlij</dc:creator>
      <pubDate>Thu, 17 Aug 2023 08:57:10 +0000</pubDate>
      <link>https://dev.to/medlij/automating-post-sqs-tasks-with-eventbridge-2bg0</link>
      <guid>https://dev.to/medlij/automating-post-sqs-tasks-with-eventbridge-2bg0</guid>
      <description>&lt;p&gt;I had to run a job as soon as my SQS queue finished processing its messages, and I wanted to automate it. In this article, I'll share the architectural design of the solution I built.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zmRzeE9P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/649yy3av10lnanixsqs5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zmRzeE9P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/649yy3av10lnanixsqs5.png" alt="Image description" width="720" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tZ5w-2vY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/000le7ybmge040y6uoy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tZ5w-2vY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/000le7ybmge040y6uoy2.png" alt="Image description" width="720" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do we know that the SQS queue is empty?&lt;/strong&gt;&lt;br&gt;
These SQS attributes have to remain at zero for several minutes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ApproximateNumberOfMessagesDelayed.&lt;/li&gt;
&lt;li&gt;ApproximateNumberOfMessagesNotVisible.&lt;/li&gt;
&lt;li&gt;ApproximateNumberOfMessages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These attributes can be observed on the CloudWatch dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w1dWYZY6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nk2mp7j2764p8lmk87oa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w1dWYZY6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nk2mp7j2764p8lmk87oa.png" alt="Image description" width="720" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a function that will check these metrics&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async checkIfSQSEmpty(queueurl) {
    var params = {
      QueueUrl: queueurl,
      AttributeNames: [
        "ApproximateNumberOfMessagesDelayed",
        "ApproximateNumberOfMessagesNotVisible",
        "ApproximateNumberOfMessages",
      ],
    };
    const isempty = await sqs
      .getQueueAttributes(params)
      .promise()
      .then((data) =&amp;gt; {
        const count =
          Number(data.Attributes.ApproximateNumberOfMessagesDelayed) +
          Number(data.Attributes.ApproximateNumberOfMessagesNotVisible) +
          Number(data.Attributes.ApproximateNumberOfMessages);
        // the queue is empty only when all three counts are zero
        return count === 0;
      });
    return isempty;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
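
&lt;p&gt;The check above boils down to summing the three approximate counts; that decision can be isolated in a small pure helper (a sketch, independent of the AWS SDK), which also makes it easy to unit test:&lt;/p&gt;

```javascript
// Pure helper mirroring the logic of checkIfSQSEmpty: the queue is
// considered drained only when all three approximate counts are zero.
// Note: the counts are approximate, which is why we wait through
// several minutes of zeros before trusting them.
function isQueueDrained(attributes) {
  const count =
    Number(attributes.ApproximateNumberOfMessagesDelayed) +
    Number(attributes.ApproximateNumberOfMessagesNotVisible) +
    Number(attributes.ApproximateNumberOfMessages);
  return count === 0;
}

console.log(isQueueDrained({
  ApproximateNumberOfMessagesDelayed: "0",
  ApproximateNumberOfMessagesNotVisible: "0",
  ApproximateNumberOfMessages: "0",
})); // true
```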



&lt;p&gt;&lt;strong&gt;Create Functions to Enable/Disable Scheduler&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async enableEventBridgeScheduler(schedulename, sqsurl, invocationtime) {
    var params = await this.getSchedule(schedulename);
    params.State = "ENABLED";
    params.StartDate = invocationtime;
    // keep the schedule active for `duration` hours; adjust as needed
    const duration = 1; // hours (3600 epoch seconds per hour)
    params.EndDate = invocationtime + duration * 3600;
    params.Target.Input =
      '{"SQSUrl":"' + sqsurl + '","InvocationTime":' + invocationtime + "}";
    console.log(params);
    var result = await scheduler
      .updateSchedule(params)
      .promise()
      .then((data) =&amp;gt; {
        return data;
      });
    return await result;
  }

  async disableEventBridgeScheduler(schedulename) {
    var params = await this.getSchedule(schedulename);
    params.State = "DISABLED";
    params.Target.Input = "{}";
    params.StartDate = null;
    params.EndDate = null;
    var result = await scheduler
      .updateSchedule(params)
      .promise()
      .then((data) =&amp;gt; {
        return data;
      });
    return await result;
  }

  async getSchedule(schedulename) {
    var params = { Name: schedulename };
    var result = await scheduler
      .getSchedule(params)
      .promise()
      .then((data) =&amp;gt; {
        var temp = {};
        temp.FlexibleTimeWindow = data.FlexibleTimeWindow;
        temp.Name = data.Name;
        temp.ScheduleExpression = data.ScheduleExpression;
        temp.ScheduleExpressionTimezone = data.ScheduleExpressionTimezone;
        temp.Target = data.Target;
        return temp;
      });
    return result;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
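
&lt;p&gt;The Target.Input string above is built by hand-concatenating JSON; a safer sketch is to build the payload as an object and let JSON.stringify handle the quoting:&lt;/p&gt;

```javascript
// Sketch: build the EventBridge Scheduler Target.Input payload with
// JSON.stringify instead of string concatenation, so quoting is always valid.
function buildTargetInput(sqsUrl, invocationTime) {
  return JSON.stringify({ SQSUrl: sqsUrl, InvocationTime: invocationTime });
}

console.log(buildTargetInput("https://sqs.example.com/123/my-queue", 1700000000));
// {"SQSUrl":"https://sqs.example.com/123/my-queue","InvocationTime":1700000000}
```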



&lt;p&gt;&lt;strong&gt;Trigger for First Invocation&lt;/strong&gt;&lt;br&gt;
After you fill your SQS queue, invoke the Lambda function (through an API or from another function), and differentiate between that first invocation and subsequent invocations from the EventBridge Scheduler.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"use strict";
const SQSServices = require("./sqs");
const sqs = new SQSServices();
const SchedulerServices = require("./scheduler");
const scheduler = new SchedulerServices();
const LambdaServices = require("./lambda");
const lambda = new LambdaServices();

const schedulename = process.env.SchedulerName;
module.exports.main = async (event) =&amp;gt; {
  var response = { SQSEmpty: "", SQSName: "", SchedulerState: "" };
  var invocationtime = null;
  var data = event;

  if (event.body) { /*from API Gateway =&amp;gt; First Invocation*/
    data = JSON.parse(event.body);
    var sqsurl = data.SQSUrl;
    /* timeEpoch is in ms; convert to seconds first, then schedule the next check 1800 s (30 min) later */
    invocationtime = event.requestContext.timeEpoch / 1000 + 1800;
    await scheduler.enableEventBridgeScheduler(schedulename,sqsurl,invocationtime);
    response.SchedulerState = 'ENABLED';
    var url = sqsurl.split("/");
    response.SQSName = url[url.length - 1];
    response.SQSEmpty = await sqs.checkIfSQSEmpty(sqsurl);
    return response;
  } 

  if (event.InvocationTime) invocationtime = event.InvocationTime;  /*from EventBridge*/
  var sqsurl = data.SQSUrl;
  console.log(sqsurl)
  var url = sqsurl.split("/");
  response.SQSName = url[url.length - 1];
  response.SQSEmpty = await sqs.checkIfSQSEmpty(sqsurl);

  if (response.SQSEmpty === true) {
    // Actions if SQS batch is done

    // Disable Scheduler
    await scheduler.disableEventBridgeScheduler(schedulename);
    response.SchedulerState = 'DISABLED';
  } else {
    // Queue not yet empty: keep the scheduler enabled for the next check
    await scheduler.enableEventBridgeScheduler(schedulename, sqsurl, invocationtime);
    response.SchedulerState = 'ENABLED';
  }

  return response;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
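
&lt;p&gt;A note on the time arithmetic in the first-invocation branch: API Gateway reports timeEpoch in milliseconds, while the scheduler's StartDate/EndDate here are epoch seconds, so convert to seconds first and only then add the delay. A tiny sketch:&lt;/p&gt;

```javascript
// Sketch: convert an API Gateway timeEpoch (milliseconds) to epoch seconds
// and add a delay in seconds (e.g. 1800 s = 30 minutes).
function scheduleInvocationTime(timeEpochMs, delaySeconds) {
  return timeEpochMs / 1000 + delaySeconds;
}

console.log(scheduleInvocationTime(1700000000000, 1800)); // 1700001800
```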



</description>
      <category>sqs</category>
      <category>aws</category>
      <category>lambda</category>
      <category>scheduler</category>
    </item>
    <item>
      <title>Handling Large AWS SQS Messages Using Amazon S3</title>
      <dc:creator>Fatima Medlij</dc:creator>
      <pubDate>Wed, 16 Aug 2023 14:16:04 +0000</pubDate>
      <link>https://dev.to/medlij/handling-large-aws-sqs-messages-using-amazon-s3-bl2</link>
      <guid>https://dev.to/medlij/handling-large-aws-sqs-messages-using-amazon-s3-bl2</guid>
      <description>&lt;p&gt;In Amazon Simple Queue Service (SQS), messages have a maximum size limit of 256 KB. If you want to push a larger message, you can use Amazon S3. The main idea is to put the large object in S3 and push an SQS message with the object’s S3 key. This article will show you how to manage large SQS messages using Amazon S3 in Node.js.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cA5xI_Nd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0m4ttq0smkiby2582gwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cA5xI_Nd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0m4ttq0smkiby2582gwl.png" alt="Image description" width="554" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the code snippet below, we define a function PushToSQS that pushes messages to an SQS queue. If the message size is greater than 200 KB, the message is stored in Amazon S3 and an SQS message with the object's S3 key is sent. We use the SendMessageCommand to send the SQS message and PutObjectCommand to store the object in S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { SQSClient, SendMessageCommand } = require("@aws-sdk/client-sqs");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { v4: uuidv4 } = require("uuid");

const BucketName = `your-bucket-name`;
const BucketPath = `your-path`;

async function PushToSQS(data, sqsUrl) {
  const result = { 'Response': '', 'Details': '', 'ThroughS3': false };
  let sendToS3 = false;
  let objectKey; // declared here so it is still in scope for the S3 upload below
  const sqsParams = {
    MessageBody: JSON.stringify(data),
    QueueUrl: sqsUrl,
    DelaySeconds: 3,
  };

  const roughObjSize = JSON.stringify(data).length;
  if (roughObjSize &amp;gt; 200000) {
    sendToS3 = true;
    result.ThroughS3 = true;
    objectKey = uuidv4();
    sqsParams.MessageBody = `s3://${BucketName}/${BucketPath}/${objectKey}`;
  }

  const sqsClient = new SQSClient({ region: "eu-central-1" });
  const sqsCommand = new SendMessageCommand(sqsParams);
  const sqsResponse = await sqsClient.send(sqsCommand);

  if (sendToS3) {
    const s3Params = {
      Bucket: BucketName,
      Key: `${BucketPath}/${objectKey}`,
      Body: JSON.stringify(data),
    };
    const s3Client = new S3Client({ region: "eu-central-1" });
    const s3Command = new PutObjectCommand(s3Params);
    const s3Response = await s3Client.send(s3Command);
  }
  if (sqsResponse.MessageId) result.Response = 'Success';
  else result.Response = 'Error';

  return result;
}

// usage (inside an async function):
await PushToSQS(myObject, sqsUrl);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your Lambda function (invoked by SQS), you should handle the different types of SQS messages. The code for this is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const s3 = new S3Client();

exports.Handler = async (event) =&amp;gt; {
  let myMessage;
  let requestBody = event.Records[0].body;
  const prefix = requestBody.slice(0, 2);
  if (prefix === "s3") {
    // Message in S3 object
    const key = requestBody.split("/your-path/")[1];
    const params = {
      Bucket: "your-bucket-name",
      Key: "your-path/" + key,
    };
    const s3Data = await s3.send(new GetObjectCommand(params));
    // in AWS SDK v3, Body is a stream; transformToString() reads it fully
    myMessage = JSON.parse(await s3Data.Body.transformToString());
    return myMessage;
  } else {
    // Message in SQS body
    myMessage = JSON.parse(requestBody); // SQS delivers the body as a string
  }

  // Handle the message based on its content
  if (myMessage.someProperty === "someValue") {
    // do something
  } else if (myMessage.someOtherProperty === "someOtherValue") {
    // do something else
  } else {
    // handle other cases
  }

  return;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code, we are using the AWS SDK v3 for Node.js to handle incoming messages from SQS. The function is triggered when a message is received and it checks if the message is stored in S3 or not. If the message is stored in S3, it retrieves the message using the S3Client and GetObjectCommand. Once the message is retrieved, it is parsed from JSON and returned.&lt;/p&gt;

&lt;p&gt;If the message is not stored in S3, it is assumed that it is stored directly in the message body of the SQS message. The message is then stored in the myMessage variable and can be handled based on its content. In the example code, we are checking the value of certain properties of the message and performing different actions based on those values.&lt;/p&gt;
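
&lt;p&gt;The producer and the consumer both rely on the s3://bucket/path/key pointer format; a small parsing helper (a sketch, assuming that format) keeps the two sides consistent:&lt;/p&gt;

```javascript
// Sketch: parse the "s3://bucket-name/path/key" pointer used as the SQS
// message body when the payload was offloaded to S3.
function parseS3Pointer(pointer) {
  if (!pointer.startsWith("s3://")) return null; // inline message, not a pointer
  const [bucket, ...rest] = pointer.slice("s3://".length).split("/");
  return { Bucket: bucket, Key: rest.join("/") };
}

console.log(parseS3Pointer("s3://your-bucket-name/your-path/1234-uuid"));
// { Bucket: 'your-bucket-name', Key: 'your-path/1234-uuid' }
```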

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Managing large messages in SQS can be challenging, especially when the messages exceed the 256 KB limit. By using Amazon S3 to store large messages and passing their keys in SQS messages, we can easily handle large messages in SQS with the help of the AWS SDK v3 for Node.js.&lt;/p&gt;

</description>
      <category>sqs</category>
      <category>s3</category>
      <category>lambda</category>
      <category>node</category>
    </item>
  </channel>
</rss>
