<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rohan</title>
    <description>The latest articles on DEV Community by Rohan (@rohanmehta_dev).</description>
    <link>https://dev.to/rohanmehta_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F184209%2F2070aef9-de66-4f67-a594-b156011e7789.PNG</url>
      <title>DEV Community: Rohan</title>
      <link>https://dev.to/rohanmehta_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rohanmehta_dev"/>
    <language>en</language>
    <item>
      <title>Scheduled Task Processing with DynamoDB and Eventbridge</title>
      <dc:creator>Rohan</dc:creator>
      <pubDate>Tue, 19 Jan 2021 15:09:05 +0000</pubDate>
      <link>https://dev.to/rohanmehta_dev/scheduled-task-processing-with-dynamodb-and-eventbridge-439c</link>
      <guid>https://dev.to/rohanmehta_dev/scheduled-task-processing-with-dynamodb-and-eventbridge-439c</guid>
      <description>&lt;p&gt;Previously I covered how to do real-time task processing with DynamoDB. Data goes into the table and then we use DynamoDB Streams to take action on it immediately. However, we won't be able use DynamoDB Streams for use cases where we need to process the inserted task at some point in the future. This blog post will cover some techniques I've used to address these kinds of use cases.&lt;/p&gt;

&lt;h4&gt;
  
  
  EventBridge
&lt;/h4&gt;

&lt;p&gt;EventBridge (formerly known as CloudWatch Events) can be used as a Lambda trigger to invoke a function at a regularly defined time interval. If we wanted to invoke the function once every hour, once every day, or once every week, we could create an EventBridge Rule. We'll be using one to process tasks in our DynamoDB table at a regularly scheduled interval. &lt;/p&gt;
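For reference, EventBridge rules accept rate and cron schedule expressions like the following (the cron fields are minute, hour, day-of-month, month, day-of-week, year):

```
rate(1 hour)          invoke once every hour
rate(30 minutes)      invoke every 30 minutes
cron(0 12 * * ? *)    invoke at 12:00 PM UTC every day
```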

&lt;h4&gt;
  
  
  Lambda Functions
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Task Processor
&lt;/h5&gt;

&lt;p&gt;This Lambda function will query the DynamoDB table and collect the tasks we need to process at a given time and then execute the tasks.&lt;/p&gt;

&lt;h5&gt;
  
  
  Task Scheduler
&lt;/h5&gt;

&lt;p&gt;This Lambda function will set the taskDateTime attribute for tasks as they're inserted into the table. You can use custom logic with DynamoDB Streams to define the taskDateTime for each task as it is inserted into the table. &lt;/p&gt;

&lt;h4&gt;
  
  
  DynamoDB Table
&lt;/h4&gt;

&lt;p&gt;Our DynamoDB table will be loaded with tasks that need to be processed at a specified time. Here's an example schema for inspiration.&lt;/p&gt;

&lt;p&gt;If we need to query the table based on a unique ID for each task, we could set the primary key to a uuid or another unique identifier like a phone number or email address. One example of this would be a de-duplication check from the application loading data into the table to make sure we're not inserting the same task multiple times. &lt;/p&gt;

&lt;p&gt;We'd then use a GSI to query tasks for a specific date or time. The GSI's primary key would be a "taskDateTime" attribute holding the time the task needs to be executed. &lt;/p&gt;

&lt;p&gt;Side Note: If we knew that we didn't have to conduct a deduplication check as one of our access patterns, then taskDateTime could be the primary key of the table. This would remove the cost incurred by setting up a GSI.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Primary Key&lt;/th&gt;
&lt;th&gt;taskDateTime (GSI Primary Key)&lt;/th&gt;
&lt;th&gt;Additional Attributes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Task 1&lt;/td&gt;
&lt;td&gt;123&lt;/td&gt;
&lt;td&gt;2021-12-31#12:00&lt;/td&gt;
&lt;td&gt;etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Task 2&lt;/td&gt;
&lt;td&gt;456&lt;/td&gt;
&lt;td&gt;2021-12-31#12:00&lt;/td&gt;
&lt;td&gt;etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Task 3&lt;/td&gt;
&lt;td&gt;789&lt;/td&gt;
&lt;td&gt;2021-12-31#03:00&lt;/td&gt;
&lt;td&gt;etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The Task Processor Lambda function would query this GSI to gather the tasks it needs to execute. At 3:00 AM on 2021-12-31, it'll retrieve and execute only Task 3, since that's the only task in the database scheduled for that time.&lt;/p&gt;

&lt;p&gt;One &lt;strong&gt;important&lt;/strong&gt; design consideration to note is that the taskDateTime values should be set to a date/time that will be picked up by the EventBridge Rule schedule. For example, if EventBridge is set to invoke the Task Processor function every 30 minutes (12:00, 12:30, 1:00, etc.), we do not want any tasks with taskDateTime set to times outside of that schedule, or else they won't be picked up by the Task Processor.&lt;/p&gt;
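One way to guarantee this alignment is to floor each task's timestamp to the rule's interval before writing it. A minimal sketch (the helper name and `YYYY-MM-DD#HH:MM` key format are illustrative, matching the example table above):

```python
from datetime import datetime

def task_date_time(dt, interval_minutes=30):
    """Floor dt to the previous schedule boundary and format it to
    match the GSI key style used in the table above (YYYY-MM-DD#HH:MM)."""
    floored_minute = (dt.minute // interval_minutes) * interval_minutes
    aligned = dt.replace(minute=floored_minute, second=0, microsecond=0)
    return aligned.strftime("%Y-%m-%d#%H:%M")
```

A task requested for 12:17 would be stored under 12:00 and picked up by the 12:00 invocation (or stored under 12:30 if you'd rather round up).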

&lt;h4&gt;
  
  
  Patterns
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Pre-Insertion Scheduling
&lt;/h5&gt;

&lt;p&gt;The Task Processor Lambda function queries the DynamoDB table, collects the tasks due at a given time, and executes them. This approach relies on the system loading tasks into DynamoDB to specify the taskDateTime value.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7rd7xnbzmhuvjtqig4yh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7rd7xnbzmhuvjtqig4yh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Post-Insertion Scheduling
&lt;/h5&gt;

&lt;p&gt;You may find yourself in a situation where the application loading data into DynamoDB either cannot set the taskDateTime attribute or doesn't know when the task should be processed. We can use the Task Scheduler Lambda function to add a taskDateTime value for each inserted record and then have the Task Processor Lambda function execute the task at the specified time.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flixifj5k2gxlicz6ge9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flixifj5k2gxlicz6ge9s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Thanks for reading this post, please feel free to leave your questions or feedback in the comments.&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>aws</category>
      <category>lambda</category>
      <category>eventbridge</category>
    </item>
    <item>
      <title>Processing Data in Real-Time with DynamoDB Streams</title>
      <dc:creator>Rohan</dc:creator>
      <pubDate>Sun, 17 Jan 2021 16:10:00 +0000</pubDate>
      <link>https://dev.to/rohanmehta_dev/processing-data-in-real-time-with-dynamodb-streams-1pmb</link>
      <guid>https://dev.to/rohanmehta_dev/processing-data-in-real-time-with-dynamodb-streams-1pmb</guid>
      <description>&lt;p&gt;Processing data in real-time is a common use case in serverless applications. Enabling DynamoDB Streams on a DynamoDB table opens up the possibility to process events (Inserts, Updates and Deletes) in real-time. Rather than having to poll a database for changes from user-written code AWS will handle the polling and push the data changes to the a Lambda function for processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhn0rtvznr5qfmrsdif6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhn0rtvznr5qfmrsdif6l.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What goes into the Stream?
&lt;/h3&gt;

&lt;p&gt;Every type of data event (Inserts, Modifications, and Deletes) is captured as part of a "stream record". The stream record contains the type of data event and the affected data. You can configure the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_StreamSpecification.html" rel="noopener noreferrer"&gt;StreamViewType&lt;/a&gt; when enabling a stream on a table to capture what's most suitable for your use case. Typically I go for "New and old images" to ensure I'll be able to capture the most information about changes in the table.&lt;/p&gt;
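For example, the "New and old images" choice corresponds to the following stream specification when creating or updating the table:

```json
"StreamSpecification": {
    "StreamEnabled": true,
    "StreamViewType": "NEW_AND_OLD_IMAGES"
}
```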

&lt;h3&gt;
  
  
  Use Cases:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Archiving Data from DynamoDB to S3:
&lt;/h4&gt;

&lt;p&gt;Storing frequently accessed data in DynamoDB is a great idea. Infrequently accessed data, not so much. If you know your access pattern no longer requires quick access to a subset of records, those records are perfect candidates for archival. One way to designate when records should be archived is specifying a &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html" rel="noopener noreferrer"&gt;time-to-live&lt;/a&gt; value when inserting data into the table. AWS will automatically delete that record from the table within 48 hours of the specified time. If you need time-sensitive deletion, it may be better to set up a CloudWatch Events rule + Lambda function to regularly query your table for items that should be deleted based on some given parameters.&lt;/p&gt;
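DynamoDB expects the TTL attribute to be a Number holding an epoch timestamp in seconds. A small sketch of computing that value (the helper name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def ttl_epoch(days_from_now, now=None):
    """Return the epoch timestamp (in seconds) to store in a table's
    TTL attribute; DynamoDB deletes the item within ~48 hours after
    this time passes."""
    now = now or datetime.now(timezone.utc)
    return int((now + timedelta(days=days_from_now)).timestamp())
```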

&lt;h4&gt;
  
  
  Data Replication
&lt;/h4&gt;

&lt;p&gt;If you need to copy records from a DynamoDB table into another database, you can set up a Lambda function to capture inserts into the DynamoDB table and write the contents of those records into the second table. If you need to mirror the DynamoDB table with the second table, you can expand the replication rules to capture modification and delete data events too.&lt;/p&gt;
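The core of such a replication function is turning a stream record's NewImage back into a plain item. A simplified sketch that handles only string (S) and number (N) attribute types; real stream records can carry more types:

```python
def replicate_insert(record):
    """Turn a DynamoDB stream INSERT record into a plain item dict
    ready to write to a second table. Non-insert events are ignored;
    expand the eventName check to mirror MODIFY and REMOVE too."""
    if record.get("eventName") != "INSERT":
        return None
    item = {}
    for key, value in record["dynamodb"]["NewImage"].items():
        (attr_type, raw), = value.items()   # e.g. {"S": "123"} or {"N": "5"}
        item[key] = raw if attr_type == "S" else float(raw)
    return item
```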

&lt;h4&gt;
  
  
  Sending messages to employees or customers
&lt;/h4&gt;

&lt;p&gt;As covered in a previous &lt;a href="https://dev.to/rohanmehta_dev/architecture-pattern-for-sms-campaigns-on-aws-17i4"&gt;article&lt;/a&gt;, I used DynamoDB as part of an application that would send out SMS messages to customers whose contact info was stored in DynamoDB. The business required that we send out the SMS in real time, so DynamoDB Streams were a perfect fit for the use case. (Thanks, real-time processing!) &lt;/p&gt;

&lt;h3&gt;
  
  
  Controller Patterns
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The Simple Controller
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ficod54k88dl7cela04ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ficod54k88dl7cela04ja.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
The Simple Controller is meant for simpler use cases where you may only be taking one or two actions on data events from a table. In this scenario, you can use a single Lambda function to process your stream data.&lt;/p&gt;

&lt;p&gt;This is a great way to start out with DynamoDB streams when your use cases are limited in nature and you don't have multiple developers writing code for the same Lambda function. If you find yourself or your team struggling with adding new functionality to the Lambda function as time passes, it's worth considering the next option.&lt;/p&gt;
&lt;h4&gt;
  
  
  The Complex Controller
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr0va2n0kg1edctojephq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr0va2n0kg1edctojephq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;
  Mobile View
  &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdrwlk3vrqc2szqg74wav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdrwlk3vrqc2szqg74wav.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
 

&lt;/p&gt;

&lt;p&gt;The Complex Controller decouples your stream controller and stream processor actions from one another and may be suitable for larger applications and teams. Code changes can happen in parallel without affecting the code that processes other data event types. Additionally, each Stream Processor Function can be configured and scaled separately.&lt;br&gt;
&lt;strong&gt;The Stream Controller:&lt;/strong&gt;&lt;br&gt;
You still have a single Lambda function (&lt;strong&gt;The Stream Controller&lt;/strong&gt;) that will be invoked by the DynamoDB stream. However, instead of this Lambda function taking actions upon the data in the stream, the stream controller will send the stream data to an SNS Topic with a specified &lt;a href="https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html" rel="noopener noreferrer"&gt;message filter attribute&lt;/a&gt;.&lt;br&gt;
&lt;strong&gt;The Stream Router:&lt;/strong&gt;&lt;br&gt;
We'll use an SNS Topic to fan out the messages to our Stream Processors. The SNS Topic will only pass a message on to the functions subscribed for a particular message filter attribute value: a Lambda function subscribed to Insert events will receive only those events, and functions subscribed to Delete events won't receive Insert events.&lt;br&gt;
&lt;strong&gt;The Stream Processors:&lt;/strong&gt;&lt;br&gt;
The Stream Processors will then act on the stream data. As I mentioned earlier, each stream processor can be scaled separately based on the behavior of the system it's interacting with. For example, if you get throttled by a downstream service when sending too many requests at one time, you can set up SNS Topic -&amp;gt; SQS Queue -&amp;gt; Lambda Function with Reserved Concurrency enabled to reduce the number of concurrent requests made. For more on that, check out this great &lt;a href="https://www.serverless.com/blog/aws-lambda-sqs-serverless-integration" rel="noopener noreferrer"&gt;post&lt;/a&gt; by Alex DeBrie. &lt;/p&gt;
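A sketch of what the Stream Controller could pass to SNS Publish, tagging each message with the stream record's event type so subscription filter policies can route it (attribute and function names are illustrative):

```python
import json

def controller_publish_args(record, topic_arn):
    """Build the sns.publish(...) arguments for one stream record,
    attaching the event type as a message attribute for filtering."""
    return {
        "TopicArn": topic_arn,
        "Message": json.dumps(record),
        "MessageAttributes": {
            "eventName": {
                "DataType": "String",
                "StringValue": record["eventName"],  # INSERT / MODIFY / REMOVE
            }
        },
    }
```

A Stream Processor that should only see inserts would then subscribe with the filter policy `{"eventName": ["INSERT"]}`.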

&lt;p&gt;The added flexibility from the Complex Controller does have tradeoffs. You'll be charged for the additional Lambda invocation and SNS message, so it's worth evaluating if the approach is worth the cost. Based on my own experience, paying the extra usage costs made sense for complex use cases, since it was simpler to test and troubleshoot the decoupled Stream Processors, but YMMV.&lt;/p&gt;

&lt;h3&gt;
  
  
  Controller Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Simple Controller&lt;/th&gt;
&lt;th&gt;Complex Controller&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Stream Processing&lt;/td&gt;
&lt;td&gt;1 or 2 simpler actions&lt;/td&gt;
&lt;td&gt;3+ actions, complex data processing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;Charged for 1 Lambda invocation&lt;/td&gt;
&lt;td&gt;Charged for 2 Lambda invocations and SNS Message&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintainability&lt;/td&gt;
&lt;td&gt;Easy to maintain with limited functionality&lt;/td&gt;
&lt;td&gt;Easier to maintain with expanded functionality&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Processor Scaling&lt;/td&gt;
&lt;td&gt;Every event type is processed with the same settings for scaling/concurrency&lt;/td&gt;
&lt;td&gt;Stream Processors can be scaled based on expected load and limitations of downstream systems&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  What if I don't want to process my data until later?
&lt;/h3&gt;

&lt;p&gt;If you need to hold off on processing the data for up to 15 minutes, I'd advise trying out &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html" rel="noopener noreferrer"&gt;SQS Delay Queues&lt;/a&gt;. The Stream Controller Lambda can insert a message into an SQS queue with a delay enabled and the stream record can be processed after the specified delay period.&lt;br&gt;
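The delay is set per message and is capped at 15 minutes, which is worth encoding explicitly. A small sketch of the arguments such a controller could pass to sqs.send_message (names are illustrative):

```python
MAX_SQS_DELAY_SECONDS = 900  # SQS caps per-message DelaySeconds at 15 minutes

def delayed_message_args(queue_url, body, delay_seconds):
    """Build sqs.send_message(...) arguments with a delivery delay.
    Delays longer than 15 minutes need a different mechanism entirely."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "DelaySeconds": min(delay_seconds, MAX_SQS_DELAY_SECONDS),
    }
```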
If your stream data needs to be processed the next day or some other time after that, I wouldn't recommend using DynamoDB Streams or SQS Message Delivery Delays as those are more suitable for real-time or near-real-time use cases. I'll be covering how I processed these kinds of records in a future article.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Thanks for reading this post, please feel free to leave your questions or feedback in the comments. I noticed that the dev.to mobile app didn't have a great way to view images in landscape, so I created a vertical view of the Complex Controller diagram, please let me know if you liked that view or if there's another way you think the image could be better displayed in the mobile app.&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>aws</category>
      <category>lambda</category>
      <category>s3</category>
    </item>
    <item>
      <title>SMS Campaigns on Amazon Pinpoint - One Time Send</title>
      <dc:creator>Rohan</dc:creator>
      <pubDate>Mon, 11 Jan 2021 04:36:57 +0000</pubDate>
      <link>https://dev.to/rohanmehta_dev/architecture-pattern-for-sms-campaigns-on-aws-17i4</link>
      <guid>https://dev.to/rohanmehta_dev/architecture-pattern-for-sms-campaigns-on-aws-17i4</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Amazon Pinpoint is a service that allows SMS messages to be sent programmatically via the AWS SDK or Boto3 libraries. (Psst - think of it like AWS's own Twilio)&lt;br&gt;
I've been using Pinpoint for a year at work and wanted to share some approaches, tips, and best practices that I've learned in building applications with it. This article will cover the approach I call the "One Time Send". In this approach, we'll send out a single SMS at a time to a customer and collect the SMS delivery result.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basics
&lt;/h3&gt;

&lt;p&gt;This article assumes that you'll know the basics of AWS and building serverless applications using Lambda functions.&lt;/p&gt;

&lt;p&gt;I like to use Serverless Framework to deploy my infrastructure and Lambda function code, but any of the big IaC frameworks can be used.&lt;/p&gt;

&lt;p&gt;For drawing architecture diagrams, I enjoy using &lt;a href="//draw.io"&gt;draw.io&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  One Time Send Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F54c0srybktb1ft0k2bb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F54c0srybktb1ft0k2bb7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Campaign Data Sources
&lt;/h3&gt;

&lt;p&gt;Campaign Data Sources are inputs to your SMS campaign.  Here are a few examples of data sources that can be used to trigger SMS Campaigns:&lt;/p&gt;

&lt;p&gt;External Integrations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CRM Systems: Customer accounts and cases in CRM systems can receive targeted SMS messages based on events / values in the CRM system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Web Form Submissions: Customers can fill out web forms and receive an SMS confirming their submission.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS Services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;DynamoDB: Updates made to a DynamoDB table as part of a workflow in a new or existing application can be brought in as a data source for SMS campaigns. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Manual File Uploads: Users can upload lists of SMS recipients in CSV Format to an S3 Bucket.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Required Fields
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Phone Number&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Optional Fields
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Case / Account ID (For CRM Systems)&lt;/li&gt;
&lt;li&gt;Message Type&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Validate Input and Load Task Database
&lt;/h3&gt;

&lt;p&gt;This step is meant to validate the input from our Campaign Data Source and load a new "task" into the Database. Each task is another SMS we need to send out.&lt;/p&gt;

&lt;p&gt;External Integrations&lt;/p&gt;

&lt;p&gt;Most External Integrations will require us to set up an API Gateway endpoint to connect with the rest of our application. API Gateway can then pass the data to a Lambda function to perform the validation and load steps. &lt;/p&gt;

&lt;p&gt;AWS Services&lt;/p&gt;

&lt;p&gt;We can take advantage of Lambda triggers to natively integrate our AWS resources acting as Data Sources. Keep in mind that SQS can be used to add retries / reliability in the workflow.&lt;/p&gt;

&lt;h5&gt;
  
  
  Possible Validations:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;De-Duplication Check: If you only need to send out an SMS one time in a given day or through the duration of the whole campaign, the Lambda function here can query the Task Table to check if an SMS has already been sent.&lt;/li&gt;
&lt;li&gt;Data Integrity: If the Campaign Data Source has invalid numbers - think 999-999-9999 or 000-000-0000 - these kinds of entries can be filtered out from the campaign.&lt;/li&gt;
&lt;li&gt;Active Hours: If the campaign is only supposed to run at certain hours of the day, a time check could be added to see if it's an appropriate time to send out an SMS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Possible Transformations:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The Pinpoint API requires numbers in E.164 format: a plus sign, the country code, then the number, e.g. +11110001111. You can transform the input phone number values to match that format - you may find yourself adding a country code or taking out parentheses and dashes. Regex is your friend.&lt;/li&gt;
&lt;/ul&gt;
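A minimal sketch of that transformation, assuming US-style 10-digit inputs when no country code is present (the helper name is illustrative):

```python
import re

def to_e164(raw, default_country_code="1"):
    """Normalize a raw phone number like '(111) 000-1111' to the
    +11110001111 format Pinpoint expects. Assumes a 10-digit number
    is missing its country code; real input may need more validation."""
    digits = re.sub(r"\D", "", raw)       # strip everything but digits
    if len(digits) == 10:
        digits = default_country_code + digits
    return "+" + digits
```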

&lt;h3&gt;
  
  
  Task Database
&lt;/h3&gt;

&lt;p&gt;Why insert tasks into a database? Can't we just do everything in the Validation Lambda function?&lt;/p&gt;

&lt;p&gt;Here's what we can do with the tasks stored in a database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can set up de-duplication logic in the validation step to ensure we're not sending multiple SMS to a single customer when we're not supposed to.&lt;/li&gt;
&lt;li&gt;We have a record of how many customers were entered into our campaign, were sent an SMS, and successfully received an SMS. We can use these metrics for some light data analysis.&lt;/li&gt;
&lt;li&gt;If Pinpoint has an outage, we'll be able to re-trigger the SMS sending once it comes back online.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've been using DynamoDB as my Task Database as it pairs very nicely with Lambda functions. Enabling DynamoDB Streams will allow the contents of new items to be streamed to the Send SMS Lambda function.&lt;/p&gt;

&lt;p&gt;A few other DynamoDB features that I've found very useful are time to live, on-demand scaling, and Cloudwatch metrics for Read and Write Consumed Units.&lt;/p&gt;

&lt;h5&gt;
  
  
  Possible Partition Keys:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;If you need to send out 1 SMS per day to a number, you could have the phone number as primary key and the date as sort key.&lt;/li&gt;
&lt;li&gt;If you need to send out multiple SMS to the same number and each SMS is independent of the others, you could have the phone number as primary key and a UUID as sort key.&lt;/li&gt;
&lt;li&gt;If you need to send out 1 SMS to a number for duration of the whole campaign, you could have the phone number as primary key without any sort key.&lt;/li&gt;
&lt;/ul&gt;
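The three schemes above can be sketched as a single key-building helper (illustrative only; a real table would use exactly one scheme, and the 'mode' switch is just to show all three side by side):

```python
import uuid
from datetime import date

def task_key(phone_number, mode):
    """Build the task item's key for one of the three schemes above."""
    if mode == "daily":            # 1 SMS per number per day
        return {"phoneNumber": phone_number, "date": date.today().isoformat()}
    if mode == "independent":      # multiple independent SMS per number
        return {"phoneNumber": phone_number, "messageId": str(uuid.uuid4())}
    return {"phoneNumber": phone_number}  # 1 SMS per campaign
```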

&lt;h5&gt;
  
  
  Possible Fields:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Phone Number&lt;/li&gt;
&lt;li&gt;Date&lt;/li&gt;
&lt;li&gt;Delivery Result&lt;/li&gt;
&lt;li&gt;Time to Live (Automated Deletion)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Send SMS
&lt;/h3&gt;

&lt;p&gt;This step processes the contents of the DynamoDB Streams. As new items are inserted into the database, we'll send out a new SMS for each one. &lt;/p&gt;

&lt;p&gt;Tips for your Send SMS Function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Watch your Function Timeout setting: if you're inserting large amounts of data at once, each DynamoDB stream event will be large, and it'll take time to send out each SMS. As a rule of thumb, it would take about 20 seconds to send out 50 SMS.&lt;/li&gt;
&lt;li&gt;Increase Function Memory to 512 MB or greater to reduce the duration of the invocation.&lt;/li&gt;
&lt;li&gt;Send out 1 SMS per Pinpoint API call. This will give you a unique message ID per SMS sent out.&lt;/li&gt;
&lt;li&gt;A "successful" Pinpoint API response status doesn't mean the customer received the SMS. You'll have to enable the Pinpoint Events stream and read Delivery Statuses from the events logged to an S3 bucket.
#### SMS Delivery Result&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned in the previous step, we'll need to stream events from our Pinpoint application to an S3 bucket. It can take anywhere from 30 minutes to 8 hours to receive the SMS delivery result.&lt;/p&gt;

&lt;p&gt;I like to use Kinesis Firehose to stream events directly into the S3 bucket. If you have an analytics tool that you like to use, you can ingest these files into a data warehouse to gain insights into your SMS campaign performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Update Delivery Result
&lt;/h3&gt;

&lt;p&gt;Once we see whether the SMS was successfully delivered to the customer, we can update the entries in our task database and, if applicable, the original Campaign Data Source to include the delivery result. Thanks to Lambda and SQS, we can listen for new files inserted into the Delivery Result S3 bucket and update those databases as soon as we know the delivery result. &lt;/p&gt;

&lt;p&gt;Tips for your Update Delivery Result Function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up an SQS Queue in between your S3 bucket and Lambda function for resiliency!&lt;/li&gt;
&lt;li&gt;Set up an SQS DLQ for that SQS queue in order to analyze failed records in production.&lt;/li&gt;
&lt;li&gt;Increase Timeout / Memory: Just like I mentioned earlier, this will be helpful for campaigns sending lots of SMS's.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading about my approach to SMS campaigns. I'm planning on posting approaches for other kinds of SMS campaigns in the coming days - stay tuned. Happy to answer any questions in the comments about the architecture above or Pinpoint.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>dynamodb</category>
      <category>pinpoint</category>
    </item>
    <item>
      <title>Studying for the AWS Certified Solutions Architect - Associate Exam</title>
      <dc:creator>Rohan</dc:creator>
      <pubDate>Thu, 06 Feb 2020 15:20:47 +0000</pubDate>
      <link>https://dev.to/rohanmehta_dev/studying-for-the-aws-certified-solutions-architect-associate-exam-4h42</link>
      <guid>https://dev.to/rohanmehta_dev/studying-for-the-aws-certified-solutions-architect-associate-exam-4h42</guid>
      <description>&lt;p&gt;Yesterday I passed the Certified Solutions Architect Associate level exam. &lt;br&gt;
Here is a guide based on my experience studying for the exam.&lt;/p&gt;

&lt;p&gt;The majority of the questions were on S3, RDS, EC2, and VPC as you'd expect.&lt;br&gt;
I also had a number of questions on DynamoDB, Cloudfront, Auto Scaling, EBS, and ELB.&lt;/p&gt;

&lt;p&gt;Guide to services covered in the exam - a nice, concise view of all the links to the various AWS resources (White Papers and FAQs):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tutorialsdojo.com/aws-certified-solutions-architect-associate/"&gt;https://tutorialsdojo.com/aws-certified-solutions-architect-associate/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;There are a few video lecture series on Udemy that can be bought for $10-15 on sale.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.udemy.com/course/linux-academy-aws-certified-solutions-architect-associate/"&gt;https://www.udemy.com/course/linux-academy-aws-certified-solutions-architect-associate/&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c01/"&gt;https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c01/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ACloudGuru courses are popular but aren't in depth enough for the exam in my opinion.&lt;/p&gt;

&lt;p&gt;A condensed version of the AWS FAQ section for each of the services on the exam. Very useful for reviewing the services after watching lecture videos.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tutorialsdojo.com/aws-cheat-sheets/"&gt;https://tutorialsdojo.com/aws-cheat-sheets/&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Practice Exams:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams/"&gt;https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This practice exam set is very representative of the concepts covered in the actual exam. I even had a few questions that were identical to questions in this practice set.&lt;/p&gt;

&lt;p&gt;The first time I took these exams, I scored in the 70s. I had covered the ACloudGuru videos and some FAQs, but didn't know where to focus my studying. These exams helped a lot in understanding what kinds of questions are asked. My process was to review the detailed answers provided at the end of each exam, continue studying the materials, and take the test again after 3 weeks. You should be ready for the real exam if you can score 85-90% on the second attempt.&lt;br&gt;
From what I observed, exams from other vendors such as ACloudGuru are designed to be harder than the actual exam, whereas these questions were more representative of it. &lt;/p&gt;

&lt;p&gt;Note: For all these Udemy links, wait until you see them on sale for 80-90% off, the sales happen every other week. Don't spend $200 on a single course.&lt;/p&gt;

&lt;p&gt;Please feel free to ask me any questions, I am happy to help!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Serverless Spotify Playlist Updates</title>
      <dc:creator>Rohan</dc:creator>
      <pubDate>Tue, 12 Nov 2019 16:16:37 +0000</pubDate>
      <link>https://dev.to/rohanmehta_dev/serverless-spotify-playlist-updates-4oln</link>
      <guid>https://dev.to/rohanmehta_dev/serverless-spotify-playlist-updates-4oln</guid>
      <description>&lt;p&gt;Ever wonder if you could get notified when new songs are added to a Spotify playlist?&lt;/p&gt;

&lt;p&gt;I made a Twitter bot using node.js that tweets out new songs added to Spotify's &lt;a href="https://open.spotify.com/playlist/37i9dQZF1DWWBHeXOYZf74?si=7JIGBdDaQQm_DYosKZc3qg"&gt;POLLEN&lt;/a&gt; playlist.&lt;br&gt;
Link to the GitHub repo: &lt;a href="https://github.com/rohanmeh/SpotifyUpdates"&gt;SpotifyUpdates&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools and Frameworks used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/serverless"&gt;Serverless Framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html"&gt;AWS CloudWatch Events&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/dynamodb/"&gt;AWS DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/aws-sdk"&gt;aws-sdk npm package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/twitter"&gt;Twitter for Node.js npm package&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
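
&lt;p&gt;With the Serverless Framework, the hourly CloudWatch Events trigger can be declared directly in &lt;code&gt;serverless.yml&lt;/code&gt;. A minimal sketch (the function name and handler path here are illustrative, not taken from the repo):&lt;/p&gt;

```yaml
# Sketch: an hourly CloudWatch Events schedule for a Lambda function.
# Function name and handler path are assumptions, not from the repo.
functions:
  updatePlaylist:
    handler: handler.run
    events:
      - schedule: rate(1 hour)  # CloudWatch Events rule, fires once an hour
```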

&lt;p&gt;App Workflow:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F87Yi264--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7i7akijmzwyvmxx7yze3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F87Yi264--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7i7akijmzwyvmxx7yze3.png" alt="App workflow diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CloudWatch Events Rule is set to trigger the Lambda function once an hour.&lt;/li&gt;
&lt;li&gt;The Lambda function retrieves all songs from Spotify's POLLEN playlist.&lt;/li&gt;
&lt;li&gt;If a song is not currently in the DynamoDB table, add the song to the table.&lt;/li&gt;
&lt;li&gt;For each new song, post a tweet with the artist name, song title, and link to the song.&lt;/li&gt;
&lt;/ol&gt;
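
&lt;p&gt;The steps above can be sketched as a single handler. The helper below is the core "which songs are new?" check; everything else (the Spotify fetch, the DynamoDB read, the tweet) is outlined in comments, and all names are illustrative rather than taken from the repo:&lt;/p&gt;

```javascript
// Pure helper: given the playlist tracks and the set of track IDs already
// stored in DynamoDB, return only the tracks we haven't seen before.
function findNewSongs(playlistTracks, knownIds) {
  return playlistTracks.filter((track) => !knownIds.has(track.id));
}

// Handler outline (fetchPlaylistTracks, loadKnownIds, putSong, and tweetSong
// are hypothetical helpers wrapping the Spotify, DynamoDB, and Twitter calls):
// exports.handler = async () => {
//   const tracks = await fetchPlaylistTracks(POLLEN_PLAYLIST_ID); // step 2
//   const knownIds = await loadKnownIds('SongsTable');
//   for (const track of findNewSongs(tracks, knownIds)) {
//     await putSong('SongsTable', track); // step 3
//     await tweetSong(track);             // step 4
//   }
// };
```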

&lt;p&gt;My Thoughts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why is this a good use case for AWS Lambda? Given that I'm only running the CloudWatch Events Rule once an hour, I don't need to manage a server and run a web app 24/7. It's easier and cheaper to use a Lambda function since the code will run only when invoked.&lt;/li&gt;
&lt;li&gt;I've used Lambda before but this was my first time using the Serverless Framework. I found it very useful to be able to invoke my function locally before deploying to AWS. &lt;/li&gt;
&lt;li&gt;Initially I planned to use the npm package Lowdb to store the songs within my Lambda file, but I then read that Lambda functions should be stateless and that any persistent state should be kept in a separate database. This was the first time I used DynamoDB in a personal project, and I found the aws-sdk npm package quite easy to use for writing data to my DynamoDB table.
&lt;/li&gt;
&lt;/ul&gt;
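
&lt;p&gt;For the DynamoDB write mentioned above, the aws-sdk &lt;code&gt;DocumentClient&lt;/code&gt; takes a plain parameter object. A sketch of building those parameters (the table name and item attributes are assumptions, not from the repo):&lt;/p&gt;

```javascript
// Sketch: build the DocumentClient.put() parameters for one Spotify track.
// The table name 'SpotifySongs' and the item attributes are assumptions.
function buildPutParams(track) {
  return {
    TableName: 'SpotifySongs',
    Item: {
      id: track.id, // partition key: the Spotify track ID
      title: track.name,
      artist: track.artists.map((a) => a.name).join(', '),
      url: track.external_urls.spotify,
    },
    // Only write if this track ID isn't already stored.
    ConditionExpression: 'attribute_not_exists(id)',
  };
}

// With aws-sdk it would be used roughly as:
//   await new AWS.DynamoDB.DocumentClient().put(buildPutParams(track)).promise();
```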

&lt;p&gt;Next Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I'd like to separate the functionality of my single Lambda function into multiple Lambda functions connected through &lt;a href="https://docs.aws.amazon.com/sns/latest/dg/welcome.html"&gt;AWS Simple Notification Service&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://serverless.com/plugins/serverless-mocha-plugin/"&gt;Serverless Mocha Plugin&lt;/a&gt; to create and run a test suite for my function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Useful Links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.spotify.com/documentation/web-api/"&gt;Spotify Web API Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.twitter.com/en/docs/basics/getting-started"&gt;Twitter API Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.bitsrc.io/understanding-javascript-async-and-await-with-examples-a010b03926ea"&gt;Deeply Understanding JavaScript Async and Await with Examples&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please feel free to leave questions or comments below!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>api</category>
      <category>node</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
