<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rodrigo Eduardo Nehring</title>
    <description>The latest articles on DEV Community by Rodrigo Eduardo Nehring (@rodrigonehring).</description>
    <link>https://dev.to/rodrigonehring</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F274320%2Fa3094564-c2c3-418a-ac15-8075caf4fb92.jpeg</url>
      <title>DEV Community: Rodrigo Eduardo Nehring</title>
      <link>https://dev.to/rodrigonehring</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rodrigonehring"/>
    <language>en</language>
    <item>
      <title>AWS API Gateway and Lambda limitations workaround</title>
      <dc:creator>Rodrigo Eduardo Nehring</dc:creator>
      <pubDate>Mon, 18 Nov 2019 14:10:48 +0000</pubDate>
      <link>https://dev.to/rodrigonehring/aws-api-gateway-and-lambda-limitations-workaround-38j8</link>
      <guid>https://dev.to/rodrigonehring/aws-api-gateway-and-lambda-limitations-workaround-38j8</guid>
      <description>&lt;p&gt;When using AWS API Gateway and Lambda, you will probably reach a few limitations, in this article, I will explain how I created one workaround to overcome the 29 seconds of maximum API Gateway request and 6mb of maximum body payload size.&lt;/p&gt;

&lt;p&gt;As an example, our administrative dashboard collects data from MySQL and Elasticsearch, which can be very slow in some cases.&lt;/p&gt;

&lt;p&gt;On our front-end, a React app, the request can take up to 15 minutes (the maximum Lambda execution time). The usage is as simple as this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fKXK4gGz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1870/1%2A-2SkQb6ruQqt1KW5xvImYg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fKXK4gGz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1870/1%2A-2SkQb6ruQqt1KW5xvImYg.png" alt="usage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The api.js file will contain:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7H_SSBul--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/myL3P32.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7H_SSBul--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/myL3P32.png" alt="api example"&gt;&lt;/a&gt;&lt;/p&gt;
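&lt;p&gt;The screenshots above show the real api.js; as a rough, hypothetical sketch of the same flow (the endpoint paths, function names, and injectable fetchImpl here are my assumptions for illustration, not the actual code), a polling client could look like this:&lt;/p&gt;

```javascript
// Hypothetical sketch of an api.js polling client; endpoint paths and
// function names are assumptions, not the article's actual code.
// `fetchImpl` is injectable so the flow can be exercised without a network.
function createApi(fetchImpl) {
  // Kick off a worker job, then wait until its result URL is available.
  async function runJob(type, params) {
    const start = await fetchImpl('/api/worker/start', {
      method: 'POST',
      body: JSON.stringify({ type, ...params }),
    });
    const { jobId } = await start.json();

    // The retrieve endpoint long-polls server-side and answers with a
    // 303 redirect to itself while the job is still pending, so the
    // browser's fetch keeps following redirects until a URL comes back.
    const res = await fetchImpl(`/api/worker/retrieve?jobId=${jobId}`);
    return res.json(); // e.g. { url: '...signed S3 URL...' }
  }

  return { runJob };
}

module.exports = { createApi };
```

&lt;p&gt;Because the retries happen via redirects, the client stays a plain pair of requests with no polling loop of its own.&lt;/p&gt;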

&lt;p&gt;A summary of the back-end architecture:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XFwo3TN3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/b8riqztmxh8d34kaesmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XFwo3TN3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/b8riqztmxh8d34kaesmc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, we start the workflow by sending a POST request, passing any necessary parameters in the body. This request needs a “type” parameter to define which function the Lambda worker will execute, and it returns a jobId for future use.&lt;br&gt;
An item is created in the DynamoDB workers table, with the jobId as the hash key and the status set to “pending”.&lt;/p&gt;

&lt;p&gt;When the worker’s promise settles, whether with an error or not, that DynamoDB item is updated with the new status and the S3 key of the result, which can be a JSON, XLSX, or any other useful file type.&lt;/p&gt;
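&lt;p&gt;The completion step could be sketched like this. The status transitions and the S3 key follow the article; the wrapper name, the key format, and the injected uploadToS3/updateItem callbacks are hypothetical:&lt;/p&gt;

```javascript
// Hypothetical sketch of how the worker finishes a job: run the actual
// work, upload the result to S3, and flip the DynamoDB item either way.
// `uploadToS3` and `updateItem` are injected stand-ins for the AWS calls.
function createFinishJob(uploadToS3, updateItem) {
  return async function finishJob(jobId, work) {
    try {
      const result = await work();
      const s3Key = `results/${jobId}.json`;
      await uploadToS3(s3Key, JSON.stringify(result));
      await updateItem({ jobId, status: 'done', s3Key });
    } catch (err) {
      // Failures are recorded too, so the retrieve endpoint can stop polling.
      await updateItem({ jobId, status: 'error', error: String(err) });
    }
  };
}

module.exports = { createFinishJob };
```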

&lt;p&gt;On the front-end, using the received jobId, we fire another request, GET /api/worker/retrieve, passing the jobId as a query parameter. This request starts a Lambda function that verifies whether the worker is done.&lt;/p&gt;

&lt;p&gt;If the status is “done”, we take the S3 key that was saved earlier, sign a URL for it with a one-hour expiration, and return it to the front-end.&lt;/p&gt;

&lt;p&gt;If the status is “pending”, we loop, checking every 100 ms whether the status has changed. If the elapsed time exceeds 21 seconds, we return a 303 redirect to the same URL, so the front-end doesn’t need to fire the request again; the browser handles that for us.&lt;/p&gt;
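&lt;p&gt;The retrieve Lambda described above could be sketched like this. The 100 ms poll, the 21-second cut-off, the one-hour expiry, and the 303 redirect come from this article; the handler shape and the injected getItem/signUrl callbacks are my assumptions:&lt;/p&gt;

```javascript
// Hypothetical sketch of the retrieve Lambda: poll DynamoDB until the job
// is done or ~21 s elapse, then either return a signed URL or a 303 back
// to the same endpoint. `getItem`/`signUrl` stand in for the AWS calls and
// the timings are overridable so the loop is testable.
function createRetrieveHandler(getItem, signUrl, opts = {}) {
  const pollMs = opts.pollMs || 100;         // check every 100 ms
  const maxWaitMs = opts.maxWaitMs || 21000; // stay well under the 29 s gateway cap

  const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

  return async function handler(event) {
    const { jobId } = event.queryStringParameters;
    const start = Date.now();

    while (true) {
      const item = await getItem(jobId);
      if (item.status === 'done') {
        // Sign the stored S3 key with a 1-hour expiry and hand it back.
        const url = await signUrl(item.s3Key, 3600);
        return { statusCode: 200, body: JSON.stringify({ url }) };
      }
      if (Date.now() - start + pollMs >= maxWaitMs) {
        // Still pending: redirect the browser back to this same URL so it
        // retries transparently instead of timing out at the gateway.
        return {
          statusCode: 303,
          headers: { Location: `/api/worker/retrieve?jobId=${jobId}` },
        };
      }
      await sleep(pollMs);
    }
  };
}

module.exports = { createRetrieveHandler };
```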

&lt;p&gt;After all that, the front-end receives a URL to an S3 file. If the type is JSON, I make a request to it and parse the response; otherwise, I just return the URL.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
