<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Serverless Inc.</title>
    <description>The latest articles on DEV Community by Serverless Inc. (@serverless_inc).</description>
    <link>https://dev.to/serverless_inc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5006%2F110cd82f-d46f-4c0b-86f9-e6eb8b545353.jpeg</url>
      <title>DEV Community: Serverless Inc.</title>
      <link>https://dev.to/serverless_inc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/serverless_inc"/>
    <language>en</language>
    <item>
      <title>Serverless monitoring — the good, the bad and the ugly</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Fri, 03 Jun 2022 14:55:45 +0000</pubDate>
      <link>https://dev.to/serverless_inc/serverless-monitoring-the-good-the-bad-and-the-ugly-23g6</link>
      <guid>https://dev.to/serverless_inc/serverless-monitoring-the-good-the-bad-and-the-ugly-23g6</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/serverless-monitoring-the-good-the-bad-and-the-ugly/"&gt;Serverless&lt;/a&gt; on September 26th, 2017&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8ieAyI21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AoGt9Cb9MZrPw9N7N.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8ieAyI21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AoGt9Cb9MZrPw9N7N.gif" alt="" width="432" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not so long ago, a job requirement pushed me into the world of FaaS, and I was thrilled. I had dreams of abstraction — eliminating all that tedious work no developer likes doing. “We are not operations engineers!” I exclaimed proudly. “We should not need to dabble in the dark arts of the Linux Shell.”&lt;/p&gt;

&lt;p&gt;But little did I know how wrong I was. We humans are creatures of habit, and one of my habits as an AWS user is checking the AWS Console religiously. It was my central place to monitor everything I needed to know about my servers’ health.&lt;/p&gt;

&lt;p&gt;Now comes the difficult question: How does monitoring work when using AWS Lambda and Serverless?&lt;/p&gt;

&lt;h2&gt;Monitoring 101&lt;/h2&gt;

&lt;p&gt;All applications have metrics we, as developers, need to monitor. This is crucial: downtime and slow apps can create some pretty grumpy customers.&lt;/p&gt;

&lt;p&gt;Trust me, I get angry phone calls and rage mail every once in a while. So how can you avoid getting yelled at by customers? Track your errors and monitor your software!&lt;/p&gt;

&lt;p&gt;Implement a good notification system that lets you know when and where an error occurred. Make sure to keep good, easy-to-view logs of all the errors, warnings and other crucial data your application creates. Be responsible for the software you write; it is our legacy as developers. We have taken an oath to be creators of awesome stuff!&lt;/p&gt;

&lt;p&gt;But user experience is only one side of the performance picture. The second crucial metric is computational resource consumption. How many resources is the app using? If it is using only a fraction of them, you can scale your servers down; if it is capping all available resources, you may want larger servers, or more of them.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; I recently came across an &lt;a href="https://hackernoon.com/node-js-monitoring-done-right-70418ecbbff9"&gt;awesome article&lt;/a&gt; on this topic by none other than the CTO of RisingStack, Peter Marton. He explains in detail how to do monitoring right. I urge you to take a peek; it will change your view on monitoring forever.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Overhead?&lt;/h2&gt;

&lt;p&gt;Excuse me…? Can I have some monitoring, please? But without it being a burden on my application.&lt;/p&gt;

&lt;p&gt;We’re lucky that, in 2017, this is a given. Monitoring software has become so advanced that the overhead is minimal. The sun was not shining so bright back in the day, though: it was common knowledge that monitoring an application would significantly impact its performance.&lt;/p&gt;

&lt;h2&gt;How does this translate to Serverless?&lt;/h2&gt;

&lt;p&gt;The Serverless revolution has been gaining strength for the past few years. I see no reason for it to stop. The hype is real.&lt;/p&gt;

&lt;p&gt;Developers are starting to view the &lt;strong&gt;F&lt;/strong&gt;unction as a &lt;strong&gt;S&lt;/strong&gt;ervice architecture as a savior, something that makes it possible to scale applications automatically and serve only as many users as needed. The pay-as-you-go model cuts costs drastically and makes it possible for startups to create awesome software for a fraction of the cost.&lt;/p&gt;

&lt;p&gt;But, wait a minute. What else needs to be cut for that to become a possibility?&lt;/p&gt;

&lt;p&gt;A couple of things come to mind. An overview of your code’s performance and error tracking come first. Silent failures as well. How do you monitor the performance of a server that is not a server? Schrödinger’s server? Okay, now my head hurts.&lt;/p&gt;

&lt;p&gt;This paradox needs a new perspective. Monitoring Serverless is a new beast in itself. Traditional methods will not work. A new mindset is in order.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server != functions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Instead of telling our functions to send along additional data with every invocation, why not just collect the residual data they already produce? This is a cool idea! It’s a known fact that all AWS Lambda functions send their logs to AWS CloudWatch.&lt;/p&gt;

&lt;h2&gt;Serverless is unforgiving&lt;/h2&gt;

&lt;p&gt;Unlike in traditional applications, you don’t have a full overview of every part of your system. Not to mention how hard Serverless is to test: you either push code to AWS to see if it’s working or spend an eternity setting up emulators on your local machine. The process is incredibly tedious. And don’t get me started on adding third-party services to your app; it creates overhead and additional costs. Try attaching a monitoring service to every single Lambda function. That’s never going to scale well!&lt;/p&gt;

&lt;p&gt;Let’s imagine a scenario of monitoring a simple function on AWS Lambda. The purpose is to test the function and check the verbosity of the logs on CloudWatch.&lt;/p&gt;

&lt;p&gt;After hitting the endpoint with Postman a couple of times I’m assured it works fine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K7dsZn79--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3840/0%2AhfQJruXEfZ1ZAkrt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K7dsZn79--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3840/0%2AhfQJruXEfZ1ZAkrt.png" alt="" width="880" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Opening up CloudWatch I can see the logs clearly. All the function invocations are listed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wtWQjIHn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3780/0%2ApLvsHuoTJe9h_Zns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wtWQjIHn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3780/0%2ApLvsHuoTJe9h_Zns.png" alt="" width="880" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The logs are extensive; the only issue is that I can’t seem to make any sense of them. I can see the functions that were invoked, but not much else. Error messages for failing functions are not verbose enough, so they often go unnoticed. I’m also having a hard time finding functions that timed out.&lt;/p&gt;

&lt;p&gt;I also tried logging through the command line. It shows possible errors a bit better, but still, not good enough to have peace of mind.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless logs -f my-function
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KTBf5_wn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3752/0%2ANwyIsPnvRhl5T-wL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KTBf5_wn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3752/0%2ANwyIsPnvRhl5T-wL.png" alt="" width="880" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not to mention the tiresome nature of having to push code to AWS every time you’d want to try out something new. Thankfully, all is not lost.&lt;/p&gt;

&lt;h2&gt;Making my life less miserable&lt;/h2&gt;

&lt;p&gt;What if I didn’t need to push code to AWS every time I wanted to test something? Not all heroes wear capes. Like a knight in shining armor, &lt;a href="https://github.com/dherault/serverless-offline"&gt;Serverless Offline&lt;/a&gt; comes barging in to save the day! At least now I can test all my code locally before pushing it to AWS. That’s a relief.&lt;/p&gt;

&lt;p&gt;Setting it up is surprisingly easy: install one npm module, add a few lines to the service’s &lt;strong&gt;serverless.yml&lt;/strong&gt;, and voilà, API Gateway is emulated locally to run your Lambda functions.&lt;/p&gt;

&lt;p&gt;Switching to the directory where I created the sample function and service, I just ran the following command in a terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install serverless-offline --save-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After installing serverless offline I just referenced it in the &lt;strong&gt;serverless.yml&lt;/strong&gt; configuration:&lt;/p&gt;
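&lt;p&gt;&lt;em&gt;A minimal sketch of that reference, assuming the standard serverless-offline setup with no custom options:&lt;/em&gt;&lt;/p&gt;

```yaml
# serverless.yml (sketch): register the plugin so the Framework loads it
plugins:
  - serverless-offline
```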

&lt;p&gt;Back in my terminal, running Serverless Offline is as easy as typing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless offline start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That’s it, a local development simulation of API Gateway and Lambda is up and running!&lt;/p&gt;

&lt;h2&gt;The logs are still bad though…&lt;/h2&gt;

&lt;p&gt;I still can’t get over how bland the logs are. Not to mention the lack of error reporting. It took me a good while to find failing functions in the logs. Imagine the nightmare of tracking them in a large-scale production application. This is the issue that bothers me the most: the lack of overview. It’s like swimming in the dark. I don’t have the slightest clue what’s down there.&lt;/p&gt;

&lt;p&gt;What did I do? I went hunting. There had to be something out there on the web that could help me out. I was looking for a way to simulate the monitoring and logging of a server, thinking maybe there was a way to create a broader perspective over the whole serverless system. What I found blew me away, in a good way. A bunch of tools exist that parse and analyze logs from all the functions in a system at the account level. Now that’s cool.&lt;/p&gt;

&lt;p&gt;I decided to try out &lt;a href="https://dashbird.io/"&gt;Dashbird&lt;/a&gt; because it’s &lt;a href="https://dashbird.io/pricing/"&gt;free&lt;/a&gt; and seems promising. They’re not asking for a credit card either, making it a “why not try it out” situation.&lt;/p&gt;

&lt;p&gt;They say it only takes 5 minutes to hook it up to your AWS account and be ready to go. But hey, I’m a skeptic. I timed myself.&lt;/p&gt;

&lt;p&gt;The onboarding process was very straightforward. You just add a new policy and role on your AWS account, hook it to your Dashbird account and that’s it. They even have a great &lt;a href="https://dashbird.io/docs/get-started/quick-start/"&gt;getting started tutorial&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to know, the timer stopped at 4 minutes. I’m impressed.&lt;/p&gt;

&lt;p&gt;However, I’m much more impressed with Dashbird. I can finally see what’s going on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1r3TDLCo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2AQyGbyhcQtXJ-SiA2q6bbyQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1r3TDLCo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2AQyGbyhcQtXJ-SiA2q6bbyQ.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Errors are highlighted, and I can see the overall health of my system. I feel great all of a sudden. It also tracks the cost so I don’t blow the budget. Even function tailing in real-time is included. Now that’s just cool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U-EsXDd9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2ABpYTAJ_zKsUvFWDFJY1E5Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U-EsXDd9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2ABpYTAJ_zKsUvFWDFJY1E5Q.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this watching my back I’d be comfortable with using Serverless for any large-scale application. The word relief comes to mind.&lt;/p&gt;

&lt;h2&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;Whoa… This has been an emotional roller-coaster. Starting out as a skeptic about the ability to monitor and track large-scale Serverless apps, I’ve turned into a believer.&lt;/p&gt;

&lt;p&gt;It all boils down to the developer mindset. It takes a while to switch from the mental image of a server to FaaS. Serverless is an incredible piece of technology, and I can only see a bright future if we keep pushing the boundaries with awesome tools like Serverless Offline, Dashbird, CloudWatch, and many others.&lt;/p&gt;

&lt;p&gt;I urge you to check out the tools I used above, as they have been of great help to me.&lt;/p&gt;

&lt;p&gt;Hope you guys and girls enjoyed reading this as much as I enjoyed writing it. Until next time, be curious and have fun.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Do you think this tutorial will be of help to someone? Do not hesitate to share. If you liked it, let me know in the comments below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://hackernoon.com/node-js-monitoring-done-right-70418ecbbff9"&gt;https://hackernoon.com/node-js-monitoring-done-right-70418ecbbff9&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://blog.risingstack.com/monitoring-nodejs-applications-nodejs-at-scale/"&gt;https://blog.risingstack.com/monitoring-nodejs-applications-nodejs-at-scale/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Application_performance_management"&gt;https://en.wikipedia.org/wiki/Application_performance_management&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://medium.com/dashbird/is-your-serverless-as-good-as-you-think-it-is-2baa3d36b1de"&gt;https://medium.com/dashbird/is-your-serverless-as-good-as-you-think-it-is-2baa3d36b1de&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=Serverless%20monitoring%20-%20the%20good%2C%20the%20bad%20and%20the%20ugly"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/serverless-monitoring-the-good-the-bad-and-the-ugly/"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>monitoring</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>How To Manage Your Alexa Skills With Serverless</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Thu, 02 Jun 2022 22:42:02 +0000</pubDate>
      <link>https://dev.to/serverless_inc/how-to-manage-your-alexa-skills-with-serverless-463</link>
      <guid>https://dev.to/serverless_inc/how-to-manage-your-alexa-skills-with-serverless-463</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/how-to-manage-your-alexa-skills-with-serverless/"&gt;Serverless&lt;/a&gt; on January 8th, 2018&lt;/p&gt;

&lt;p&gt;Masashi here, creator of the Serverless Alexa plug-in.&lt;/p&gt;

&lt;p&gt;Serverless and IoT go hand in hand, and it’s easy to use the &lt;a href="https://serverless.com/framework/"&gt;Serverless Framework&lt;/a&gt; to develop AWS Lambda functions for Alexa Skills.&lt;/p&gt;

&lt;p&gt;Unfortunately, you can’t control Alexa Skills with the Framework, which was a bummer to me because I found the Alexa Skills Kit webapp and &lt;a href="https://www.npmjs.com/package/ask-cli"&gt;ask-cli&lt;/a&gt; didn’t have the simplicity I’d come to love with the Serverless Framework.&lt;/p&gt;

&lt;p&gt;But! Luckily, the Serverless Framework has a great plugin system. I decided to solve this little problem with the power of the community!&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/marcy-terui/serverless-alexa-skills"&gt;Serverless Alexa Skills Plugin&lt;/a&gt; lets you integrate Alexa Skills into the Serverless Framework. We can now control the manifest and interaction model of Alexa Skills using the sls command and serverless.yml!&lt;/p&gt;

&lt;h2&gt;Installation&lt;/h2&gt;

&lt;p&gt;The plugin is hosted on npm:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm install -g serverless
$ sls plugin install -n serverless-alexa-skills
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;Get your credentials&lt;/h2&gt;

&lt;p&gt;Login with Amazon is an OAuth2.0 single sign-on (SSO) system using your Amazon.com account.&lt;/p&gt;

&lt;p&gt;To get your credentials, log in to the &lt;a href="https://developer.amazon.com/"&gt;Amazon Developer Console&lt;/a&gt;, go to Login with Amazon from APPS &amp;amp; SERVICES, and then Create a New Security Profile:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8WKtWNN2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2048/0%2AfH3Dqq2HqVJxmKub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8WKtWNN2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2048/0%2AfH3Dqq2HqVJxmKub.png" alt="" width="880" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the following fields, you can enter whatever you like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LbCN58DR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2048/0%2AoRZP_-rBS3Ujre9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LbCN58DR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2048/0%2AoRZP_-rBS3Ujre9g.png" alt="" width="880" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the Web Settings of the new security profile:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SRwylntB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2048/0%2A4-HhybyQInMoNvkA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRwylntB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2048/0%2A4-HhybyQInMoNvkA.png" alt="" width="880" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Allowed Origins can be empty. Enter &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt; in Allowed Return URLs. This port number can be changed later via serverless.yml, so if you want a different one, go ahead:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rLii7e3t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2048/0%2A__ZNPP5hjGvVYpwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rLii7e3t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2048/0%2A__ZNPP5hjGvVYpwb.png" alt="" width="880" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Remember the Client ID and Client Secret of the new security profile, as well as your Vendor ID. You can check your Vendor ID &lt;a href="https://developer.amazon.com/mycid.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You only need to do this process once. You can continue to use the same credentials as long as you use the same account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The troublesome browser click-work is over!&lt;/strong&gt; 👏 Let’s move on to the sls command.&lt;/p&gt;
&lt;h2&gt;Put your credentials into the Framework&lt;/h2&gt;

&lt;p&gt;Write the Client ID, Client Secret, and Vendor ID to serverless.yml. It is good to use environment variables, as shown below.&lt;/p&gt;

&lt;p&gt;If you changed the port number in the Allowed Return URLs of Login with Amazon, also add a &lt;em&gt;localServerPort&lt;/em&gt; entry with that port number:&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
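&lt;p&gt;&lt;em&gt;As a sketch based on the plugin’s documented configuration, it looks roughly like this; the environment variable names are placeholders of my choosing:&lt;/em&gt;&lt;/p&gt;

```yaml
# serverless.yml (sketch): credentials for the serverless-alexa-skills plugin
plugins:
  - serverless-alexa-skills

custom:
  alexa:
    # placeholder env var names; export these before running sls
    vendorId: ${env:AMAZON_VENDOR_ID}
    clientId: ${env:AMAZON_CLIENT_ID}
    clientSecret: ${env:AMAZON_CLIENT_SECRET}
    localServerPort: 3000   # only needed if you changed the return URL port
```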

&lt;p&gt;Then, execute the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sls alexa auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This command opens the login page of Amazon.com in your browser. You will be redirected to localhost:3000 after authenticating. If the authentication is successful, you'll see the message: "Thank you for using Serverless Alexa Skills Plugin!!".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The security token expires in 1 hour, so if an authentication error occurs, please re-execute the command. I’m planning to implement automatic token refreshing in the future.&lt;/p&gt;

&lt;p&gt;Let’s make a skill!&lt;/p&gt;

&lt;p&gt;To start, execute the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sls alexa create --name $YOUR_SKILL_NAME --locale $YOUR_SKILL_LOCALE --type $YOUR_SKILL_TYPE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;These are descriptions of the options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;name: Name of the skill&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;locale: Locale of the skill (en-US for English, ja-JP for Japanese and so on)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;type: Type of the skill (custom or smartHome or video)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A manifest is initially set for the skill. You can check it with the plugin’s &lt;em&gt;manifests&lt;/em&gt; command, then copy the [Skill ID] and [Skill Manifest] from its output and paste them into serverless.yml as below.&lt;/p&gt;

&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
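&lt;p&gt;&lt;em&gt;A sketch only; the exact manifest fields depend on your skill, and the skill ID shown is a placeholder:&lt;/em&gt;&lt;/p&gt;

```yaml
# serverless.yml (sketch): pin the skill by its ID and manage its manifest
custom:
  alexa:
    skills:
      - id: amzn1.ask.skill.xxxxxxxx          # placeholder [Skill ID]
        manifest:                             # pasted [Skill Manifest]
          publishingInformation:
            locales:
              en-US:
                name: my-skill                # the name you created it with
          apis:
            custom: {}                        # endpoint details go here later
          manifestVersion: '1.0'
```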

&lt;p&gt;Execute the following command to update the manifest after updating your serverless.yml (or you can use the --dryRun option to check the difference between the local setting and the remote setting without updating):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sls alexa update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You can see the format of the manifest &lt;a href="https://developer.amazon.com/docs/smapi/skill-manifest.html#sample-skill-manifests"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Building the interaction model&lt;/h2&gt;

&lt;p&gt;The skill does not have an interaction model at first, so you’ll need to write an interaction model definition to serverless.yml.&lt;/p&gt;

&lt;p&gt;Like this!&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
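&lt;p&gt;&lt;em&gt;For example, a sketch of a minimal custom-skill model; the intent name and sample utterances are made up for illustration:&lt;/em&gt;&lt;/p&gt;

```yaml
# serverless.yml (sketch): an interaction model for the skill's en-US locale
custom:
  alexa:
    skills:
      - id: amzn1.ask.skill.xxxxxxxx          # placeholder [Skill ID]
        models:
          en-US:
            interactionModel:
              languageModel:
                invocationName: hello world   # what users say to open it
                intents:
                  - name: HelloWorldIntent    # hypothetical intent
                    samples:
                      - 'say hello'
                      - 'greet me'
```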

&lt;p&gt;You can see the format of the interaction model &lt;a href="https://developer.amazon.com/docs/custom-skills/custom-interaction-model-reference.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Execute the following command to build the model after updating your serverless.yml (and you can also use the --dryRun option with this command):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sls alexa build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then, you can check the model with the plugin’s &lt;em&gt;models&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;There are a few more steps needed to completely publish a skill, so I’m planning further integrations with the Alexa Skills Kit in the future. It’s still pretty great to be able to manage manifests and models, since we update those many times as we develop. All the better if we can manage them alongside the source code of our Lambda functions!&lt;/p&gt;

&lt;p&gt;Now, we can completely manage our Lambda Functions and Alexa Skills with Serverless Framework + Serverless Alexa Skills Plugin!&lt;/p&gt;

&lt;p&gt;If you have any comments or feedback, please create an &lt;a href="https://github.com/marcy-terui/serverless-alexa-skills/issues"&gt;issue&lt;/a&gt; or send a pull request. I always welcome them 🍻&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=How%20To%20Manage%20Your%20Alexa%20Skills%20With%20Serverless"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/how-to-manage-your-alexa-skills-with-serverless/"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>alexa</category>
      <category>aws</category>
      <category>serverless</category>
      <category>plugin</category>
    </item>
    <item>
      <title>CI/CD for monorepos</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Thu, 02 Jun 2022 15:59:40 +0000</pubDate>
      <link>https://dev.to/serverless_inc/cicd-for-monorepos-479a</link>
      <guid>https://dev.to/serverless_inc/cicd-for-monorepos-479a</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/cicd-for-monorepos"&gt;Serverless&lt;/a&gt; on March 6th, 2020&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NlwjN6oC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2A-iR8N5mLmyKks6JW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NlwjN6oC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2A-iR8N5mLmyKks6JW.png" alt="" width="880" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This article was updated to reflect changes in the new dashboard at app.serverless.com&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When building serverless applications as a collection of Serverless services, we need to decide whether we are going to push each service individually to our version control system, or bundle them all together in a single repo. This article is not going to go into the details of which is better, but all our posts so far have shown examples of services stored in individual repositories.&lt;/p&gt;

&lt;p&gt;What this article &lt;strong&gt;is&lt;/strong&gt; going to demonstrate, however, is that deploying services from within a single monorepo is easily doable within Serverless Framework Pro’s CI/CD solution.&lt;/p&gt;

&lt;h3&gt;Getting started&lt;/h3&gt;

&lt;p&gt;You need to make sure that the services you are deploying are bundled together in one repo, each in its own subdirectory off the root. You can look at a simple &lt;a href="https://github.com/garethmcc/monrepotest"&gt;example repo here&lt;/a&gt; to see how this is structured, so that yours matches as closely as possible. The biggest thing to take note of is that all the services sit off the root of the repo as separate subdirectories, and a folder with shared code is also off that root. This just simplifies configuration later.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; Serverless Framework Pro has a generous free tier, so you don’t need to worry about not having a paid account just to try this feature out for yourself.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you have a repo set up (or you’ve cloned the sample repo), make sure each service in each subdirectory has the same app and org settings to connect to the dashboard, and that those changes are also pushed to the repo.&lt;/p&gt;
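&lt;p&gt;&lt;em&gt;As a sketch, each subdirectory’s serverless.yml shares the same org and app; the names here are placeholders:&lt;/em&gt;&lt;/p&gt;

```yaml
# service-a/serverless.yml (sketch)
org: my-org            # placeholder; identical in every service
app: my-monorepo-app   # placeholder; identical in every service
service: service-a     # unique per subdirectory

provider:
  name: aws
```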

&lt;p&gt;The last step before we walk through setting up the monorepo deployment is to ensure that we have our connection to our AWS account all squared away, especially if you have a &lt;a href="https://app.serverless.com"&gt;brand new dashboard account&lt;/a&gt;. Here is a &lt;a href="https://www.youtube.com/watch?v=VUKDRoUdMek"&gt;2 minute video&lt;/a&gt; that shows you how to easily and quickly connect to AWS using Providers.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/VUKDRoUdMek"&gt;
&lt;/iframe&gt;
&lt;br&gt;
With that out of the way, let’s get cracking!&lt;/p&gt;

&lt;h3&gt;First deploy&lt;/h3&gt;

&lt;p&gt;The best way to get started is to just deploy first and get all your services deployed to AWS and created within the dashboard. Here are the steps to follow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Make sure you have credentials for the CLI to communicate to your Serverless account by running &lt;em&gt;serverless login&lt;/em&gt; in the CLI and completing the login process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the app property you have added to your services’ serverless.yml files does not yet exist, click the &lt;em&gt;create app&lt;/em&gt; button and choose to &lt;em&gt;add an existing Serverless Framework project&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;During that process you can create or choose a new Provider&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the app is created, go back to your project on the CLI, make sure to &lt;em&gt;cd&lt;/em&gt; into the first service and run &lt;em&gt;serverless deploy --stage [stageyouwanthere]&lt;/em&gt;. The &lt;em&gt;--stage&lt;/em&gt; flag is optional since it defaults to &lt;em&gt;dev&lt;/em&gt; unless you specify otherwise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat step 4 by &lt;em&gt;cd&lt;/em&gt;ing into each sub-directory and deploying each service into AWS.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even if these services are &lt;strong&gt;already&lt;/strong&gt; deployed, you can deploy again. As long as nothing but the &lt;strong&gt;org&lt;/strong&gt; and &lt;strong&gt;app&lt;/strong&gt; properties have changed, all this new deployment does is add these services to your dashboard account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting to GitHub or BitBucket
&lt;/h3&gt;

&lt;p&gt;Now that everything is added to the dashboard, let’s click the menu option to the right of one of our service names and choose settings:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x9_5GBvm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AEPKO4PG3_-YGPCun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x9_5GBvm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AEPKO4PG3_-YGPCun.png" alt="" width="689" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the settings menu, select CI/CD and you should see the CI/CD configuration open up. If you are doing this for the first time, you have probably not connected to GitHub or BitBucket before, so just click the connect option and follow the prompts.&lt;/p&gt;

&lt;p&gt;Once you have completed that process, you will need to choose the repository for your monorepo from the dropdown list, and since this is a monorepo, the CI/CD settings will also ask you to choose the right base directory for this specific service. Go ahead and do that!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x5Iw3VpQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AqBCjZNafCsXoT0bH.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x5Iw3VpQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AqBCjZNafCsXoT0bH.png" alt="" width="631" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up automated deployments
&lt;/h3&gt;

&lt;p&gt;Since we have the basic connection all set up now, let’s scroll down to the &lt;em&gt;branch deploys&lt;/em&gt; section. This is where you configure which branch in your repo deploys to which stage or environment. Most repos have a main (or master) branch, and this is often mapped to the prod stage. You can also add a develop branch that deploys to the dev stage, as in the image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BnD23fWw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Aub5-qbBI-8JgHt_H.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BnD23fWw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Aub5-qbBI-8JgHt_H.png" alt="" width="458" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just add any additional branch-to-stage mappings you want. This configuration will then trigger an automated deployment as soon as any code changes land on the branch you configure; for example, if a developer creates a PR to the develop branch, then once that PR is merged it will automatically trigger a deployment of your service to the dev stage, if configured like the image above.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;NOTE: You can use &lt;a href="https://www.serverless.com/framework/docs/guides/providers/"&gt;Providers&lt;/a&gt; to configure a different AWS connection for each stage, and &lt;a href="https://www.serverless.com/framework/docs/guides/parameters/"&gt;Parameters&lt;/a&gt; to pass different configuration values for each stage at deployment time as well.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You will need to repeat the above process for each sub-directory within your monorepo.&lt;/p&gt;

&lt;h3&gt;
  
  
  More advanced configuration
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Selective deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, if you have multiple services of a monorepo configured and you merge a change anywhere in that repo, all services will redeploy, just in case. There’s no way for the system to know whether there are dependencies between the services, so it cannot assume that only the one service that changed should be redeployed. In other words, if you had 6 services and you made a change to just one, 6 separate redeployments will occur.&lt;/p&gt;

&lt;p&gt;However, you can configure it differently. If you open the CI/CD settings for one of the services and scroll down to expand the &lt;em&gt;build settings&lt;/em&gt; section, you should see numerous options to help you maximise the efficiency of your CI/CD pipeline for a monorepo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ddR55lq7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AiJsTIAp_q58sZgk9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ddR55lq7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AiJsTIAp_q58sZgk9.png" alt="" width="522" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, the &lt;em&gt;Only trigger builds on selected file changes&lt;/em&gt; option is not selected, which means that this service will &lt;strong&gt;always&lt;/strong&gt; be redeployed on any change to the git repository, even if there were no changes to this service’s code. If you only want this service redeployed when its own code is edited, check the box and you should see something like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Vf0B07o9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A0F28ncpi4tBZurJ2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Vf0B07o9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A0F28ncpi4tBZurJ2.png" alt="" width="444" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The directory of the current service is selected automatically. From this point on, &lt;em&gt;servicea&lt;/em&gt; will only be re-deployed when its own code is edited.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But what if you actually do have services that depend on each other? In the example with &lt;em&gt;servicea&lt;/em&gt;, we could link it to &lt;em&gt;serviceb&lt;/em&gt; and configure things so that &lt;em&gt;servicea&lt;/em&gt; is always re-deployed when &lt;em&gt;serviceb&lt;/em&gt; is edited. Just by adding a reference to the correct service directory, I can ensure this happens:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Ks2H5Rl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AEyILx5_7-Sy4ZpHV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Ks2H5Rl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AEyILx5_7-Sy4ZpHV.png" alt="" width="405" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I could of course do this for any number of services that &lt;em&gt;servicea&lt;/em&gt; depends on, and vice versa. But what if I have some kind of shared folder that &lt;em&gt;servicea&lt;/em&gt; uses? Because we reference a directory structure in our configuration, you can point to any path in your monorepo to be watched for changes. In the example repo, we have a directory called &lt;em&gt;shared&lt;/em&gt; that stores a number of classes and functions (or at least it could) that are re-used by multiple services. If I change anything in &lt;em&gt;shared&lt;/em&gt;, multiple services need to redeploy.&lt;/p&gt;

&lt;p&gt;I can accomplish this just by adding the path to shared:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LN5vxqlq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AJN1UBpXkiD4_ysMF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LN5vxqlq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AJN1UBpXkiD4_ysMF.png" alt="" width="422" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the image above, &lt;em&gt;servicea&lt;/em&gt; will be deployed on merges to GitHub if changes are detected in the &lt;em&gt;servicea&lt;/em&gt;, &lt;em&gt;serviceb&lt;/em&gt; or &lt;em&gt;shared&lt;/em&gt; directories. And I can configure any service with whatever arrangement of dependencies I need, giving me a ton of flexibility to deploy exactly what I need under the right circumstances.&lt;/p&gt;

&lt;p&gt;Monorepo deployments are much simpler to manage using Serverless Framework Pro CI/CD. If you have any feedback or just want to share, please hop into our &lt;a href="https://serverless.com/slack"&gt;Slack channels&lt;/a&gt; or &lt;a href="https://forum.serverless.com"&gt;Forums&lt;/a&gt; and let us know. You can even &lt;a href="https://twitter.com/garethmcc"&gt;DM me on Twitter&lt;/a&gt; if you have any questions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=CI%2FCD%20for%20monorepos"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/cicd-for-monorepos"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>cicd</category>
      <category>monorepos</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Monitor and debug all serverless errors</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Wed, 01 Jun 2022 17:53:22 +0000</pubDate>
      <link>https://dev.to/serverless_inc/monitor-and-debug-all-serverless-errors-56nk</link>
      <guid>https://dev.to/serverless_inc/monitor-and-debug-all-serverless-errors-56nk</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/monitor-and-debug-all-serverless-errors"&gt;Serverless&lt;/a&gt; on September 12th, 2019&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PG6v9FfU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2A4GVh6HxdIeraeGAA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PG6v9FfU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2A4GVh6HxdIeraeGAA.png" alt="" width="880" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the most popular features of the Serverless Framework is its ability to provide monitoring with automatic instrumentation. By signing up for a free Serverless Framework account and deploying your service, it is automatically instrumented to capture all of the data needed to provide metrics, alerts, notifications, stacktraces and more.&lt;/p&gt;

&lt;p&gt;This is especially powerful when it comes to monitoring and debugging errors. When your code throws an error, then the Serverless Framework provides a few ways to monitor and debug those errors:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You will get a &lt;a href="https://serverless.com/framework/docs/dashboard/monitoring/alerts#error-new-error-type-identified"&gt;New Error Type&lt;/a&gt; alert for your service instance, notifying you in Slack or Email that a new error was identified.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The stack trace is captured and in the Serverless Dashboard you can see the stack trace highlighting the exact line which threw the error.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The invocations &amp;amp; errors chart will show you the number of times errors have occurred over a span of time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using the invocation explorer you can search and identify the individual invocations which got the error and dig into all the details.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No wonder this is such a popular feature.&lt;/p&gt;

&lt;p&gt;However, until today, capturing the errors only worked for cases where the error was not caught by the code and resulted in a fatal crash of the Lambda invocation. But of course we do not want our services to return 500 in such cases, so more often than not, the errors are caught and the Lambda function returns a nicer 4XX error.&lt;/p&gt;

&lt;p&gt;Today we are launching a new addition to the Serverless Framework to help capture errors even when they are caught by your code.&lt;/p&gt;

&lt;p&gt;In this pattern, the lambda function handler throws an error; however, the error is also caught. Inside the catch block, the handler calls the captureError function provided by the Serverless Framework SDK on the context object. The function can then proceed and return a friendly error to the API while still capturing the error. The &lt;a href="http://slss.io/docs-capture-error"&gt;documentation&lt;/a&gt; provides more details on using the captureError method.&lt;/p&gt;

&lt;p&gt;Now that the error is captured by the Serverless Framework, you can use the powerful dashboard features to help monitor and debug these errors. Here are a few ways you can interact with these newly captured errors in the dashboard.&lt;/p&gt;

&lt;p&gt;When a new error is captured which hasn’t been captured before, you will get a &lt;a href="https://serverless.com/framework/docs/dashboard/monitoring/alerts#error-new-error-type-identified"&gt;New Error Type&lt;/a&gt; alert. You can also set up &lt;a href="https://serverless.com/framework/docs/dashboard/monitoring/notifications/"&gt;notifications&lt;/a&gt; to get notified in Slack or email, or via custom SNS Topics or API endpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1VUZHbgg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2AvonO-R8BaAaktK5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1VUZHbgg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2AvonO-R8BaAaktK5y.png" alt="" width="880" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All errors, including fatal errors and captured errors, are available in the Invocation Explorer so you can filter for invocations containing errors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vELRNjsM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2408/0%2AwSsdfr5accWjpgnX.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vELRNjsM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2408/0%2AwSsdfr5accWjpgnX.png" alt="" width="880" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also in the invocation explorer you can dive into the details of the individual invocation to get the details about the error, including the stack trace which was captured by the captureError method in the code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sNale7yS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2Al59Hq_iHpkXaVLpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sNale7yS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2Al59Hq_iHpkXaVLpq.png" alt="" width="880" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, in the service instance overview page you can view invocation metrics and filter the results to identify the captured errors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H84h8swu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2784/0%2At0hXzMQLXriqTzIF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H84h8swu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2784/0%2At0hXzMQLXriqTzIF.png" alt="" width="880" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to improve monitoring and debugging for your Serverless Framework application, getting started with the automatic instrumentation is incredibly easy. &lt;a href="https://app.serverless.com/"&gt;Sign up in the dashboard&lt;/a&gt; and follow the instructions to start a new Serverless Framework project or incorporate the dashboard features into existing services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=Monitor%20and%20debug%20all%20serverless%20errors"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/monitor-and-debug-all-serverless-errors"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>bugs</category>
      <category>serverless</category>
      <category>errors</category>
    </item>
    <item>
      <title>Efficient APIs with GraphQL and Serverless</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Wed, 01 Jun 2022 15:03:43 +0000</pubDate>
      <link>https://dev.to/serverless_inc/efficient-apis-with-graphql-and-serverless-k73</link>
      <guid>https://dev.to/serverless_inc/efficient-apis-with-graphql-and-serverless-k73</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/efficient-apis-graphql-serverless/"&gt;Serverless&lt;/a&gt; on July 23rd, 2018&lt;/p&gt;

&lt;p&gt;GraphQL can be a tool for building enlightened APIs, but it can also be a source of mystery for developers accustomed to REST.&lt;/p&gt;

&lt;p&gt;In this post, I’ll talk about the motivations that might lead you to choose GraphQL, and how to serve a GraphQL API that will let you really take advantage of its benefits.&lt;/p&gt;

&lt;p&gt;Here’s what we’ll be covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.serverless.com/blog/efficient-apis-graphql-serverless/#rest-api-design"&gt;REST API design&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.serverless.com/blog/efficient-apis-graphql-serverless/#enter-graphql"&gt;A GraphQL approach&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.serverless.com/blog/efficient-apis-graphql-serverless/#making-your-graphql-endpoint-serverless"&gt;Making your GraphQL endpoint serverless&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  REST API design
&lt;/h2&gt;

&lt;p&gt;First, let’s talk about some situations that arise in REST APIs; this directly segues into when and why you would want to use GraphQL.&lt;/p&gt;

&lt;p&gt;Let’s say you have a REST resource to represent the products that your business offers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /products/123
{
  "id": "123",
  "name": "Widget",
  "price": "$10.00"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And also a resource for orders by customers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /orders/445566
{
  "id": "445566",
  "customerName": "John Q. Public",
  "deliveryAddress": "1234 Elm St.",
  "productId": "123",
  "quantity": 5
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The order refers to the product by its ID. If the client needs to display the product information in the context of the order, it makes two requests: one to get the order record, and one to get the details for the product specified on the order.&lt;/p&gt;
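&lt;p&gt;As a rough sketch of the client-side cost (using fetch with a hypothetical base URL), the second request cannot start until the first completes:&lt;/p&gt;

```javascript
// Two dependent round-trips: the product request needs the productId
// from the order response before it can be issued.
const BASE = 'https://api.example.com' // placeholder base URL

async function getOrderWithProduct(orderId) {
  const order = await fetch(`${BASE}/orders/${orderId}`).then((r) => r.json())
  const product = await fetch(`${BASE}/products/${order.productId}`).then((r) => r.json())
  return { order, product }
}
```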

&lt;p&gt;You might try to improve this API in a couple ways. One would be to offer a way to retrieve the product details directly from the order number, so that you can make the two requests in parallel.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /orders/445566/product
{
  "id": "123",
  "name": "Widget",
  "price": "$10.00"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you consistently need all the product information with the order, you might just decide to include the product information in the order resource:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /orders/445566
{
  "id": "445566",
  "customerName": "John Q. Public",
  "deliveryAddress": "1234 Elm St.",
  "productId": "123",
  "productName": "Widget",
  "productPrice": "$10.00",
  "quantity": 5
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now you’ve solved the issue of having to make multiple requests, but you’ve polluted the order object with properties from another resource, which makes it harder to use and evolve. You can fix this by keeping all product information under a single property:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /orders/445566
{
  "id": "445566",
  "customerName": "John Q. Public",
  "deliveryAddress": "1234 Elm St.",
  "product": {
    "id": "123",
    "name": "Widget",
    "price": "$10.00"
  },
  "quantity": 5
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That looks pretty good! It has the benefit that any code written to work with the /products response will also work with the "product" property from /orders.&lt;/p&gt;

&lt;p&gt;The drawback of including the product information on the order is how it affects the backend. This may require a more expensive multi-table query with SQL, or a second query under NoSQL, or else a denormalized table that records product information with the order. It also increases the size of the response body, which can become a real problem as the REST API gets more mature and the response includes more information for different purposes.&lt;/p&gt;

&lt;p&gt;A slightly inelegant solution is to allow the request to flag whether it wants the product information:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /orders/445566?omitProduct=true
{
  "id": "445566",
  "customerName": "John Q. Public",
  "deliveryAddress": "1234 Elm St.",
  "quantity": 5
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is not very clean, but it does allow the backend to avoid doing the extra work when product information isn’t necessary, at the cost of increased code complexity. If you repeat this design struggle many times over the lifetime of your API, you may end up with a lot of flags for different properties.&lt;/p&gt;

&lt;p&gt;If you reach this point in your API design, then congratulations! You have partially re-invented GraphQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter GraphQL
&lt;/h2&gt;

&lt;p&gt;Let’s see how a GraphQL API answers the same questions.&lt;/p&gt;

&lt;p&gt;You’d begin by creating types for products and orders. You don’t start the API by tying things together with foreign keys, as you did with REST. Instead, the order type contains a product field as we eventually decided on in the above example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Product {
  id: String!
  name: String!
  price: String!
}

type Order {
  id: String!
  customerName: String!
  deliveryAddress: String!
  product: Product!
  quantity: Int!
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You create query fields to get orders and products:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Query {
  product(id: String!): Product
  order(id: String!): Order
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the client needs to get an order and the relevant product details, you only need a single query.&lt;/p&gt;

&lt;p&gt;Since the query contains an exact statement of all the properties that it expects, the service knows by design whether it needs to fetch product information. This allows you to write a backend that minimizes database and compute time.&lt;/p&gt;

&lt;p&gt;For example, suppose you want to know the customer name, delivery address, order quantity, product name, and product price:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  order(id: "445566") {
    customerName
    deliveryAddress
    quantity
    product {
      name
      price
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This query would return only the requested properties:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 “customerName”: “John Q. Public”,
 “deliveryAddress”: “1234 Elm St.”,
 “quantity”: 5,
 “product”: {
 “name”: “Widget”,
 “price”: “$10.00”
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Making your GraphQL endpoint serverless
&lt;/h2&gt;

&lt;p&gt;There are even deeper advantages to having a serverless GraphQL endpoint, which you can &lt;a href="https://serverless.com/blog/running-scalable-reliable-graphql-endpoint-with-serverless/"&gt;read more about here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The tl;dr is that when you use GraphQL, you are relying on only one HTTP endpoint; and when you have one HTTP endpoint to connect all your clients to your backend services, you want that endpoint to be performant, reliable, and auto-scaling.&lt;/p&gt;
&lt;h2&gt;
  
  
  Building a GraphQL endpoint with the Serverless Framework
&lt;/h2&gt;

&lt;p&gt;So, how do we build this with the &lt;a href="https://serverless.com/framework/"&gt;Serverless Framework&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;We’re going to target AWS Lambda with Node 8 in this example, and the code should be easily adaptable to other FaaS providers. You can download the code for this example &lt;a href="https://s3-us-west-2.amazonaws.com/assets.blog.serverless.com/graphql-blog.zip"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Using the GraphQL reference implementation in JS, we can easily create our GraphQL schema from the type declarations.&lt;/p&gt;

&lt;p&gt;First import the utilities we need from the graphql library:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {
  graphql,
  buildSchema
} = require('graphql')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now you can use GraphQL schema language to specify the schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const schema = buildSchema(`
  type Product {
    id: String!
    name: String!
    price: String!
  }

type Order {
    id: String!
    customerName: String!
    deliveryAddress: String!
    product: Product!
    quantity: Int!
  }

type Query {
    product(id: String!): Product
    order(id: String!): Order
  }
`)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we create resolvers so that queries can access our data. This is also where we make sure the resolver isn’t doing any more work than necessary. The graphql library is very flexible. Resolvers can exist for individual fields, and a resolver can either be a constant value, a function, a promise, or an asynchronous function. Functions have access to any arguments for the field via a single object parameter.&lt;/p&gt;

&lt;p&gt;We want the database record for the product information to be retrieved only when requested, so we make the resolver for that field a function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const database = require('./database')
const resolvers = {
  product: ({ id }) =&amp;gt; database.products.get(id),
  order: async ({ id }) =&amp;gt; {
    const order = await database.orders.get(id)
    if(!order) return null

return {
      ...order,
      product: () =&amp;gt; database.products.get(order.productId)
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The methods database.products.get() and database.orders.get() are both asynchronous functions, returning promises. The resolver for product simply calls through to the database. You do not need to worry about manually removing extraneous fields, since graphql-js does that for you.&lt;/p&gt;

&lt;p&gt;The resolver for order is more complex. It uses async/await syntax to fetch the order record before returning. This allows us to get the productId for use in the resolver for the product field. Since the resolver for the product field is a function, it won’t be invoked unless the product field is actually included in the query.&lt;/p&gt;

&lt;p&gt;All that remains is to create a handler for Lambda. Using the newer asynchronous syntax introduced by Node 8 for Lambda, this is very simple.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports.query = async (event) =&amp;gt; {
  const result = await graphql(schema, event.body, resolvers)
  return { statusCode: 200, body: JSON.stringify(result.data, null, 2) }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Since all of the set-up logic for the GraphQL schema is outside of the handler, this will only be executed when Lambda needs to spin up a new instance to serve requests. To enable us to query by POST request, we have to include the following in serverless.yml:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: my-api

provider:
  name: aws
  runtime: nodejs8.10

functions:
  hello:
    handler: handler.query
    events:
      - http:
          path: /
          method: POST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That’s it. After a quick sls deploy, we can curl our new GraphQL endpoint to test the query:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl &lt;a href="https://lsqgfkvs2i.execute-api.us-east-1.amazonaws.com/dev/"&gt;https://lsqgfkvs2i.execute-api.us-east-1.amazonaws.com/dev/&lt;/a&gt; -d '{&lt;br&gt;
  order(id: "778899") {&lt;br&gt;
    customerName&lt;br&gt;
    deliveryAddress&lt;br&gt;
    quantity&lt;br&gt;
    product {&lt;br&gt;
      name&lt;br&gt;
      price&lt;br&gt;
    }&lt;br&gt;
  }&lt;br&gt;
}&lt;br&gt;
'

&lt;p&gt;{&lt;br&gt;
  "order": {&lt;br&gt;
    "customerName": "Stacey L. Civic",&lt;br&gt;
    "deliveryAddress": "4321 Oak St.",&lt;br&gt;
    "quantity": 32,&lt;br&gt;
    "product": {&lt;br&gt;
      "name": "Gadget",&lt;br&gt;
      "price": "$8.50"&lt;br&gt;
    }&lt;br&gt;
  }&lt;br&gt;
}&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You’ve now got a working GraphQL endpoint built with Serverless that scales automatically with increased traffic!&lt;/p&gt;

&lt;p&gt;In this example, we went with a single-Lambda approach. If you want infrastructural microservices, you can also use the flexibility of resolvers to have a primary Lambda that invokes other lambdas to resolve different query fields. If you want a more in-depth solution that uses GraphQL from top to bottom, you can use &lt;a href="https://www.apollographql.com/docs/graphql-tools/schema-stitching.html"&gt;schema stitching&lt;/a&gt; to combine multiple GraphQL APIs into one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Serverless + GraphQL resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://serverless.com/blog/make-serverless-graphql-api-using-lambda-dynamodb/"&gt;How to make a Serverless GraphQL API using Lambda and DynamoDB&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://serverless.com/blog/running-scalable-reliable-graphql-endpoint-with-serverless/"&gt;Running a scalable &amp;amp; reliable GraphQL endpoint with Serverless&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=Efficient%20APIs%20with%20GraphQL%20and%20Serverless"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/efficient-apis-graphql-serverless/"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>graphql</category>
      <category>serverless</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Challenges and patterns for building event-driven architectures</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Tue, 31 May 2022 22:09:41 +0000</pubDate>
      <link>https://dev.to/serverless_inc/challenges-and-patterns-for-building-event-driven-architectures-48ie</link>
      <guid>https://dev.to/serverless_inc/challenges-and-patterns-for-building-event-driven-architectures-48ie</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/stream-based-challenges-and-patterns"&gt;Serverless&lt;/a&gt; on July 19th, 2017&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges with the Event-Driven Architecture
&lt;/h2&gt;

&lt;p&gt;In my &lt;a href="https://serverless.com/blog/event-driven-architecture-dynamodb/"&gt;previous post&lt;/a&gt;, I talked about how you can use DynamoDB Streams to power an event-driven architecture. While this architecture has a number of benefits, it also has some “gotchas” to look out for. As you go down this road, you need to be aware of a few challenges with these patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Changing Event Schemas
&lt;/h3&gt;

&lt;p&gt;In our user creation example from the last post, we’ve been saving the user’s first name and last name together in a single fullname field. Perhaps our developers later decide they'd rather have those as two separate fields, firstname and lastname. They update the user creation function and deploy. Everything is fine — for them. But downstream, everything is breaking. Look at the code for the Algolia indexing function — it implicitly assumes that the incoming Item will have a fullname field. When it goes to grab that field on a new Item, it will raise a KeyError.&lt;/p&gt;

&lt;p&gt;How do we handle these issues? There’s no real silver bullet, but there are a few ways to address this both from the producer and consumer sides. As a producer, focus on being a polite producer. Treat your event schemas just like you would treat your REST API responses. See if you can make your events backward-compatible, in the sense of not removing or redefining existing fields. In the example above, perhaps the new event would write firstname, lastname, and fullname. This could give your downstream consumers time to switch to the new event format. If this is impossible or infeasible, you could notify your downstream consumers. The AWS CLI has a command for &lt;a href="http://docs.aws.amazon.com/cli/latest/reference/lambda/list-event-source-mappings.html"&gt;listing event source mappings&lt;/a&gt;, which shows which Lambda functions are triggered by a given DynamoDB stream. If you're a producer that's changing your Item structure, give a heads up to the owners of consuming functions.&lt;/p&gt;

&lt;p&gt;As a consumer of streams, focus on being a resilient consumer. Consider the assumptions you’re making in your function and how you should respond if those assumptions aren’t satisfied. We’ll discuss different failure handling strategies below, but you shouldn’t just rely on producers to handle this for you.&lt;/p&gt;
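&lt;p&gt;To make this concrete, here is a minimal Python sketch of a resilient consumer for the fullname change described above. The helper name is hypothetical and it glosses over DynamoDB’s attribute-value encoding, but the field names come straight from the example:&lt;/p&gt;

```python
def full_name_from(item):
    """Derive a display name from a stream record's Item, tolerating
    both the old (fullname) and new (firstname/lastname) schemas."""
    if "firstname" in item and "lastname" in item:
        return "{} {}".format(item["firstname"], item["lastname"])
    if "fullname" in item:
        return item["fullname"]
    # Neither assumption holds: fail explicitly instead of with a bare KeyError.
    raise ValueError("record has no name fields: %r" % sorted(item))
```

A consumer written this way keeps working through the schema migration, and fails with a descriptive error rather than a KeyError if a third schema ever appears.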

&lt;h3&gt;
  
  
  How to handle failure
&lt;/h3&gt;

&lt;p&gt;Failure handling is a second challenge to consider with Lambda functions that are triggered by DynamoDB streams. Before we talk about this, we should dig a little deeper into how Lambda functions are invoked by DynamoDB streams.&lt;/p&gt;

&lt;p&gt;When you create an event source mapping from a DynamoDB stream to a Lambda function, AWS has a process that occasionally polls the stream for new records. If there are new records, AWS will invoke your subscribed function with those records. The AWS process will keep track of your function’s position in the DynamoDB stream. If your Lambda function returns successfully, the process will retain that information and update your position in the stream when polling for new records. If your Lambda function does not return successfully, it will not update your position and will re-poll the stream from your previous position. There are a few key takeaways here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You receive records in batches. There can be as few as 1 or as many as 1,000 records in a batch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You may not alter or delete a record in the stream. You may only react to the information in the record.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each subscriber maintains its own position in the stream. Thus, a slower subscriber may be reading older records than a faster subscriber.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The batch notion is worth highlighting separately. Your Lambda function can only fail or succeed on an entire batch of records, rather than on an individual message. If you raise a failure on a batch of records because of an issue with a single record, know that you will reprocess that same batch. Take care to implement your record handler in an idempotent way — you wouldn’t want to send a user multiple “Welcome!” emails due to the failure of a different user.&lt;/p&gt;
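&lt;p&gt;One way to get that idempotency is to record an identifier for each record you have already handled, and skip duplicates when a batch is redelivered. A rough Python sketch (the in-memory set and the email stub are stand-ins; a real consumer would use a durable store such as a DynamoDB table with a conditional write):&lt;/p&gt;

```python
sent = []          # stand-in for the real email delivery call
processed = set()  # stand-in for a durable idempotency store

def send_welcome_email(address):
    sent.append(address)

def handle_signup(record):
    # eventID uniquely identifies a DynamoDB stream record, so it
    # doubles as an idempotency key across batch redeliveries.
    event_id = record["eventID"]
    if event_id in processed:
        return  # already handled on a previous delivery of this batch
    send_welcome_email(record["dynamodb"]["NewImage"]["email"]["S"])
    processed.add(event_id)
```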

&lt;p&gt;With this background, let’s think about how we should address failure. Let’s start with something simple, like a Lambda function that posts data about new user signups to Slack. This isn’t a mission-critical operation, so you can afford to be more lax about errors. When processing a batch of records, you could wrap it in a simple try/catch block that catches any errors, logs them to CloudWatch, and returns successfully. For the occasional error that happens, that user isn’t posted to Slack, but it’s not a big deal. This function will likely stay up to date with the most recent records in the DynamoDB stream because of this strategy.&lt;/p&gt;
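&lt;p&gt;A Python sketch of that soft-failure shape (the Slack call itself is left as a stub):&lt;/p&gt;

```python
import logging

logger = logging.getLogger(__name__)

def post_to_slack(record):
    # Hypothetical notifier; it assumes new-user records carry a fullname field.
    name = record["dynamodb"]["NewImage"]["fullname"]["S"]
    ...  # call the Slack webhook with name here

def handler(event, context):
    # Soft failure: log the error and keep going, so our position in the
    # stream always advances even when individual records are malformed.
    for record in event["Records"]:
        try:
            post_to_slack(record)
        except Exception:
            logger.exception("could not post record %s to Slack", record.get("eventID"))
    return "ok"  # report success so this batch is never redelivered
```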

&lt;p&gt;With our Algolia indexing function, we want to be less cavalier. If we’re failing to index users as they sign up or modify their details, our search index will be stale and provide a poor experience for users. There are two ways you can handle this. First, you can simply raise an exception and fail loudly if &lt;em&gt;any&lt;/em&gt; record in the batch fails. This will cause your Lambda to be invoked again with the same batch of records. If this is a transient error, such as a temporary blip in service from Algolia, this should be fixed on the next invocation of your Lambda and processing will continue as normal. It’s more complicated if this is &lt;em&gt;not&lt;/em&gt; a transient error, such as in the previous section where the event contract was changed. In that case, your Lambda will continue to be invoked with the same batch of messages. Each failure will indicate that your position in the DynamoDB stream should not be updated, and you will be stuck at that position until you either update your code to handle the failure case or the record is purged from the stream 24 hours after it was added.&lt;/p&gt;

&lt;p&gt;This hard error pattern can be a good one, particularly for critical applications where you don’t want to gloss over unexpected errors. You can set up a CloudWatch Alarm to notify you if the number of errors for your function is too high over a given time period, or if the Iterator Age of your DynamoDB stream is too high, indicating that you’re falling behind in processing. You can investigate the cause of the error, make the necessary fix, and redeploy your function to handle the new record schema and continue your indexing as usual.&lt;/p&gt;

&lt;p&gt;Between the “soft failure” mode of logging and moving on, and the “hard failure” mode of stopping everything on an error, I like a third option that allows us to retain the structured record in a programmatically-accessible way, while still continuing to process events. To do this, we create an SQS queue for storing failed messages. When an unexpected exception is raised, we capture the failure message and store it in the SQS queue along with the record. An example implementation looks like:&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
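&lt;p&gt;In Python, the handler looks roughly like this. The business logic in handle_record is a placeholder, and the failures list stands in for the capture step (in the real version, a write to SQS):&lt;/p&gt;

```python
failures = []  # stand-in for the SQS queue of failed records

def handle_record(record):
    """Business logic for a single stream record (indexing, emailing, etc.)."""
    if "NewImage" not in record["dynamodb"]:
        raise ValueError("record has no NewImage")

def handle_failed_record(record, exc):
    """Capture a failed record and its error for later reprocessing."""
    failures.append({"record": record, "error": str(exc)})

def handler(event, context):
    # Every record goes through handle_record; unexpected errors are
    # captured instead of failing the batch, so the stream position
    # keeps advancing while failures wait in the queue.
    for record in event["Records"]:
        try:
            handle_record(record)
        except Exception as exc:
            handle_failed_record(record, exc)
    return "ok"
```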
&lt;p&gt;Our main handler function is very short and simple. Each record is passed through a handle_record function, which contains our actual business logic. If any unexpected exception is raised, the record and exception are passed to a handle_failed_record function, which is shown below:&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
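&lt;p&gt;A Python sketch of that function. The queue URL is made up, and the SQS client is passed in explicitly so it can be faked in tests; in a real Lambda you would create it once at module load with boto3.client("sqs"):&lt;/p&gt;

```python
import json

# Hypothetical queue URL; in production this would come from configuration.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/failed-records"

def handle_failed_record(record, exc, sqs_client, queue_url=QUEUE_URL):
    """Serialize the failed record plus the error it raised into an SQS message."""
    message = {
        "record": record,
        "error": {"type": type(exc).__name__, "message": str(exc)},
    }
    sqs_client.send_message(QueueUrl=queue_url, MessageBody=json.dumps(message))
```

Storing the exception type and message alongside the record makes it easy to group failures by cause before reprocessing them.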
&lt;p&gt;Pretty straightforward — it takes in the failed record and the exception and writes them in a message to an SQS queue.&lt;/p&gt;

&lt;p&gt;When using this pattern, it helps to think of operating on individual records, rather than a batch of records. All of your business logic is contained in handle_record, which operates on a single record. This is useful when reprocessing failed records, as you can reuse the same logic. Imagine the unexpected error in your function was due to a bug in your logic that only affected a subset of records. You can fix the logic and redeploy, but you still need to process the records that failed in the interim. Since your handle_record function operates on a single record, you can just read records from the queue and send them through that same entry point:&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
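&lt;p&gt;A Python sketch of such a script. It assumes each queue message body is JSON with a record key, matching the failure handler described above, and takes the SQS client and the handler as arguments. Note that SQS's visibility timeout is what keeps a just-received message from being delivered again while it is in flight:&lt;/p&gt;

```python
import json

def reprocess(sqs_client, queue_url, handle_record):
    """Drain the failure queue, replaying each stored record through the
    (now fixed) handle_record entry point; delete messages that succeed."""
    while True:
        resp = sqs_client.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue drained
        for msg in messages:
            record = json.loads(msg["Body"])["record"]
            try:
                handle_record(record)
            except Exception:
                continue  # still failing; leave it on the queue to inspect later
            sqs_client.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )
```

A production version would cap the number of passes so a permanently failing message can't keep the loop alive forever.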
&lt;p&gt;This is a simple reprocessing script that you can run locally or invoke with a reprocessing Lambda. It reads messages from the SQS queue and parses out the record object, which is the same as the record input from a batch of records from the DynamoDB stream. This record is passed into the updated handle_record function, and the queue message is deleted if the operation is successful. This pattern isn't perfect, but I've found it to be a nice compromise between the two extremes of the failure spectrum when processing streams with Lambda.&lt;/p&gt;

&lt;h3&gt;
  
  
  Concurrency Limits
&lt;/h3&gt;

&lt;p&gt;Finally, let’s talk about concurrency limits with DynamoDB streams. The big benefit of streams is the &lt;em&gt;independence&lt;/em&gt; of the consumers — the Algolia indexing operations are completely separate from the process that updates the marketing team’s CRM. The development team that manages the user search index doesn’t even need to know about the marketing team’s needs or existence, and vice versa.&lt;/p&gt;

&lt;p&gt;However, it’s not quite accurate to say that consumers are completely independent. DynamoDB streams are similar to &lt;a href="https://aws.amazon.com/kinesis/streams/"&gt;Kinesis streams&lt;/a&gt; under the hood. These streams throttle reads in two ways: throughput and read requests. For throughput, you may read 2 MB per second from a single shard. For read requests, Kinesis streams have a limit of 5 read requests per second on a single shard. For DynamoDB streams, these limits are even stricter — &lt;a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html#Streams.Processing"&gt;AWS recommends&lt;/a&gt; having no more than 2 consumers reading from a DynamoDB stream shard. If you have more than 2 consumers, as in our example from Part I of this blog post, you’ll experience throttling.&lt;/p&gt;

&lt;p&gt;To me, the read request limits are a defect of Kinesis and DynamoDB streams. If you are hitting throughput limits on your streams, you can increase the number of shards, as the MB limit is on a per-shard basis. However, there’s no similar scaling mechanism if you want to increase the number of read requests. Every consumer needs to read from every shard, so increasing the number of shards does not help you scale out consumers. The entire point of an immutable log like Kinesis or Kafka is to allow for multiple independent consumers (check out Jay Kreps’s excellent book, &lt;a href="http://shop.oreilly.com/product/0636920034339.do"&gt;I Heart Logs&lt;/a&gt;, for a better understanding of immutable logs). With the current read request limits in Kinesis and DynamoDB streams, the number of consumers is severely constrained.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this post, we discussed some implementation details and some gotchas to watch out for when using stream-based Lambda invocations. Now it’s your turn — tell us what you build with event-driven architectures!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=Challenges%20and%20patterns%20for%20building%20event-driven%20architectures"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/stream-based-challenges-and-patterns"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>serverless</category>
      <category>eventdriven</category>
      <category>aws</category>
    </item>
    <item>
      <title>Serverless Aurora: What it means and why it’s the future of data</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Tue, 31 May 2022 21:40:20 +0000</pubDate>
      <link>https://dev.to/serverless_inc/serverless-aurora-what-it-means-and-why-its-the-future-of-data-4mlg</link>
      <guid>https://dev.to/serverless_inc/serverless-aurora-what-it-means-and-why-its-the-future-of-data-4mlg</guid>
<description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/serverless-aurora-future-of-data/"&gt;Serverless&lt;/a&gt; on Dec 4th, 2017&lt;/p&gt;

&lt;p&gt;AWS had their annual re:Invent conference last week (missed it? &lt;a href="https://serverless.com/blog/ultimate-list-serverless-announcements-reinvent/"&gt;Check out our full recap&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;AWS started the Serverless movement by releasing Lambda at re:Invent 2014. But the Lambda releases this year were run-of-the-mill incremental improvements: &lt;a href="https://serverless.com/blog/ultimate-list-serverless-announcements-reinvent/#3gb-memory"&gt;higher memory limits&lt;/a&gt;, &lt;a href="https://serverless.com/blog/ultimate-list-serverless-announcements-reinvent/#concurrency-controls"&gt;concurrency controls&lt;/a&gt;, and of course, &lt;a href="https://serverless.com/blog/ultimate-list-serverless-announcements-reinvent/#golang-support"&gt;Golang support (coming soon!)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All this to say, there was nothing game-changing in the functions-as-a-service (FaaS) world itself.&lt;/p&gt;

&lt;p&gt;Well then. Does this mean that AWS is slowing down on serverless?&lt;/p&gt;

&lt;p&gt;Hardly.&lt;/p&gt;

&lt;p&gt;We saw AWS asserting that serverless is more than just functions:&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--5XSVqL8Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1512392881130115075/xJwO6pQu_normal.jpg" alt="TJ Holowaychuk 🇺🇦 profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        TJ Holowaychuk 🇺🇦
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="mentioned-user" href="https://dev.to/tjholowaychuk"&gt;@tjholowaychuk&lt;/a&gt;
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      serverless != functions, FaaS == functions, serverless == on-demand scaling and pricing characteristics (not limited to functions)
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      20:58 PM - 30 Aug 2017
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=902999008674594816" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=902999008674594816" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=902999008674594816" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;For a deeper explanation of this, check out Ben Kehoe’s excellent post on &lt;a href="https://read.acloud.guru/the-serverless-spectrum-147b02cb2292"&gt;The Serverless Spectrum&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In five years when we look back at re:Invent 2017, we won’t be talking about the different &lt;a href="https://serverless.com/blog/ultimate-list-serverless-announcements-reinvent/#aws-eks"&gt;managed&lt;/a&gt; &lt;a href="https://serverless.com/blog/ultimate-list-serverless-announcements-reinvent/#aws-fargate"&gt;container&lt;/a&gt; offerings. We’ll be talking about this:&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;
      &lt;div class="ltag__twitter-tweet__media"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_gVaKYwV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/DP0ILeRUEAA8zkM.jpg" alt="unknown tweet media content"&gt;
      &lt;/div&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--6PyKI3sl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1529156271957594112/xd8l53zi_normal.png" alt="AWS re:Invent profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        AWS re:Invent
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @awsreinvent
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      Announcing Aurora Serverless. All the capabilities of Aurora, but pay only by the second when your database is being used &lt;a href="https://twitter.com/hashtag/reInvent"&gt;#reInvent&lt;/a&gt; 
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      16:48 PM - 29 Nov 2017
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=935913292903604224" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=935913292903604224" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=935913292903604224" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;That’s right. &lt;strong&gt;Serverless Aurora.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Why is Serverless Aurora so important? We first need to understand two things: the technology-driven changes in software architectures in the cloud era, and the current state of the data layer in serverless architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architectural Evolution
&lt;/h2&gt;

&lt;p&gt;Earlier this year, Adrian Cockcroft wrote a piece on the &lt;a href="https://read.acloud.guru/evolution-of-business-logic-from-monoliths-through-microservices-to-functions-ff464b95a44d"&gt;Evolution of Business Logic from Monoliths through Microservices, to Functions&lt;/a&gt; that blew my mind. It showed how changes in technology are driving changes in development patterns and processes. Adrian has had a front row seat for these changes over the years from his work at eBay, Netflix, and now AWS.&lt;/p&gt;

&lt;p&gt;A bunch of unrelated technologies combined to drive these changes. Faster networks and better serialization protocols enabled compute that was distributed rather than centralized. This enabled API-driven architecture patterns that used managed services from SaaS providers and broke monoliths into microservices.&lt;/p&gt;

&lt;p&gt;Chef, Puppet, EC2 and Docker and eventually Lambda combined to enable and promote ephemeral compute environments that reduced time to value and increased utilization. These tools were combined with the necessary process improvements from the DevOps movement to increase velocity. We’re seeing smaller teams deliver features faster with lower costs.&lt;/p&gt;

&lt;p&gt;These changes have been huge, but the data layer has been lagging. Adrian touched on database improvements, but they aren’t as mind-blowing, and they come with the explicit tradeoff of simpler query patterns:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Compared to relational databases, NoSQL databases provide simple but extremely cost effective, highly available and scalable databases with very low latency.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The lagging data layer is particularly problematic in Serverless architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem of the Serverless Data Layer
&lt;/h2&gt;

&lt;p&gt;I &lt;a href="https://serverless.com/blog/serverless-conf-2017-nyc-recap/#data-layer-in-the-serverless-world"&gt;spoke on this problem&lt;/a&gt; at ServerlessConf NYC in October. In short, there are two approaches you can take with databases with serverless compute: &lt;em&gt;server-full&lt;/em&gt; or &lt;em&gt;serverless&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Server-full databases
&lt;/h2&gt;

&lt;p&gt;A server-full approach uses instance-based solutions such as MySQL, Postgres, or MongoDB. I classify them as instance-based when you can tell me how many instances you have running and what their hostnames are.&lt;/p&gt;

&lt;p&gt;I like Postgres + Mongo because of &lt;a href="https://db-engines.com/en/ranking"&gt;their popularity&lt;/a&gt;, which means data design patterns are well-known and language libraries are mature.&lt;/p&gt;

&lt;p&gt;However, these instance-based solutions were designed for a pre-serverless world with long-running compute instances. This leads to the following problems:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Connection Limits&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Postgres and MySQL have limits on the number of active connections (e.g. 100) you can have at any one time. This can cause problems if a spike in traffic causes a large number of Lambda functions to fire at once.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Networking issues&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Your database instances will often have strict firewall rules about which IP addresses can access them. This can be problematic with ephemeral compute — adding custom network interfaces will add latency to your compute’s initialization.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Provisioning issues&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Serverless architectures fit well with defining Infrastructure as Code. This is harder with something like Postgres roles (users). These aren’t easily scriptable in your CloudFormation or Terraform, which spreads your configuration out across multiple tools.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Scaling issues&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is one of the most important problems. Instance-based databases aren’t designed to scale up and down quickly. If you have variable traffic during the week, you’re likely paying for the database you need at peak rather than adjusting throughout the week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless databases
&lt;/h2&gt;

&lt;p&gt;In contrast to server-full, instance-based databases, there is a class of serverless databases. Serverless databases are different in that you’re usually paying for &lt;em&gt;throughput&lt;/em&gt; rather than a particular number and size of instances.&lt;/p&gt;

&lt;p&gt;There are a few options for serverless databases, including &lt;a href="https://firebase.google.com/"&gt;Firebase&lt;/a&gt; and &lt;a href="https://fauna.com/"&gt;FaunaDB&lt;/a&gt;. However, the most common of these databases is &lt;a href="https://aws.amazon.com/dynamodb/"&gt;DynamoDB&lt;/a&gt; from AWS.&lt;/p&gt;

&lt;p&gt;DynamoDB addresses most of the problems listed above with server-full databases. There are no connection limits, just the general throughput limits from your provisioned capacity. Further, DynamoDB is &lt;em&gt;mostly&lt;/em&gt; easy to scale up and down with &lt;a href="https://read.acloud.guru/why-amazon-dynamodb-isnt-for-everyone-and-how-to-decide-when-it-s-for-you-aefc52ea9476#5aa1"&gt;some caveats&lt;/a&gt;. Also, the networking and provisioning issues are mitigated as well. All access is over HTTP and authentication / authorization is done with &lt;a href="https://serverless.com/blog/abcs-of-iam-permissions/"&gt;IAM permissions&lt;/a&gt;. This makes it much easier to use in a world with ephemeral compute.&lt;/p&gt;

&lt;p&gt;However, DynamoDB isn’t perfect as a database. You should really read Forrest Brazeal’s excellent piece on &lt;a href="https://read.acloud.guru/why-amazon-dynamodb-isnt-for-everyone-and-how-to-decide-when-it-s-for-you-aefc52ea9476"&gt;Why Amazon DynamoDB isn’t for everyone&lt;/a&gt;. In particular, the query patterns can be very difficult to get correct. DynamoDB is essentially a key-value store, which means you need to design your data model closely around your expected query patterns.&lt;/p&gt;

&lt;p&gt;To me, the biggest problem is the loss of flexibility in moving from a relational database to DynamoDB. With a relational model, it’s usually easy to query the data in a new way for a new use case. There isn’t that same flexibility for DynamoDB.&lt;/p&gt;

&lt;p&gt;Developer agility is one of the key benefits of serverless architectures. Having to migrate and rewrite data is a major blocker to this agility.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of Data
&lt;/h2&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--M3AwglVk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/938429074543869952/p-Coz-7j_normal.jpg" alt="Ben Kehoe profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Ben Kehoe
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @ben11kehoe
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir1kO05j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      Transition to the &lt;a href="https://twitter.com/hashtag/cloud"&gt;#cloud&lt;/a&gt;: treat servers like cattle, not pets. Transition to &lt;a href="https://twitter.com/hashtag/serverless"&gt;#serverless&lt;/a&gt; cloud architecture: treat servers like roaches
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      11:13 AM - 25 Mar 2016
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=713322946891227136" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fFnoeFxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-reply-action-238fe0a37991706a6880ed13941c3efd6b371e4aefe288fe8e0db85250708bc4.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=713322946891227136" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6dcrOn8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-retweet-action-632c83532a4e7de573c5c08dbb090ee18b348b13e2793175fea914827bc42046.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/like?tweet_id=713322946891227136" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRQc9lOp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/twitter-like-action-1ea89f4b87c7d37465b0eb78d51fcb7fe6c03a089805d7ea014ba71365be5171.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;Ben Kehoe loves to hammer the point that to be truly serverless, your compute should not exist when it’s not handling data. This hyper-ephemeral compute requires a new type of database. Highly-scalable, automation-friendly, global, with a flexible data model to boot.&lt;/p&gt;

&lt;p&gt;Distributed databases are hard. The NoSQL movement, including the &lt;a href="http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf"&gt;Dynamo paper&lt;/a&gt; that describes the principles of DynamoDB and influenced its cousins (Apache Cassandra, Riak, etc.), was a first step in the database revolution.&lt;/p&gt;

&lt;p&gt;The second step is in motion now. AWS announced &lt;a href="https://aws.amazon.com/about-aws/whats-new/2017/11/sign-up-for-the-preview-of-amazon-aurora-multi-master/"&gt;multi-master Aurora&lt;/a&gt;, which lets an Aurora cluster run masters that accept writes in different Availability Zones. Similarly, they announced &lt;a href="https://aws.amazon.com/dynamodb/global-tables/"&gt;DynamoDB Global Tables&lt;/a&gt;, which keeps your DynamoDB tables in sync &lt;em&gt;across different regions&lt;/em&gt; (!). Writes in São Paulo will be replicated to your copies in Ohio, Dublin, and Tokyo, seamlessly. Both services take on the difficulty of multi-master global databases for you.&lt;/p&gt;

&lt;p&gt;The next step is Serverless Aurora, due sometime in 2018. It checks all the boxes for a serverless database:&lt;/p&gt;

&lt;p&gt;✔︎ Easy scaling.&lt;/p&gt;

&lt;p&gt;✔︎ Pay-per-use.&lt;/p&gt;

&lt;p&gt;✔︎ Accessible over HTTP.&lt;/p&gt;

&lt;p&gt;✔︎ Authentication &amp;amp; authorization over tightly-scoped IAM roles rather than database roles.&lt;/p&gt;

&lt;p&gt;✔︎ A flexible relational data model that most developers know.&lt;/p&gt;

&lt;p&gt;This is a big deal.&lt;/p&gt;
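&lt;p&gt;To make “accessible over HTTP” and IAM-based authorization concrete, here is a sketch of what a single SQL call to such a database could look like as plain request data (the ARNs, field names, and SQL are illustrative assumptions, not a documented API):&lt;/p&gt;

```python
# Sketch only: a SQL statement sent as a plain, IAM-signed API request rather
# than over a long-lived database connection. Field names are modeled on how
# AWS shapes similar request bodies; the ARNs are placeholders.

def build_sql_request(cluster_arn, secret_arn, sql, params):
    """Assemble the body of an HTTP-style SQL call, authorized by IAM signing."""
    return {
        "resourceArn": cluster_arn,  # which database cluster to talk to
        "secretArn": secret_arn,     # credentials managed outside the app
        "sql": sql,
        "parameters": [
            {"name": k, "value": {"stringValue": v}} for k, v in params.items()
        ],
    }

request = build_sql_request(
    "arn:aws:rds:us-east-1:123456789012:cluster:example",
    "arn:aws:secretsmanager:us-east-1:123456789012:secret:example",
    "SELECT id FROM users WHERE name = :name",
    {"name": "ada"},
)
```

&lt;p&gt;The point of the shape: no connection pool and no database login, just a signed, self-contained request that a function can fire and forget.&lt;/p&gt;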

&lt;p&gt;We’ve seen the hints that Amazon recognizes the issues with existing relational solutions in the cloud-native paradigm. They’ve implemented &lt;a href="http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html"&gt;IAM authentication&lt;/a&gt; for MySQL and Aurora MySQL databases already. Further, the &lt;a href="https://media.amazonwebservices.com/blog/2017/aurora-design-considerations-paper.pdf"&gt;Aurora design paper&lt;/a&gt; notes how they have changed the relational database for a cloud-native world.&lt;/p&gt;

&lt;p&gt;I believe this is only the first step in Amazon’s plan to push the database further. With the rise of social networks and recommendation engines, graph databases have become more popular. Amazon’s new &lt;a href="https://aws.amazon.com/neptune/"&gt;Neptune graph database&lt;/a&gt; is a foray into another data area. Graph databases are &lt;a href="http://jimwebber.org/2011/02/on-sharding-graph-databases/"&gt;notoriously hard to shard&lt;/a&gt;, so it may be a while before we see a Serverless Neptune. I wouldn’t bet against it coming eventually.&lt;/p&gt;

&lt;p&gt;re:Invent is about the future, and that’s why it’s my favorite conference of the year. When we look back on re:Invent 2017, I have a feeling the data layer improvements will be the most important of all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=Serverless%20Aurora%3A%20What%20it%20means%20and%20why%20it%27s%20the%20future%20of%C2%A0data"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/serverless-aurora-future-of-data/"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aurora</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Using TensorFlow and the Serverless Framework for deep learning and image recognition</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Tue, 31 May 2022 17:01:02 +0000</pubDate>
      <link>https://dev.to/serverless_inc/using-tensorflow-and-the-serverless-framework-for-deep-learning-and-image-recognition-3296</link>
      <guid>https://dev.to/serverless_inc/using-tensorflow-and-the-serverless-framework-for-deep-learning-and-image-recognition-3296</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/using-tensorflow-serverless-framework-deep-learning-image-recognition/"&gt;Serverless&lt;/a&gt; on July 24th, 2018&lt;/p&gt;

&lt;p&gt;Deep and machine learning is becoming essential for a lot of businesses, be it for internal projects or external ones.&lt;/p&gt;

&lt;p&gt;The data-driven approach allows companies to build analytics tools based on their data, without constructing complicated deterministic algorithms. Deep learning allows them to use more raw data than a machine learning approach, making it applicable to a larger number of use cases. And by using pre-trained neural networks, companies can start using state-of-the-art applications like image captioning, segmentation and text analysis, without significant investment in a data science team.&lt;/p&gt;

&lt;p&gt;But one of the main issues companies face with deep/machine learning is finding the right way to deploy these models.&lt;/p&gt;

&lt;p&gt;I wholeheartedly recommend a serverless approach. Why? Because serverless provides a cheap, scalable and reliable architecture for deep learning models.&lt;/p&gt;

&lt;p&gt;In this post, we’ll cover how to build your first deep learning API using the &lt;a href="https://serverless.com/framework/"&gt;Serverless Framework&lt;/a&gt;, TensorFlow, AWS Lambda and API Gateway.&lt;/p&gt;

&lt;p&gt;We will cover the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Using serverless for deep learning — standard ways of deploying deep learning applications, and how a serverless approach can be beneficial.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Hello world” code — a basic Lambda function with only 4 lines of code. There is no API here, we’ll start with the simplest possible example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code decomposition — looking through the configuration file, and the python code for handling the model, to understand how the whole example works.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;API example — get a working API for image recognition on top of our example.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to skip the background about what TensorFlow is and why you’d want to use serverless for machine learning, &lt;a href="https://www.serverless.com/blog/using-tensorflow-serverless-framework-deep-learning-image-recognition/#the-basic-4-line-example"&gt;the actual example starts here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Serverless + TensorFlow?
&lt;/h2&gt;

&lt;p&gt;First of all, let’s briefly cover what TensorFlow is: an open source library that allows developers to easily create, train and deploy neural networks. It’s currently the most popular framework for deep learning, and is adored by both novices and experts.&lt;/p&gt;

&lt;p&gt;Currently, the standard way to deploy a pre-trained TensorFlow model is to use a cluster of instances.&lt;/p&gt;

&lt;p&gt;So to build a deep learning API, we would need a stack like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MFSVO6ms--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AbmnhD2CK4o1Gqpdn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MFSVO6ms--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AbmnhD2CK4o1Gqpdn.gif" alt="" width="880" height="647"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main pain points in this infrastructure are that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;you have to manage the cluster — its size, type and logic for scaling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;you have to pay for unused server power&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;you have to manage the container logic — logging, handling of multiple requests, etc&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With AWS Lambda, we can make the stack significantly easier and use simpler architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--93umokxN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ak0eS2HmLg7bNo8fO.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--93umokxN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ak0eS2HmLg7bNo8fO.png" alt="" width="632" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The difference in both approaches
&lt;/h2&gt;

&lt;p&gt;First of all, a serverless approach is very scalable. It can scale up to 10k concurrent requests without writing any additional logic. It’s perfect for handling random high loads, as it doesn’t take any additional time to scale.&lt;/p&gt;

&lt;p&gt;Second, you don’t have to pay for unused server time. Serverless architectures have a pay-as-you-go model. Meaning, if you have 25k requests per month, you will only pay for 25k requests.&lt;/p&gt;

&lt;p&gt;And not only does it make pricing completely transparent, it’s just a lot cheaper. For the example TensorFlow model we’ll cover in this post, it costs about $1 per 25k requests. A similar cluster would cost a &lt;em&gt;lot&lt;/em&gt; more, and you’d only achieve pricing parity once you hit 1M requests.&lt;/p&gt;

&lt;p&gt;Third, infrastructure itself becomes a lot easier. You don’t have to handle Docker containers, logic for multiple requests, or cluster orchestration.&lt;/p&gt;

&lt;p&gt;Bottom line: you don’t have to hire someone to do devops for this, as any backend developer can easily handle it.&lt;/p&gt;

&lt;p&gt;As we’ll see in a minute, deploying a serverless deep/machine learning infrastructure can be done with as little as 4 lines of code.&lt;/p&gt;

&lt;p&gt;That said, when &lt;em&gt;wouldn’t&lt;/em&gt; you go with a serverless approach? There are some limitations to be aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Lambda has &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/limits.html"&gt;strict limits&lt;/a&gt; in terms of processing time and used memory, you’ll want to make sure you won’t be hitting those&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As mentioned above, clusters are more cost effective after a certain number of requests. In cases where you don’t have peak loads and the number of requests is really high (I mean 10M per month high), a cluster will actually save you money.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lambda has a small, but certain, startup time. TensorFlow also has to download the model from S3 to start up. For the example in this post, a cold execution will take 4.5 seconds and a warm execution will take 3 seconds. It may not be critical for some applications, but if you are focused on real-time execution then a cluster will be more responsive.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
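&lt;p&gt;The cold-versus-warm gap in that last point comes from the fact that anything cached at module level survives between invocations of the same container. A minimal sketch of that caching pattern (the model loader is injected here so the sketch is self-contained; in real code it would be the TensorFlow graph load):&lt;/p&gt;

```python
# Warm-container caching: module-level state survives between invocations of
# the same Lambda container, so the expensive model load runs only on a cold
# start. The loader is injected so this sketch stays self-contained; in real
# code it would download and import the TensorFlow graph.

_MODEL = None  # cached across warm invocations

def handler(event, context, load_model):
    """Lambda-style handler; load_model stands in for the real model load."""
    global _MODEL
    if _MODEL is None:   # only true on a cold start
        _MODEL = load_model()
    return _MODEL(event.get("url", ""))
```

&lt;p&gt;On a cold start the loader runs once; every warm invocation afterwards skips it, which accounts for much of the gap between the 4.5-second cold and 3-second warm executions above.&lt;/p&gt;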

&lt;h2&gt;
  
  
  The basic 4 line example
&lt;/h2&gt;

&lt;p&gt;Let’s get started with our serverless deep learning API!&lt;/p&gt;

&lt;p&gt;For this example, I’m using a pretty popular application of neural networks: image recognition. Our application will take an image as input, and return a description of the object in it.&lt;/p&gt;

&lt;p&gt;These kinds of applications are commonly used to filter visual content or classify stacks of images into certain groups. Our app will try to recognize this picture of a panda:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bY2Uetpc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A_4XOBCbGFjbiw5vk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bY2Uetpc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A_4XOBCbGFjbiw5vk.png" alt="" width="102" height="101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The model and example are also available &lt;a href="https://www.tensorflow.org/tutorials/images/image_recognition"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’ll use the following stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;API Gateway for managing requests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Lambda for processing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Serverless framework for handling deployment and configuration&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  “Hello world” code
&lt;/h2&gt;

&lt;p&gt;To get started, you’ll need to &lt;a href="https://serverless.com/framework/docs/providers/aws/guide/installation/"&gt;have the Serverless Framework installed&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Create an empty folder and run the following commands in the CLI:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
You’ll receive the following response:&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
As you can see, our application successfully recognized this picture of a panda (0.89 score).

&lt;p&gt;That’s it. You’ve just successfully deployed to AWS Lambda with TensorFlow, using the Inception-v3 model for image recognition!&lt;/p&gt;
&lt;h2&gt;
  
  
  Code decomposition — breaking down the model
&lt;/h2&gt;

&lt;p&gt;Let’s start with the Serverless YAML file. Nothing uncommon here; we’re using a pretty standard deployment method:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
If we look into the index.py file itself, we will see that we first need to download the model (.pb file) to the AWS Lambda /tmp folder, and then load it via a standard TensorFlow import function.
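&lt;p&gt;That download step is usually guarded so it only runs when the file is missing from /tmp, since warm containers keep their filesystem. A sketch of the guard (the fetch callable stands in for the boto3 S3 download in the real code):&lt;/p&gt;

```python
import os

def ensure_model(fetch, path):
    """Download the .pb graph to /tmp only if it is missing; warm containers
    keep their filesystem, so repeat invocations skip the download entirely."""
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        fetch(path)  # in the real code: an S3 download of the model file
    return path
```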

&lt;p&gt;Here are the parts you have to keep in mind if you want to plug in your own model, with links straight to the full code on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ryfeus/lambda-packs/blob/master/Tensorflow/source/index.py#L141"&gt;**Model download from S3&lt;/a&gt;:**&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
&lt;a href="https://github.com/ryfeus/lambda-packs/blob/master/Tensorflow/source/index.py#L80"&gt;**Model import&lt;/a&gt;:**&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
&lt;a href="https://github.com/ryfeus/lambda-packs/blob/master/Tensorflow/source/index.py#L147"&gt;**Getting the image&lt;/a&gt;:**&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
&lt;a href="https://github.com/ryfeus/lambda-packs/blob/master/Tensorflow/source/index.py#L107"&gt;**Getting predictions from the model&lt;/a&gt;:**&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
Now, let’s move on, and add an API to this!
&lt;h2&gt;
  
  
  API example
&lt;/h2&gt;

&lt;p&gt;The simplest way to add an API to the example is to modify the serverless YAML file:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
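&lt;p&gt;A hedged sketch of what that modification looks like: attach an http event to the function (the function and path names here are assumptions, chosen to match the endpoint shown below):&lt;/p&gt;

```yaml
# serverless.yml (fragment) -- the function and path names are illustrative
functions:
  main:
    handler: index.handler
    events:
      - http:
          path: handler
          method: get
```

&lt;p&gt;On the next deploy, the framework creates the API Gateway endpoint and prints its URL.&lt;/p&gt;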
&lt;br&gt;
Then, we redeploy the stack:&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
And receive the following response:&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
To test the link, we can just open it in the browser:

&lt;p&gt;&lt;a href="https://.execute-api.us-east-1.amazonaws.com/dev/handler"&gt;https://.execute-api.us-east-1.amazonaws.com/dev/handler&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or run curl:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
We will receive:&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We’ve created a TensorFlow endpoint on AWS Lambda via the &lt;a href="https://serverless.com/framework/"&gt;Serverless Framework&lt;/a&gt;. Setting everything up was extremely easy, and saved us a lot of time over the more traditional approach.&lt;/p&gt;

&lt;p&gt;By modifying the serverless YAML file, you can connect SQS and, say, create a deep learning pipeline, or even connect it to a chatbot via AWS Lex.&lt;/p&gt;

&lt;p&gt;As a hobby, I port a lot of libraries to make them serverless-friendly. &lt;a href="https://github.com/ryfeus/lambda-packs"&gt;You can look at them here&lt;/a&gt;. They all have an MIT license, so feel free to modify and use them for your project.&lt;/p&gt;

&lt;p&gt;The libraries include the following examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Machine learning libraries (Scikit, LightGBM)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Computer vision libraries (Skimage, OpenCV, PIL)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;OCR libraries (Tesseract)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NLP libraries (Spacy)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Web scraping libraries (Selenium, PhantomJS, lxml)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Load testing libraries (WRK, pyrestest)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m excited to see how others are using serverless to empower their development. Feel free to drop me a line in the comments, and happy developing!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=Using%20TensorFlow%20and%20the%20Serverless%20Framework%20for%20deep%20learning%20and%20image%20recognition"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/using-tensorflow-serverless-framework-deep-learning-image-recognition/"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tensorflow</category>
      <category>serverless</category>
      <category>framework</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>6 Things to Know Before Migrating An Existing Service to Serverless</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Tue, 31 May 2022 15:49:52 +0000</pubDate>
      <link>https://dev.to/serverless_inc/how-to-monitor-aws-account-activity-with-cloudtrail-cloudwatch-events-and-serverless-36e</link>
      <guid>https://dev.to/serverless_inc/how-to-monitor-aws-account-activity-with-cloudtrail-cloudwatch-events-and-serverless-36e</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/6-things-to-know-before-migrating-an-existing-service-to-serverless"&gt;Serverless&lt;/a&gt; on August 7th, 2017&lt;/p&gt;

&lt;p&gt;Last year, my company decided to make the plunge. We were going to go Serverless! Except…most of the resources about serverless architectures are about how to start from scratch, not how to migrate existing services over.&lt;/p&gt;

&lt;p&gt;We spent eight months figuring it out along the way and, for all of you serverless hopefuls, we made a cheat sheet. These are the steps that worked for us:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Identify the problems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Train the existing team&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a Proof of Concept to verify that the problem is solved&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimize the solution to take advantage of the cloud&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate your Continuous Integration/Continuous Deployment pipeline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate your testing&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Identify the problems
&lt;/h2&gt;

&lt;p&gt;What problems will serverless solve that your current solution does not? We (for example) wanted serverless to help us (1) lower operational costs, and (2) give us an easier way to replace a bunch of legacy systems with a small team.&lt;/p&gt;

&lt;p&gt;At my company, we decided that since AWS Lambda can be used as a glue between different AWS managed services, it was the best option for us.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Train the team
&lt;/h2&gt;

&lt;p&gt;After you have defined the problems, it is easier to define (more or less) what set of technologies can help you. Your developers are the ones who already know your business logic and service requirements better than anyone else. So instead of heavily outsourcing, I’d recommend that if no one on your existing team is familiar with serverless, it is time to train them.&lt;/p&gt;

&lt;p&gt;We encouraged our developers to go to meetups, conferences and to spend time (even work hours) tinkering with new technologies. In addition to that, we did hire a Serverless consultant to show us how to think in an event-driven manner and make sure we were following best practices.&lt;/p&gt;

&lt;p&gt;In my opinion, one of the reasons our project was so successful was the weekly workshop meetings we held. They lasted 3–4 hours per week, and we used that time for discussing new serverless-related topics, solving issues and implementing solutions. We all got to learn a lot from each other, and the workshop was very inspiring.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Create a Proof of Concept
&lt;/h2&gt;

&lt;p&gt;After you have identified the key problems to solve, and after your team has a better understanding of the tools available, it’s time to create a Proof of Concept. By now, your team should have an idea of what to do next: a “hypothesis”. A Proof of Concept will help the team validate that hypothesis.&lt;/p&gt;

&lt;p&gt;Remember: a Proof of Concept is &lt;em&gt;not&lt;/em&gt; production code. Use it to focus on solving your problem and get rid of it. Your Proof of Concept should go to the trash after validating your hypothesis.&lt;/p&gt;

&lt;p&gt;In our project, we developed five Proofs of Concept. Recall that our problems were: (1) to replace legacy systems and (2) reduce operational costs. Our hypotheses were as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To replace legacy systems:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Cognito will replace the existing authentication system&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DynamoDB will replace the existing Riak database as our NoSQL database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Lambda + S3 + Elastic Transcoding will replace our existing transcoding process&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Lambda + API Gateway + S3 will replace our existing image resizing and provide better caching mechanisms&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;To reduce high operational costs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By implementing these Proofs of Concept, my team ended up validating all of our hypotheses.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Optimize the solution to take advantage of the cloud
&lt;/h2&gt;

&lt;p&gt;This step goes hand in hand with the previous one. When thinking about your Proofs of Concept and new architecture, it’s important to take full advantage of the cloud. As in: don’t just grab your instances and decompose them into AWS Lambdas and API Gateways backed by DynamoDB; try to think about how to take advantage of cloud-managed services, like queues and caches.&lt;/p&gt;

&lt;p&gt;Also remember that by migrating everything to Serverless, you are transforming the architecture of your system into an event-driven architecture. In an event-driven architecture, events move around your system and all the services are decoupled from each other. AWS has a lot of services that can be used to manage event communication, like queues and streams. S3 is a great place to store your events. DynamoDB Streams can be used to let other services know that there was a change in your DynamoDB database.&lt;/p&gt;
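&lt;p&gt;As an example of that last point, a DynamoDB Streams consumer is just another Lambda function handed batches of change records. A minimal sketch of the record handling (the userId attribute is illustrative; the event shape is the standard Streams one):&lt;/p&gt;

```python
# A DynamoDB Streams consumer: Lambda receives batches of change records,
# each carrying the event type (INSERT, MODIFY, REMOVE) and, for inserts,
# the new item image in DynamoDB's typed attribute format.
# (The userId attribute is illustrative.)

def handler(event, context):
    """Collect the user ids of newly inserted items from a Streams batch."""
    inserted = []
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":
            image = record["dynamodb"]["NewImage"]
            inserted.append(image["userId"]["S"])  # {"S": ...} marks a string
    return inserted
```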

&lt;h2&gt;
  
  
  5. Automate your Continuous Integration/Continuous Deployment pipeline
&lt;/h2&gt;

&lt;p&gt;Since people can disagree on what exactly a ‘microservice’ is, I call a microservice a collection of AWS Lambdas and other resources related to some very specific domain, like authentication, application management or transcoding. Serverless architectures involve lots of resources being deployed into different environments in this way. When you have several moving parts, one way to make things simpler is to automate everything you can.&lt;/p&gt;

&lt;p&gt;On my team, we used the Serverless Framework to organize our projects and to automate microservice deployment into different environments. We wanted to define all our infrastructure configuration as code. The Serverless Framework helped us do that, as all the resources needed by a microservice can be defined using CloudFormation notation in the Serverless YAML file.&lt;/p&gt;

&lt;p&gt;We used the Jenkins Continuous Integration server to take care of running the Serverless Framework deployment in three different environments (development, staging, and production). For each environment, we used a different AWS account. We wanted three accounts so we could take advantage of each account’s soft limits on managed resources as much as possible, and also to keep the different environments isolated.&lt;/p&gt;
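&lt;p&gt;Much of that per-environment setup can live in a few lines of the Serverless YAML file, with the stage passed in by the CI job. A hedged sketch (the profile names are assumptions):&lt;/p&gt;

```yaml
# serverless.yml (fragment) -- profile names are illustrative
provider:
  name: aws
  stage: ${opt:stage, 'development'}
  # each stage maps to its own AWS account via a separate credentials profile
  profile: ${self:custom.profiles.${self:provider.stage}}

custom:
  profiles:
    development: dev-account
    staging: staging-account
    production: prod-account
```

&lt;p&gt;The CI job then deploys with the stage flag set per pipeline, and each deployment lands in its own account.&lt;/p&gt;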

&lt;h2&gt;
  
  
  6. Automate your testing
&lt;/h2&gt;

&lt;p&gt;Testing is undervalued by a lot of teams. In a Serverless and event-driven architecture, the complexity of the code moves to the architecture itself. Because of this, testing at all levels is critical for peace of mind. Test, test, test. Also, test.&lt;/p&gt;

&lt;p&gt;Unit tests will help you to make sure that your business logic works. For your managed services, you should write integration tests against them. It’s important to define clear interfaces between your services and the managed services so that you can have control of what is going in and out of your service.&lt;/p&gt;
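&lt;p&gt;One way to keep those interfaces testable is to hide each managed service behind a small wrapper, so unit tests can swap in a fake. A sketch (the class and method names are illustrative):&lt;/p&gt;

```python
# Wrap the managed service behind a narrow interface so business logic can be
# unit-tested with a fake, while the real wrapper is covered separately by
# integration tests against the actual service.

class UserStore:
    """Production wrapper; would call DynamoDB via boto3 in the real code."""
    def get_user(self, user_id):
        raise NotImplementedError  # integration-tested against the service

class FakeUserStore:
    """In-memory stand-in for unit tests."""
    def __init__(self, users):
        self.users = users
    def get_user(self, user_id):
        return self.users.get(user_id)

def greeting(store, user_id):
    """Business logic only ever sees the interface, never boto3."""
    user = store.get_user(user_id)
    if user is None:
        return "Hello, stranger"
    return "Hello, " + user["name"]
```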

&lt;p&gt;Don’t forget about End-to-end Testing. In an event-driven architecture, the services are decoupled from each other; as a result, it can be hard to know how the events are moving around the architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Migrating an existing service to Serverless was work, true. But honestly, it was also fun. The two most important things to keep in mind when migrating one (or all) of your existing services to serverless are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;optimize your architecture to take advantage of the cloud&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;remember that your new architecture is an event-driven one&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Best of luck, and see you on the other side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=6%20Things%20to%20Know%20Before%20Migrating%20An%20Existing%20Service%20to%20Serverless"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/6-things-to-know-before-migrating-an-existing-service-to-serverless"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>tutorial</category>
      <category>framework</category>
      <category>api</category>
    </item>
    <item>
      <title>The State of Serverless Multi-cloud</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Fri, 27 May 2022 17:29:17 +0000</pubDate>
      <link>https://dev.to/serverless_inc/the-state-of-serverless-multi-cloud-13lf</link>
      <guid>https://dev.to/serverless_inc/the-state-of-serverless-multi-cloud-13lf</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/state-of-serverless-multi-cloud"&gt;Serverless&lt;/a&gt; on November 6th, 2017&lt;/p&gt;

&lt;h2&gt;
  
  
  To multi-cloud, or not to multi-cloud
&lt;/h2&gt;

&lt;p&gt;Vendor lock-in runs deep in serverless applications. “Cloud provider” used to mean “whoever hosts your servers”. In a serverless paradigm, it means “whoever runs your functions”.&lt;/p&gt;

&lt;p&gt;And when the space doesn’t (yet) have standardization, developers must twirl those functions round and round in a whole vendor ecosystem of events and data storage. There’s no way to use Azure Functions and EC2 together.&lt;/p&gt;

&lt;p&gt;“But,” you say, “ &lt;a href="https://serverless.com/framework/"&gt;vendor-agnostic frameworks&lt;/a&gt; let you easily deploy functions across providers, at least.” That they do! But then, there’s the small technicality of language choice. Write your application in Python and you’ll have a hard time moving that over to Google Cloud Functions.&lt;/p&gt;

&lt;p&gt;Given all this, what do we make of the multi-cloud? Is it a pipe dream, or an attainable goal-and do we &lt;em&gt;actually&lt;/em&gt; need it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-cloud gives you wings
&lt;/h2&gt;

&lt;p&gt;The biggest advantage to serverless multi-cloud we hear in the field is feature arbitrage. Imagine plucking all your favorite aspects of each cloud provider and placing them nicely together in your very own, custom-made bouquet.&lt;/p&gt;

&lt;p&gt;It’s hard to commit to a single ecosystem, especially when serverless compute vendors are constantly adding new features that change the value equation. AWS Lambda is adding traffic shifting in Lambda aliases any day now; Microsoft Azure has their (still unique) Logic Apps, which lets you manage event-driven services much like you’re composing an IFTTT.&lt;/p&gt;

&lt;p&gt;Pricing works out differently across vendors for different services. The same project could work better elsewhere in less than 6 months because of all this rapid feature launching. We fear lock-in because it removes our flexibility of choice.&lt;/p&gt;

&lt;p&gt;And then, add failover into the mix.&lt;/p&gt;

&lt;p&gt;With serverless compute, you don’t have to worry about redundancy quite as much; Lambda, for instance, automatically scales across multiple availability zones for you. But entire regions can (and do) go down.&lt;/p&gt;

&lt;p&gt;While it’s a rare corner case, cloud outages can be devastating; we see larger companies caring more about this and moving to incorporate strategies for full cloud redundancy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-cloud gives you pause
&lt;/h2&gt;

&lt;p&gt;As fun as dreamspace is, we do still have to wake up in the morning and ask ourselves: is a multi-cloud attainable and worth it?&lt;/p&gt;

&lt;p&gt;Let’s say you want to actually try and instrument full cloud failover. The first thing you’ll have to do is write everything in the only language all four major cloud providers support. Aka: JavaScript.&lt;/p&gt;

&lt;p&gt;Then, you’ll need to abandon your cloud databases for something like MySQL. You’ll need to constantly replicate that data from one cloud provider to another so that everything is up to date when failover occurs. And you have to think, hard, about how each cloud handles logging. And secrets. And metrics.&lt;/p&gt;

&lt;p&gt;The rule of the game is to make everything as generic as possible, which seems to go against the serverless ethos, in a way, and prevents you from utilizing those powerful features you were trying to get with multi-cloud in the first place.&lt;/p&gt;

&lt;p&gt;It’s also worth mentioning that, for those who do choose to run an ecosystem across multiple providers, you’re paying for transfer. Not cheap.&lt;/p&gt;

&lt;p&gt;Maybe the answer ends up being: yes, it would be cool to leverage any service I want, whenever I want, and still maintain that serverless flexibility, but things just aren’t there yet. We don’t have a Schroedinger’s cake.&lt;/p&gt;

&lt;h2&gt;
  
  
  The multi-way forward
&lt;/h2&gt;

&lt;p&gt;There are a series of things that could happen to make multi-cloud easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-cloud service compatibility.&lt;/strong&gt; Data management and storage, for instance, are ecosystem-dependent. Google has the best machine learning right now, and while it’s &lt;em&gt;feasible&lt;/em&gt; to use Google Cloud services from other cloud providers, it isn’t necessarily simple.&lt;/p&gt;

&lt;p&gt;To make multi-cloud less work and less compromise, we need better ways to share data across cloud providers, and better ways to react to any event source regardless of cloud provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add shims for polyglot language support.&lt;/strong&gt; That way, it wouldn’t matter whether or not you wrote your functions in Go. Doing this yourself could be cumbersome, but tools that facilitate it for you will probably appear soon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smartly route your data.&lt;/strong&gt; This one’s on you, the sprightly developer. Divide your application into two conceptual parts: the ‘critical path’ and ‘specialized features that don’t need to work 100% of the time’.&lt;/p&gt;

&lt;p&gt;Anything in the critical path (things that serve your site, for instance) should be written in a cloud-agnostic way. That makes it easier to implement failover or port things over to another provider, should you need. Specialized services (e.g., image tagging) can be maintained separately for a time, or be made to process important data later in case of an outage.&lt;/p&gt;
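&lt;p&gt;As an illustrative sketch (the names and signatures here are hypothetical, not from any particular framework), the critical path can live in one provider-agnostic function, with thin per-provider adapters around it:&lt;/p&gt;

```python
import json

def handle_order(payload: dict) -> dict:
    """Cloud-agnostic business logic: no provider SDKs, plain dicts in and out."""
    return {"ok": True, "order_id": payload.get("order_id")}

# AWS Lambda adapter: unpacks the API Gateway event shape.
def aws_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    result = handle_order(body)
    return {"statusCode": 200, "body": json.dumps(result)}

# Google Cloud Functions (HTTP) adapter: unpacks a Flask-style request.
def gcf_handler(request):
    result = handle_order(request.get_json(silent=True) or {})
    return json.dumps(result), 200, {"Content-Type": "application/json"}
```

&lt;p&gt;Only the adapters touch provider-specific event shapes, so failover or porting means rewriting the thin wrappers, not the logic.&lt;/p&gt;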

&lt;h2&gt;
  
  
  The…multi-verse?
&lt;/h2&gt;

&lt;p&gt;Multi-cloud probably won’t ever be completely work-free, but we expect it’ll be easy enough, sooner rather than later.&lt;/p&gt;

&lt;p&gt;And then we’ll start to see the landscape shift. Cloud providers won’t be fighting for bigger chunks of your server space; they’ll be fighting for bigger chunks of your application, in the form of features and services.&lt;/p&gt;

&lt;p&gt;This is frankly already happening. We stereotype giants like Microsoft and Amazon as being slow to innovate, yet they’ve been rushing to push feature after serverless feature for the past two years. They’re moving faster than most startups.&lt;/p&gt;

&lt;p&gt;As an industry, we’re headed for a user-centric software reality. Businesses will increasingly differentiate themselves with highly customized software, and multi-cloud is how they’ll do it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=The%20State%20of%20Serverless%20Multi-cloud"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/state-of-serverless-multi-cloud"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>cloud</category>
      <category>news</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Deploying PyTorch Model as a Serverless Service</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Thu, 26 May 2022 17:28:28 +0000</pubDate>
      <link>https://dev.to/serverless_inc/deploying-pytorch-model-as-a-serverless-service-51ad</link>
      <guid>https://dev.to/serverless_inc/deploying-pytorch-model-as-a-serverless-service-51ad</guid>
      <description>&lt;p&gt;Posted at &lt;a href="https://www.serverless.com/blog/deploying-pytorch-model-as-a-serverless-service" rel="noopener noreferrer"&gt;Serverless&lt;/a&gt; by &lt;a href="https://twitter.com/anandsm46966924" rel="noopener noreferrer"&gt;Anand Menon&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AoGigzQOULqnfU_a0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AoGigzQOULqnfU_a0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Due to the latest advancements in the Deep Learning ecosystem (improved frameworks, production-ready architectures, pre-trained models, etc.), building a decent model is easy (not really 😅), but the biggest question that arises afterwards is: “I have built a model, what’s next?”&lt;/p&gt;

&lt;p&gt;A model is only as good as the value it provides to customers, so to make a model useful it should be served to millions of users in a very cost-effective way. Now how do we serve or deploy a model to users? Easy: we can get on-demand data, storage and computing power by leveraging any of the common cloud platforms like AWS, GCP or Azure. For this tutorial we are going with the AWS cloud platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The best things in life come for free, but not AWS resources&lt;/strong&gt;. Cloud pricing, even though very competitive, can stand as a hurdle for engineers building scalable and resource-intensive products. &lt;strong&gt;Setting up dedicated instance infrastructure when building your MVP with AI capabilities is a suicide mission&lt;/strong&gt;, because we have no idea about user retention, product acceptance in the market, revenue generation from the product, etc. Building this stack on dedicated cloud infrastructure from scratch is expensive for several reasons, which we will discuss soon.&lt;/p&gt;

&lt;p&gt;So a typical deep learning API stack would look as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AK80NEWYify1Sm8Qk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AK80NEWYify1Sm8Qk.gif"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;As we can see this is a very complex stack and the drawback of such infrastructure is that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We have to manage the cluster — its size, type and logic for scaling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Client has to pay for idle server power&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We have to manage the container logic — logging, handling of multiple requests, etc&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Requires a lot of expertise in Cloud Architecture&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To solve the cons of a dedicated cloud infrastructure, cloud providers came up with serverless services (e.g. AWS Lambda), whose main attractions are that &lt;em&gt;we don’t have to manage any servers and we are billed on the number of function executions rather than on an hourly basis&lt;/em&gt; (1M free requests per month).&lt;/p&gt;
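&lt;p&gt;To make that billing model concrete, here is a rough back-of-the-envelope sketch. The rates below are the published Lambda prices for most regions at the time of writing ($0.20 per 1M requests and $0.0000166667 per GB-second, with 1M requests and 400,000 GB-seconds free each month); treat them as assumptions and check current pricing.&lt;/p&gt;

```python
# Rough monthly Lambda cost estimate; prices are assumptions, see lead-in.
def monthly_lambda_cost(requests, avg_ms, memory_gb):
    gb_seconds = requests * (avg_ms / 1000.0) * memory_gb
    req_cost = max(requests - 1_000_000, 0) / 1_000_000 * 0.20
    compute_cost = max(gb_seconds - 400_000, 0) * 0.0000166667
    return round(req_cost + compute_cost, 2)

# e.g. 500k requests/month at 500 ms on 1 GB stays inside the free tier:
#   monthly_lambda_cost(500_000, 500, 1) -> 0.0
```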

&lt;p&gt;The latest advancements in the serverless ecosystem, like container support and memory improvements, have opened up a lot of opportunities for Deep Learning practitioners to deploy models as an inference API on the Lambda stack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So today we will be deploying a PyTorch model as a Serverless API leveraging Lambda, ECR and Serverless framework.&lt;/strong&gt; &lt;strong&gt;&lt;em&gt;So if you want to jump right into code please check out my &lt;a href="https://github.com/anandsm7/BERT_as_serverless_service" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial, &lt;strong&gt;we will be deploying a simple text classification model using BERT🤗 which classifies daily user transaction logs to classes like ‘food’, ‘transport’, ‘bills’..etc and serves it as an API&lt;/strong&gt;. I will be covering topics in detail as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A brief explanation about all the resources being used&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Building our model inference pipeline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating a Lambda function using serverless framework&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Combining our inference pipeline with the lambda function&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build a docker image and testing our API locally&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tagging and deploying images to AWS ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploying lambda functions using the image deployed in AWS ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, model inference using the serverless API&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda Service&lt;/strong&gt; — “With great power comes less responsibility”&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/welcome.html" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; is basically a service which lets you run functions on cloud servers without actually managing any servers. Managing servers is never an easy task as mentioned earlier. With serverless we don’t have to think about scalability and robustness of our infrastructure, since AWS takes care of it for us.&lt;br&gt;
To communicate with AWS resources like ECR, S3, etc. programmatically, we need to&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html" rel="noopener noreferrer"&gt; install the AWS CLI&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Serverless Framework&lt;/strong&gt;&lt;br&gt;
The &lt;a href="https://www.serverless.com/" rel="noopener noreferrer"&gt;Serverless&lt;/a&gt; framework lets you quickly construct and deploy serverless applications using services like&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt; AWS Lambda&lt;/a&gt;,&lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt; S3&lt;/a&gt;, &lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;Amazon API Gateway&lt;/a&gt;, etc. The framework leverages&lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt; AWS CloudFormation&lt;/a&gt; to fire up all the resources required to build our inference API, driven by a YAML configuration file.&lt;br&gt;
To install the Serverless framework, follow the &lt;a href="https://www.serverless.com/framework/docs/providers/aws/guide/installation/" rel="noopener noreferrer"&gt;instructions&lt;/a&gt;, and make sure to configure it with your AWS secret access keys following the&lt;a href="https://www.serverless.com/framework/docs/providers/aws/cli-reference/config-credentials/" rel="noopener noreferrer"&gt; guide&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS ECR&lt;/strong&gt; — Docker 🐳 is all you need&lt;br&gt;
&lt;a href="https://aws.amazon.com/ecr/" rel="noopener noreferrer"&gt;Amazon Elastic Container Registry&lt;/a&gt; (ECR) is a fully managed container registry that makes it easy to store, manage, share, and deploy your container images and artifacts anywhere. We build a docker image of our classifier pipeline and store it in AWS ECR.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2APkPeUY0oYQtfJdjH.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2APkPeUY0oYQtfJdjH.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2182%2F0%2AOl-gSbz4QlfzT-j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2182%2F0%2AOl-gSbz4QlfzT-j0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our complete API architecture is shown above. A user makes an API request with one of their daily transaction logs; the request passes through AWS API Gateway, which fires up the Lambda function. On the initial request, Lambda starts a 10GB pod and fetches the docker image from ECR to start our classifier container. The docker image comprises the model and an inference script (saving the model in object storage is a better approach, but we will go with this one for simplicity). Based on the user query, the lambda function performs model inference and returns the final transaction class, as shown below:&lt;/p&gt;

&lt;p&gt;Since I have explained the whole process, we can now get our hands dirty with code. I won’t be explaining the whole BERT classifier model training pipeline, as that is not the purpose of this blog. You can check out my &lt;a href="https://colab.research.google.com/drive/1IAJrx15szXsGDjKx1qihrvzAWqp2exz5?usp=sharing" rel="noopener noreferrer"&gt;&lt;strong&gt;colab notebook&lt;/strong&gt;&lt;/a&gt; to train the user log classification model. After training completes, you will get a &lt;strong&gt;pytorch_model.bin&lt;/strong&gt; file, which we will use as our model for building our serverless API.&lt;/p&gt;

&lt;p&gt;Now we are going to create a python lambda function using serverless CLI command&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless create --template aws-python3 --path serverless-logbert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The above command will create a simple boilerplate with a basic python handler script, serverless.yml, requirements.txt, etc. Since we are building a Deep Learning text classification model using the pytorch framework, some packages need to be installed, so let’s add them to our requirements.txt. Since we are not leveraging a GPU for inference, we can go with a minimal pytorch CPU build to save storage.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-f https://download.pytorch.org/whl/torch_stable.html
torch==1.5.0+cpu
tqdm==4.60.0
sentencepiece==0.1.85
transformers==3.4.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now let’s jump directly into our handler function. The Lambda function &lt;em&gt;handler&lt;/em&gt; is the method in your function code that processes events. When your function is invoked, Lambda runs the handler method; when the handler exits or returns a response, it becomes available to handle another event. Our handler code is as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
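&lt;p&gt;The gist embed does not render in this feed, so here is a rough sketch of the shape of such a handler. It follows the article’s names, but the &lt;em&gt;sentence_prediction()&lt;/em&gt; stub below returns a fixed label in place of the real BERT inference (preprocess, tokenize, forward pass, argmax), so only the structure is authoritative:&lt;/p&gt;

```python
import json

def sentence_prediction(sentence: str) -> str:
    # Placeholder: the real version preprocesses and tokenizes the sentence,
    # runs the trained BERT model, and returns the argmax class label.
    return "food"

def predict(event, context):
    # API Gateway delivers the request JSON in event["body"].
    body = json.loads(event.get("body") or "{}")
    sentence = body.get("sentence", "")
    label = sentence_prediction(sentence)
    return {
        "statusCode": 200,
        "body": json.dumps({"sentence": sentence, "prediction": label}),
    }
```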

&lt;p&gt;&lt;br&gt;&lt;br&gt;
In the above code, the sentence_prediction() method takes the user input, then preprocesses it, tokenizes it, and passes it to the trained BERT model, which returns the final prediction. Currently the function returns the prediction class with the highest confidence score. You can check out the inference code &lt;a href="https://github.com/anandsm7/BERT_as_serverless_service/blob/main/inference.py" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now we are all set to test our inference API locally using docker. Make sure docker is installed on your local machine; see the &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;docker installation guide&lt;/a&gt;. The Dockerfile is as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM public.ecr.aws/lambda/python:3.8

# copy function code and models into /var/task
COPY ./ ${LAMBDA_TASK_ROOT}/

# install our dependencies
RUN python3 -m pip install -r requirements.txt --target ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler 
CMD [ "handler.predict"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now let’s build our docker image and run our container for testing&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t logbert-lambda .
docker run -p 8080:8080 logbert-lambda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2044%2F0%2A7yHu-jpRwXdJqsMn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2044%2F0%2A7yHu-jpRwXdJqsMn.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are now ready to test out our API locally.&lt;/p&gt;

&lt;p&gt;The URL endpoint should have the following format: {hostname}/{lambda-api-version}/functions/function/invocations&lt;/p&gt;
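&lt;p&gt;For example, with the container running locally, the endpoint can be built like this (the Lambda runtime emulator in the base image serves API version 2015-03-31; the sentence payload is illustrative):&lt;/p&gt;

```python
import json

# Build the local invocation endpoint in the {hostname}/{version}/... format.
def invocation_url(hostname="http://localhost:8080", api_version="2015-03-31"):
    return f"{hostname}/{api_version}/functions/function/invocations"

# Example request body for the handler; POST it with curl, Postman, or requests.
payload = json.dumps({"body": json.dumps({"sentence": "paid for lunch"})})
```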

&lt;p&gt;If it works in docker, it should work everywhere else, so most of our work is done. For the Lambda function to fetch this image, it must be deployed to AWS ECR (Elastic Container Registry). As the first step, we need to create a repo to hold our docker image, which can be done programmatically using the AWS CLI as follows:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository --repository-name logbert-lambda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;In order to push our image, we first need to login to ECR from our machine and this requires some identifiers like AWS region and AWS account id which we can get from AWS IAM.&lt;/p&gt;

&lt;p&gt;We can now login to ECR using the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws_region=ap-south-1
aws_account_id=&amp;lt;12 digit id&amp;gt;

aws ecr get-login-password \
--region $aws_region \
| docker login \
--username AWS \
--password-stdin $aws_account_id.dkr.ecr.$aws_region.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Before pushing our image to ECR, keep in mind that by default docker images are pushed to Docker Hub, but here we need to push to AWS ECR so the lambda function can fetch our image. For that, we need to tag (rename) the image in a format that routes it to its ECR repo. The format is as follows:&lt;/p&gt;

&lt;p&gt;{AccountID}.dkr.ecr.{region}.amazonaws.com/{repository-name}&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag logbert-lambda $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/logbert-lambda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Let’s check our docker image list using the &lt;em&gt;docker image ls&lt;/em&gt; command; we should see a docker image with a tag in the above format. Now we are all set to push our image to ECR.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/logbert-lambda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We have reached the final stage of our tutorial: &lt;strong&gt;deploying AWS Lambda using our custom image&lt;/strong&gt;. Now we edit our serverless.yml file, which was created as part of the boilerplate when we created our lambda function. This yml file lets you configure the AWS resources that need to be fired up when deploying the lambda function.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
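&lt;p&gt;The gist embed does not render in this feed; a sketch of what such a serverless.yml might look like is below. YOUR_ACCOUNT_ID and YOUR_DIGEST are placeholders you must replace, and the memory, timeout and route values are illustrative assumptions, not the tutorial’s exact file:&lt;/p&gt;

```yaml
service: serverless-logbert

provider:
  name: aws
  region: ap-south-1
  memorySize: 10240   # the 10GB pod mentioned above
  timeout: 30

functions:
  predict:
    image: YOUR_ACCOUNT_ID.dkr.ecr.ap-south-1.amazonaws.com/logbert-lambda@sha256:YOUR_DIGEST
    events:
      - http:
          path: predict
          method: post
```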

&lt;p&gt;&lt;br&gt;&lt;br&gt;
ECR makes our life super easy here, since we only need to pass the URL path and digest so that lambda can pull our locally tested image when starting the service. &lt;strong&gt;We can get the URL path either using the AWS CLI or by copying it from the ECR console&lt;/strong&gt;; &lt;strong&gt;the digest can be found inside the newly created repo&lt;/strong&gt;. Make sure to replace the image PATH with your own URL path and digest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2APT-NMTLJtouUDYSn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2APT-NMTLJtouUDYSn.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are all set to deploy our lambda function using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command fires up all the resources required for the API to function (AWS API Gateway, the lambda function, an s3 bucket, etc.) using AWS CloudFormation. Once deployment is complete, we will get some logs as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2450%2F0%2Ae-dfMvYRFFDxo46A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2450%2F0%2Ae-dfMvYRFFDxo46A.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are almost done &lt;a href="https://apps.timwhitlock.info/emoji/tables/unicode#emoji-modal" rel="noopener noreferrer"&gt;😁&lt;/a&gt;. Now for the fun part: testing our newly built API. Let’s go back to Postman and use the URL we got from the above serverless deployment log to test it out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2194%2F0%2ACB98nCkRq1ck-3Jf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2194%2F0%2ACB98nCkRq1ck-3Jf.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yes! It worked as expected and took only half a second to fetch the response, and that with CPU inference.&lt;/p&gt;

&lt;p&gt;This serverless API infrastructure has its fair share of pros and cons, &lt;strong&gt;the biggest perk being that it will automatically scale up to thousands of parallel requests without any issues&lt;/strong&gt;. So we don’t have to worry about building a scalable and robust architecture on our own (meaning no one is going to call you in the middle of the night to fix server overloads 😴 🤯).&lt;/p&gt;

&lt;p&gt;At the same time, it’s &lt;strong&gt;not very suitable for building production-ready, mission-critical APIs due to the cold start problem&lt;/strong&gt;, though this can be rectified to some extent by using AWS CloudWatch to keep the lambda service warm. &lt;strong&gt;GPUs are currently not available for AWS Lambda&lt;/strong&gt;, which is a big disappointment 😞 for Deep Learning folks; we can hope to see such features in future iterations.&lt;/p&gt;

&lt;p&gt;The future looks bright for serverless infrastructure when it comes to building AI based MVP (Minimum Viable Products) in a very cost effective way.&lt;/p&gt;

&lt;p&gt;I hope you guys find this post useful. Always open for suggestions and criticisms. Thanks &lt;a href="https://apps.timwhitlock.info/emoji/tables/unicode#emoji-modal" rel="noopener noreferrer"&gt;😁&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=Deploying%20PyTorch%20Model%20as%20a%20Serverless%20Service"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2712%2F1%2AcoRpboSHyAtv5UPafeHzbw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/deploying-pytorch-model-as-a-serverless-service" rel="noopener noreferrer"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>pytorch</category>
      <category>serverless</category>
      <category>deploy</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to monitor AWS account activity with Cloudtrail, Cloudwatch Events and Serverless</title>
      <dc:creator>We're Serverless!</dc:creator>
      <pubDate>Thu, 26 May 2022 15:04:27 +0000</pubDate>
      <link>https://dev.to/serverless_inc/how-to-monitor-aws-account-activity-with-cloudtrail-cloudwatch-events-and-serverless-441j</link>
      <guid>https://dev.to/serverless_inc/how-to-monitor-aws-account-activity-with-cloudtrail-cloudwatch-events-and-serverless-441j</guid>
      <description>&lt;p&gt;Originally posted at &lt;a href="https://www.serverless.com/blog/serverless-cloudtrail-cloudwatch-events"&gt;Serverless&lt;/a&gt; on January 15th, 2018&lt;/p&gt;

&lt;p&gt;CloudTrail and CloudWatch Events are two powerful services from AWS that allow you to monitor and react to activity in your account-including changes in resources or attempted API calls.&lt;/p&gt;

&lt;p&gt;This can be useful for audit logging or real-time notifications of suspicious or undesirable activity.&lt;/p&gt;

&lt;p&gt;In this tutorial, we’ll set up two examples to work with CloudWatch Events and CloudTrail. The first will use standard CloudWatch Events to watch for changes in Parameter Store (SSM) and send notifications to a Slack channel. The second will use custom CloudWatch Events via CloudTrail to monitor for actions to create DynamoDB tables and send notifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up
&lt;/h2&gt;

&lt;p&gt;Before we begin, you’ll need the &lt;a href="https://serverless.com/framework/docs/providers/aws/guide/quick-start/"&gt;Serverless Framework installed&lt;/a&gt; with an AWS account set up.&lt;/p&gt;

&lt;p&gt;The examples below will be in Python, but the logic is pretty straightforward. You can rewrite in any language you prefer.&lt;/p&gt;

&lt;p&gt;If you want to trigger on custom events using CloudTrail, you’ll need to set up a CloudTrail. In the AWS console, navigate to the &lt;a href="https://console.aws.amazon.com/cloudtrail/home"&gt;CloudTrail service&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Click “Create trail” and configure a trail for “write-only” management events:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y-6KhMSo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2514/0%2AezhA2NIXf3sCt1Mo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y-6KhMSo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2514/0%2AezhA2NIXf3sCt1Mo.png" alt="" width="880" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have your trail write to a Cloudwatch Logs log group so you can subscribe to notifications:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oBPjkAKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3948/0%2Az7mca-TsB990o04I.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oBPjkAKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3948/0%2Az7mca-TsB990o04I.png" alt="" width="880" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both examples above post notifications to Slack via the Incoming Webhook app. You’ll need to set up an Incoming Webhook app if you want this to work.&lt;/p&gt;

&lt;p&gt;First, create or navigate to the Slack channel where you want to post messages. Click “Add an app”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rWlko8Tj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3764/0%2AuFegsO7RgQ8lw9uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rWlko8Tj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3764/0%2AuFegsO7RgQ8lw9uj.png" alt="" width="880" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the app search page, search for “Incoming Webhook” and choose to add one. Make sure it’s the room you want.&lt;/p&gt;

&lt;p&gt;After you click “Add Incoming Webhooks Integration”, it will show your Webhook URL. This is what you will use in your serverless.yml files for the SLACK_URL variable.&lt;/p&gt;

&lt;p&gt;If you want to, you can customize the name and icon of your webhook to make the messages look nicer. Below, I’ve used the “rotating-light” emoji and named my webhook “AWS Alerts”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YeOjyEao--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4032/0%2AzrCCHGO72y9TEk05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YeOjyEao--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4032/0%2AzrCCHGO72y9TEk05.png" alt="" width="880" height="474"&gt;&lt;/a&gt;&lt;/p&gt;
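&lt;p&gt;Once you have the webhook URL, posting a message from a Lambda function is just a small HTTP request. A minimal sketch using only the standard library (the function names here are illustrative, not the tutorial’s code; SLACK_URL is the webhook URL from above):&lt;/p&gt;

```python
import json
import urllib.request

def build_slack_payload(text: str) -> bytes:
    # Incoming Webhooks accept a JSON body with a "text" field.
    return json.dumps({"text": text}).encode("utf-8")

def post_to_slack(slack_url: str, text: str) -> int:
    req = urllib.request.Request(
        slack_url,
        data=build_slack_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```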

&lt;p&gt;With that all set up, let’s build our first integration!&lt;/p&gt;

&lt;h2&gt;Monitoring Parameter Store Changes&lt;/h2&gt;

&lt;p&gt;Our first example will post notifications of AWS Parameter Store changes into our Slack channel. Big shout-out to &lt;a href="https://twitter.com/esh"&gt;Eric Hammond&lt;/a&gt; for inspiring this idea; he’s an AWS expert and a great follow on Twitter:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://twitter.com/hashtag/awswishlist?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;*#awswishlist&lt;/a&gt; Ability to trigger AWS Lambda function when an SSM Parameter Store value changes.*&lt;br&gt;
 &lt;em&gt;That could then run CloudFormation update for stacks that use the parameter&lt;/em&gt;&lt;br&gt;
 &lt;em&gt;- Eric Hammond (&lt;a class="mentioned-user" href="https://dev.to/esh"&gt;@esh&lt;/a&gt;) &lt;a href="https://twitter.com/esh/status/946824737585373184?ref_src=twsrc%5Etfw"&gt;December 29, 2017&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Parameter Store (also called SSM, for Simple Systems Manager) is a way to centrally store configuration values, such as API keys and resource identifiers.&lt;/p&gt;

&lt;p&gt;(Check out our &lt;a href="https://serverless.com/blog/serverless-secrets-api-keys/"&gt;previous post&lt;/a&gt; on using Parameter Store in your Serverless applications.)&lt;/p&gt;

&lt;p&gt;SSM integrates directly with CloudWatch Events to expose certain events when they occur. You can see the full list of CloudWatch Events &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html"&gt;here&lt;/a&gt;. In this example, we are interested in the SSM Parameter Store Change event, which is fired whenever an SSM parameter is changed.&lt;/p&gt;

&lt;p&gt;CloudWatch Event subscriptions work by providing a filter pattern to match certain events. If the pattern matches, your subscription will send the matched event to your target.&lt;/p&gt;

&lt;p&gt;In this case, our target will be a Lambda function.&lt;/p&gt;

&lt;p&gt;Here’s an &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html#SSM-Parameter-Store-event-types"&gt;example SSM Parameter Store Event&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
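&lt;p&gt;For reference, a Parameter Store Change event looks roughly like this (the account ID, timestamps, ARN, and parameter name below are placeholders):&lt;/p&gt;

```json
{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "Parameter Store Change",
  "source": "aws.ssm",
  "account": "123456789012",
  "time": "2017-05-22T16:43:48Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ssm:us-east-1:123456789012:parameter/foo"
  ],
  "detail": {
    "operation": "Update",
    "name": "foo",
    "type": "String",
    "description": "Sample Parameter"
  }
}
```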
We need to specify which elements of the Event are important to match for our subscription.

&lt;p&gt;Two elements are important here. First, we want the source to equal aws.ssm. Second, we want the detail-type to equal Parameter Store Change. This is narrow enough to exclude events we don't care about, yet broad enough to capture every Parameter Store change, since we don't filter on any other fields.&lt;/p&gt;

&lt;p&gt;The Serverless Framework makes it really easy to &lt;a href="https://serverless.com/framework/docs/providers/aws/events/cloudwatch-event/"&gt;subscribe to CloudWatch Events&lt;/a&gt;. For the function we want to trigger, we create a cloudwatchEvent event type with a mapping of our filter requirements.&lt;/p&gt;

&lt;p&gt;Here’s an example of our serverless.yml:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
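&lt;p&gt;In outline, the relevant parts look something like this (the service name, runtime, and webhook URL are placeholders to substitute with your own):&lt;/p&gt;

```yaml
service: parameter-store-notifier

provider:
  name: aws
  runtime: python3.9
  environment:
    SLACK_URL: https://hooks.slack.com/services/YOUR/WEBHOOK/URL
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:DescribeParameters
      Resource: "*"

functions:
  parameter:
    handler: handler.parameter
    events:
      - cloudwatchEvent:
          event:
            source:
              - aws.ssm
            detail-type:
              - Parameter Store Change
```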
Notice that the functions block includes our filter from above. There are two other items to note:

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We injected our Slack webhook URL into our environment as SLACK_URL. Make sure you update this with your actual webhook URL if you're following along.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We added an &lt;a href="https://serverless.com/blog/abcs-of-iam-permissions/"&gt;IAM statement&lt;/a&gt; that gives us access to run the DescribeParameters command in SSM. This will let us enrich the changed parameter event by showing what version of the parameter we’re on and who changed it most recently. It &lt;em&gt;does not&lt;/em&gt; provide permissions to read the parameter value, so it’s safe to give access to parameters with sensitive keys.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our serverless.yml says that our function is defined in a handler.py module with a function name of parameter. Let's implement that now.&lt;/p&gt;

&lt;p&gt;Put this into your handler.py file:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
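&lt;p&gt;Here’s a minimal sketch of such a handler (omitting the DescribeParameters enrichment for brevity; the format_message helper is our own naming, not part of any AWS or Slack API):&lt;/p&gt;

```python
import json
import os
import urllib.request


def format_message(event):
    """Turn an SSM Parameter Store Change event into a Slack webhook payload."""
    detail = event.get("detail", {})
    text = "Parameter Store change in {region}: {operation} on parameter '{name}' (type: {type})".format(
        region=event.get("region", "unknown"),
        operation=detail.get("operation", "unknown"),
        name=detail.get("name", "unknown"),
        type=detail.get("type", "unknown"),
    )
    return {"text": text}


def parameter(event, context):
    """Lambda entry point: post the formatted event to Slack."""
    payload = json.dumps(format_message(event)).encode("utf-8")
    req = urllib.request.Request(
        os.environ["SLACK_URL"],
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status}
```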
This function takes the incoming event and assembles it into a format &lt;a href="https://api.slack.com/docs/message-formatting"&gt;expected by Slack&lt;/a&gt; for its webhook. Then, it posts the message to Slack.

&lt;p&gt;Let’s deploy our service:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
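&lt;p&gt;Deployment is a single command from the service directory (assuming the Serverless Framework CLI is installed and your AWS credentials are configured):&lt;/p&gt;

```shell
serverless deploy
```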
Then, let’s alter a parameter in SSM to trigger the event:&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
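&lt;p&gt;For example, with the AWS CLI (the parameter name and value here are arbitrary):&lt;/p&gt;

```shell
aws ssm put-parameter \
  --name test-param \
  --type String \
  --value "hello" \
  --overwrite \
  --region us-east-1
```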
&lt;strong&gt;Note:&lt;/strong&gt; Make sure you’re running the put-parameter command in the same region that your service is deployed in.

&lt;p&gt;After a few minutes, you should get a notification in your Slack channel:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h3LEPwt9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ArNxA16cmmObNS4I_.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h3LEPwt9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ArNxA16cmmObNS4I_.png" alt="" width="664" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💥 Awesome!&lt;/p&gt;
&lt;h2&gt;Monitoring new DynamoDB tables with CloudTrail&lt;/h2&gt;

&lt;p&gt;In the previous example, we subscribed to SSM Parameter Store events. These events are already provided directly by CloudWatch Events.&lt;/p&gt;

&lt;p&gt;However, not all AWS API events are provided by CloudWatch Events. To get access to a broader range of AWS events, we can use &lt;a href="https://aws.amazon.com/cloudtrail/"&gt;CloudTrail&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before you can use CloudTrail events in CloudWatch Event subscriptions, you’ll need to set up CloudTrail to write to a CloudWatch log group. If you need help with this, it’s covered above in the &lt;a href="https://www.serverless.com/blog/serverless-cloudtrail-cloudwatch-events#setting-up"&gt;setting up&lt;/a&gt; section.&lt;/p&gt;

&lt;p&gt;Once you’re set up, you can see the &lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-supported-services.html"&gt;huge list of events&lt;/a&gt; supported by CloudTrail event history.&lt;/p&gt;

&lt;p&gt;Generally, an event will be supported if it meets both of the following requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It is a &lt;em&gt;state-changing&lt;/em&gt; event, rather than a read-only event. Think CreateTable or DeleteTable for DynamoDB, but not DescribeTable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is a &lt;em&gt;management-level&lt;/em&gt; event, rather than a data-level event. For S3, this means CreateBucket or PutBucketPolicy but not PutObject or DeleteObject.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can enable data-level events for S3 and Lambda in your CloudTrail configuration if desired. This will trigger many more events, so enable them with care.&lt;/p&gt;

&lt;p&gt;When configuring a CloudWatch Events subscription for an AWS API call, your pattern will always look something like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
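&lt;p&gt;For our DynamoDB example, the pattern looks roughly like this:&lt;/p&gt;

```json
{
  "source": [
    "aws.dynamodb"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventName": [
      "CreateTable"
    ]
  }
}
```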
There will be a source key matching the particular AWS service you’re tracking. The detail-type will be AWS API Call via CloudTrail. Finally, there will be an eventName array in the detail key that lists one or more event names you want to match.

&lt;p&gt;Pro-tip: Use the &lt;a href="https://console.aws.amazon.com/cloudwatch/home#rules:action=create"&gt;CloudWatch Rules console&lt;/a&gt; to help configure your items the first few times. You can point and click different options and it will show the subscription pattern:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eyLB-YyA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3096/0%2ALut68UBe4IqmoNkk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eyLB-YyA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3096/0%2ALut68UBe4IqmoNkk.png" alt="" width="880" height="670"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s insert our DynamoDB CreateTable pattern into our serverless.yml:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
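&lt;p&gt;In outline (the function name and webhook URL are placeholders):&lt;/p&gt;

```yaml
functions:
  createdTable:
    handler: handler.created_table
    environment:
      SLACK_URL: https://hooks.slack.com/services/YOUR/WEBHOOK/URL
    events:
      - cloudwatchEvent:
          event:
            source:
              - aws.dynamodb
            detail-type:
              - AWS API Call via CloudTrail
            detail:
              eventName:
                - CreateTable
```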
Very similar to the previous example: we’re setting up our CloudWatch Event subscription and passing in our Slack webhook URL to be used by our function.

&lt;p&gt;Then, implement our function logic in handler.py:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
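&lt;p&gt;A minimal sketch of this handler (format_table_message is our own helper name; the field paths follow the CloudTrail event shape):&lt;/p&gt;

```python
import json
import os
import urllib.request


def format_table_message(event):
    """Turn a CloudTrail CreateTable event into a Slack webhook payload."""
    detail = event.get("detail", {})
    table = detail.get("requestParameters", {}).get("tableName", "unknown")
    user = detail.get("userIdentity", {}).get("arn", "unknown")
    text = "New DynamoDB table '{table}' created in {region} by {user}".format(
        table=table,
        region=event.get("region", "unknown"),
        user=user,
    )
    return {"text": text}


def created_table(event, context):
    """Lambda entry point: post the formatted event to Slack."""
    payload = json.dumps(format_table_message(event)).encode("utf-8")
    req = urllib.request.Request(
        os.environ["SLACK_URL"],
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status}
```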
Again, pretty similar to the last example: we’re taking the event, assembling it into a format for Slack messages, then posting to Slack.

&lt;p&gt;Let’s deploy this one:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
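&lt;p&gt;As before (assuming the Serverless Framework CLI and AWS credentials are set up):&lt;/p&gt;

```shell
serverless deploy
```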
And then trigger an event by creating a DynamoDB table via the AWS CLI:&lt;br&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
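&lt;p&gt;For example (the table name and schema are arbitrary; note the minimal provisioned throughput to keep costs down):&lt;/p&gt;

```shell
aws dynamodb create-table \
  --table-name test-table \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
  --region us-east-1
```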
Wait a few moments, and you should get a notification in Slack:

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5gA_a4Py--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A57LT6Wt-eUuxkKtL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5gA_a4Py--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A57LT6Wt-eUuxkKtL.png" alt="" width="401" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aw yeah.&lt;/p&gt;

&lt;p&gt;You could implement some really cool functionality around this, including calculating and displaying the monthly price of the table based on the provisioned throughput, or making sure all infrastructure provisioning is handled through a particular IAM user (e.g. the credentials used with your CI/CD workflows).&lt;/p&gt;

&lt;p&gt;Also, make sure you delete the table so you don’t get charged for it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
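&lt;p&gt;With the AWS CLI (substitute your table name and region):&lt;/p&gt;

```shell
aws dynamodb delete-table \
  --table-name test-table \
  --region us-east-1
```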


&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;There’s a ton of potential for CloudWatch Events, from triggering notifications on suspicious events to performing maintenance work when a new resource is created.&lt;/p&gt;

&lt;p&gt;In a future post, I’d like to explore saving all of these CloudTrail events to S3 to allow for efficient querying of historical data: “Who spun up EC2 instance i-afkj49812jfk?” or “Who allowed 0.0.0.0/0 ingress in our database security group?”&lt;/p&gt;

&lt;p&gt;If you use this tutorial to do something cool, drop it in the comments!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.serverless.com/?view=register&amp;amp;utm_campaign=Console%20Signup&amp;amp;utm_source=dev.to&amp;amp;utm_medium=post&amp;amp;utm_content=How%20to%20monitor%20AWS%20account%20activity%20with%20Cloudtrail%2C%20Cloudwatch%20Events%20and%20Serverless"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uEkW3ymQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2712/1%2AcoRpboSHyAtv5UPafeHzbw.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.serverless.com/blog/serverless-cloudtrail-cloudwatch-events"&gt;https://www.serverless.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>serverless</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
