<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gareth McCumskey</title>
    <description>The latest articles on DEV Community by Gareth McCumskey (@garethmcc).</description>
    <link>https://dev.to/garethmcc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F101010%2F3404384c-52b3-4893-90f7-5478758704a1.jpeg</url>
      <title>DEV Community: Gareth McCumskey</title>
      <link>https://dev.to/garethmcc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/garethmcc"/>
    <language>en</language>
    <item>
      <title>Why local development for serverless is an anti-pattern</title>
      <dc:creator>Gareth McCumskey</dc:creator>
      <pubDate>Wed, 02 Jun 2021 07:56:03 +0000</pubDate>
      <link>https://dev.to/garethmcc/why-local-development-for-serverless-is-an-anti-pattern-1d9b</link>
      <guid>https://dev.to/garethmcc/why-local-development-for-serverless-is-an-anti-pattern-1d9b</guid>
      <description>&lt;p&gt;In the serverless community, individuals and teams spend a lot of time and effort attempting to build an environment that is a replica of the cloud. Why? Because this is what we have always done. When you start your career building applications for the web, we were told you need to have a local development environment on your own machine and you do your work against that environment before pushing to your code repository. &lt;/p&gt;

&lt;p&gt;But I am going to argue that this supposedly absolute requirement for getting up and running when building applications is not only unnecessary in the serverless world but actually harmful.&lt;/p&gt;

&lt;p&gt;Let's start with the whys. Why do we create local development environments in the first place? What purpose do they actually serve?&lt;/p&gt;

&lt;p&gt;If you look back at where we have come from building for the web, we used to exist in a world where our code and scripts were exceedingly minimal and work was essentially done directly on the machines that served our application to the web. Why? Because these machines were often very specialised, impossible to replicate without great expense, and aiming for 100% uptime was not necessarily the biggest goal at that stage. So why not? It's easy to just edit files directly on that remote machine.&lt;/p&gt;

&lt;p&gt;Fast forward a few years and we are now in a position where we need to make changes multiple times a day to an application that must not go down if we can avoid it. Editing directly on production becomes scary, because we would really like to test changes first. &lt;/p&gt;

&lt;p&gt;Luckily, at this stage, a lot of the infrastructure for the web has become commoditised; we can use a regular consumer computer, install the same (or similar) applications on it to simulate the remote environment, and test our application before pushing to the production server.&lt;/p&gt;

&lt;p&gt;However, things couldn't stay this way. Traffic increased, and single machines were soon no longer enough to handle the load that the growth of the Internet created. Clusters of machines were needed, with comparatively complex architectures, to increase both request throughput and resilience to failure as downtime became more and more costly. No longer was the replicated development environment on a developer's machine a pretty-close replica. &lt;/p&gt;

&lt;p&gt;This is where a lot of staging and development environments come from. The thinking is: let developers develop on their local machines as they always have, because that's what they are used to, and we will spin up as close a replica of production as we can to test against, to make sure nothing breaks. Even if it's costly to the business, that's better than downtime.&lt;/p&gt;

&lt;p&gt;The cloud certainly helped a lot here as well; if you can create staging environments on demand and only stand them up when needed, it's not quite as expensive as keeping a development cluster running in parallel in a server rack.&lt;/p&gt;

&lt;p&gt;However, the issue is that our local machines were, at best, only occasionally accurate replicas of the production cluster. The architectures were just too complex to ever hope to replicate locally, which made local testing largely redundant and required developers to constantly push code to the shared staging server for testing. Not to mention that, in teams, this resulted in a lot of stepping on toes and waiting for your turn to test your changes!&lt;/p&gt;

&lt;p&gt;What was really needed was a replica of production for every developer in the team. But with production clusters running multiple virtual machines, load balancers, relational databases, caches, etc, this is cost prohibitive.&lt;/p&gt;

&lt;p&gt;Then containers arrived. Finally! Now we can package up the complexity of our production systems into neat little blocks that don't interfere with each other and we can get closer to production by running them on our own development machines.&lt;/p&gt;

&lt;p&gt;Except they do interfere with each other, and they added huge amounts of complexity for developers to handle and worry about. Expensive engineers should be building features and generating revenue instead of managing their development environments. And it STILL wasn't as accurate a representation of the production environment as it should have been!&lt;/p&gt;

&lt;p&gt;At one point, I was an engineer for an e-commerce organisation that siloed a single developer off for two months to replicate production as a collection of Docker containers we could just install on our machines. The end result was a process that took 30 minutes just to install and required the entire development team to have their hardware upgraded to at least 16 GB of RAM. Running Nginx, Elasticsearch, Redis and MySQL on a single machine apparently uses a lot of memory; who would have thought? And we STILL had constant issues when we thought our code was ready to be tested against the staging environment and it just wasn't.&lt;/p&gt;

&lt;p&gt;This is just one example of many I have to share.&lt;/p&gt;

&lt;p&gt;The TL;DR of the above? We adopted local testing because testing against production became too dangerous, tried to replicate production locally, and failed miserably. Today we are, essentially, &lt;a href="https://www.honeycomb.io/blog/yes-i-test-in-production-and-so-do-you/"&gt;still testing against production&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And now, in the world of serverless development, here we are once again, trying to make things run locally that really shouldn't. And this isn't a collection of virtual machines or Docker containers we can kind of get to run locally with some semblance of accuracy. These are cloud services, most of which have no official way to run locally and probably never will. The emulation techniques used in tools like LocalStack are impressive but not an exact replica of the cloud; they are the best effort someone has made to let us kind of, sort of test these services locally with something resembling the cloud version. Not to mention all the aspects of the cloud (and of distributed application architectures) that can throw a spanner in the works. How do you replicate inter-service latencies, IAM, service limits and the many other aspects of the cloud that &lt;strong&gt;aren't&lt;/strong&gt; related to a specific service?&lt;/p&gt;

&lt;p&gt;We also don't even need to! Tools like the Serverless Framework (I know there are others; I just haven't used them with the same level of familiarity) give you the ability to deploy the &lt;strong&gt;exact same&lt;/strong&gt; configuration of resources we deploy into production in &lt;strong&gt;any other environment&lt;/strong&gt; we choose. Want a shared environment for the developers of the team to test against? Just run the deploy command! Want your own "local" environment to test against? Just run the deploy command!&lt;/p&gt;
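
&lt;p&gt;As a quick sketch of what that looks like in practice (the stage names here are just hypothetical examples), a shared team environment and a personal "local" environment are the same deploy command with a different &lt;code&gt;--stage&lt;/code&gt; flag:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Shared environment for the whole team to test against
serverless deploy --stage staging

# Your own "local" environment, deployed into the real cloud
serverless deploy --stage dev-gareth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;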

&lt;p&gt;Finally! We are in a position where we can 100% replicate the infrastructure in production. And because serverless applications bill for usage, it costs you nothing to deploy them and pennies to run tests against them!&lt;/p&gt;

&lt;p&gt;So why are we still fighting so hard to maintain the local environment? Probably because of a feared loss of productivity. To answer this, I am going to point to a recently published post by a compatriot of mine at Serverless, Inc, who wrote up a great way to look at "local" development for serverless and the very few tools you need to accomplish it. &lt;a href="https://dev.to/aws-builders/developing-against-the-cloud-55o4"&gt;Check it out here&lt;/a&gt;. The time spent managing a local development environment, updating it, and keeping it running is costly in itself. But there is another good reason not to consider it!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's actually bad for your application!&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Consider a group of developers using an emulation tool like LocalStack. It does an OK job of allowing the developers of the team to build and test their serverless applications locally. However, one member of the team spots a really useful cloud service that could be used to build the best possible solution to a problem they are trying to solve. It can improve the reliability of the application as a whole and decrease costs and time to production. However, this service is not (yet) provided by the local emulation tool.&lt;/p&gt;

&lt;p&gt;They now have three choices. Use the service anyway, meaning that testing in the cloud is now an absolute requirement; the application is better for it, but this makes the local testing environment entirely irrelevant. Or don't use the service, and essentially hamstring the efficacy of your application because the local testing environment is sacrosanct. Or, lastly, spend days or maybe even weeks trying to find a way to replicate this service locally, delaying deployment of the feature and &lt;em&gt;still&lt;/em&gt; having a substandard replica of a cloud service to test against, assuming you find a workable solution to begin with.&lt;/p&gt;

&lt;p&gt;What about tools like serverless-offline? Nice and simple, letting you easily test against your HTTP endpoints, right?&lt;/p&gt;

&lt;p&gt;Well, besides the fact that, yet again, this is not an accurate representation of the cloud and completely ignores the oddities of services such as API Gateway, IAM, etc, it is also &lt;strong&gt;only good for HTTP events&lt;/strong&gt;. More and more, we see serverless applications doing far more than acting as glorified REST APIs. You cannot test all the other events that can trigger your Lambda functions.&lt;/p&gt;

&lt;p&gt;Local development seems, at face value, efficient and simple. It is a necessary evil in the traditional web development world because traditional architectures are too costly and unwieldy to replicate exactly for every developer on a team. But serverless architectures cost nothing to deploy, cost minimal amounts (often nothing) to run tests against, and can be exact replicas of production when deployed into the cloud.&lt;/p&gt;

&lt;p&gt;Just because it is familiar doesn't mean it's a good idea. With tools like the Serverless Framework and others offering the ability to &lt;a href="https://www.serverless.com/framework/docs/providers/aws/cli-reference/deploy-function/"&gt;deploy only code in mere seconds&lt;/a&gt;, &lt;a href="https://www.serverless.com/framework/docs/providers/aws/cli-reference/invoke/"&gt;invoke functions&lt;/a&gt; directly from your local machine against the remote Lambda, and even &lt;a href="https://www.serverless.com/framework/docs/providers/aws/cli-reference/logs/"&gt;tail the logs in your terminal&lt;/a&gt; to get instant feedback on errors, you do not need to lose productivity, and you can drastically decrease complexity while increasing accuracy to production.&lt;/p&gt;
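
&lt;p&gt;Strung together, that feedback loop looks something like this (the function name &lt;code&gt;hello&lt;/code&gt; is just a placeholder for one of your own functions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Push only the updated code of a single function (seconds, not minutes)
serverless deploy function --function hello

# Invoke the real, deployed Lambda from your local machine
serverless invoke --function hello --log

# Tail the function's CloudWatch logs in your terminal
serverless logs --function hello --tail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;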

&lt;p&gt;If anyone has any questions, sound off in the comments or hit me up on &lt;a href="https://twitter.com/garethmcc"&gt;Twitter&lt;/a&gt;. My DMs are open and I love discussing serverless topics!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>testing</category>
    </item>
    <item>
      <title>The difficulty with monitoring AWS Lambda functions (and how to solve it)</title>
      <dc:creator>Gareth McCumskey</dc:creator>
      <pubDate>Tue, 20 Aug 2019 08:07:53 +0000</pubDate>
      <link>https://dev.to/garethmcc/the-difficulty-with-monitoring-aws-lambda-functions-and-how-to-solve-it-381d</link>
      <guid>https://dev.to/garethmcc/the-difficulty-with-monitoring-aws-lambda-functions-and-how-to-solve-it-381d</guid>
      <description>&lt;p&gt;If you have spent any time building out a microservices application, you have probably quickly run across the problem of monitoring your services, whether they are configured on container-based infrastructure or Serverless. Having all these individually scoped moving parts makes it that much harder to collate and then analyse log files.&lt;/p&gt;

&lt;p&gt;Solutions to this problem are pretty broad. One of the more general patterns making its way into the purely microservices realm is the idea of a service mesh, usually running as a sidecar module alongside each service. Amongst the other features these service meshes provide, this pattern gives every service a consistent method for publishing log data. These logs can then be gathered and collated in a single place and useful metrics extracted.&lt;/p&gt;

&lt;p&gt;However, in the Serverless world, a service mesh falls short, since we are using a large collection of managed services on which we have no means to configure an additional logging tool.&lt;/p&gt;

&lt;p&gt;So what do we do now? Just give up and assume we will be forced to analyse our CloudWatch logs manually every time an issue arises?&lt;/p&gt;

&lt;p&gt;Well, thankfully, no. Recently I started using a tool provided by the Serverless Framework team that adds monitoring of my Lambda functions and more. The reason this is so compelling to me is not just that I happen to be a part of the team (though it helps), but that the implementation is so frictionless. Being the developers of the framework means the team could incorporate this monitoring capability at a very basic level into an existing Serverless service. There is no need to include an additional library in your functions to instrument them. No need to manually add extra IAM permissions (unless you choose to, in order to make use of the other features of the software). It kind of just works, with minimal setup.&lt;/p&gt;

&lt;p&gt;So if you are interested in finding out more about the Serverless Framework Dashboard and what it offers besides monitoring, Austin Collins, CEO and founder of Serverless Inc, has put together a great three-minute video covering it all at &lt;a href="https://www.youtube.com/watch?v=-Nf0ui3qP2E" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=-Nf0ui3qP2E&lt;/a&gt;. Here, though, we are focussing primarily on the monitoring side of things.&lt;/p&gt;

&lt;p&gt;How &lt;strong&gt;do&lt;/strong&gt; we get set up for monitoring?&lt;/p&gt;

&lt;p&gt;Well, the first step is that we need a Serverless Framework Dashboard account. Go to &lt;a href="https://dashboard.serverless.com" rel="noopener noreferrer"&gt;https://dashboard.serverless.com&lt;/a&gt; to get that set up. What you will get once done is an &lt;code&gt;org&lt;/code&gt; and an &lt;code&gt;app&lt;/code&gt;, as you can see in this image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fxxp0yd74eev2a7knejr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fxxp0yd74eev2a7knejr5.png" alt="App and Org Highlighted"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open up your Serverless service's serverless.yml in your favourite text editor and add the &lt;code&gt;app&lt;/code&gt; and &lt;code&gt;org&lt;/code&gt; properties to it. I usually do this above the &lt;code&gt;service&lt;/code&gt; property:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;enterprise-demo&lt;/span&gt;
&lt;span class="na"&gt;org&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;garethmccumskey&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-email-form&lt;/span&gt;
&lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that, we are almost there. We need our local machine to be able to authenticate to our Serverless Dashboard account when we deploy. To do that, just run &lt;code&gt;sls login&lt;/code&gt;. It will open a window in your default browser to authenticate. Once you see the message&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Serverless: You sucessfully logged in to Serverless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;on the CLI, we can now deploy.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;sls deploy&lt;/code&gt; just like you usually would. This is necessary because it is at this stage that the Serverless Framework can automatically instrument your functions and subscribe to the CloudWatch logs in your account for the functions in your service.&lt;/p&gt;

&lt;p&gt;Now, just a few caveats to point out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you have tried out any other monitoring tool that &lt;em&gt;also&lt;/em&gt; subscribes to your CloudWatch logs, you may get an error about a CloudWatch limit being reached. The solution is either to remove that subscription or to send AWS a nicely worded message via the support tool in the console asking if they would be so kind as to increase your CloudWatch subscription limits. We've heard they are pretty accommodating with this request.&lt;/li&gt;
&lt;li&gt;If you usually deploy via a headless CI/CD system and therefore can't use &lt;code&gt;sls login&lt;/code&gt;, then you can grab yourself some access keys instead and set things up as per &lt;a href="https://serverless.com/framework/docs/dashboard/pipelines#create-an-access-key-in-the-serverless-framework-dashboard" rel="noopener noreferrer"&gt;the docs&lt;/a&gt;. You're welcome :)&lt;/li&gt;
&lt;li&gt;Ummm, yup I think that's it. Onward!&lt;/li&gt;
&lt;/ul&gt;
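
&lt;p&gt;For the CI/CD case, the setup in the docs linked above essentially boils down to exposing a Dashboard access key to the build as an environment variable (the key value below is a placeholder, and the stage name is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Store the key in your CI/CD system's secret store, never in source control
export SERVERLESS_ACCESS_KEY=your-access-key-here
sls deploy --stage prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;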

&lt;p&gt;Open up your service's monitoring in the dashboard by clicking its name and then the stack instance defined by the stage and region it was deployed to. You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Flyk2lgdi7d960hfalqgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Flyk2lgdi7d960hfalqgp.png" alt="Service Monitoring View"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any traffic going through that service, you should see the graphs responding live to invocations and errors as they happen, in real time!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fz9yzmvf1a26f2skvf5mw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fz9yzmvf1a26f2skvf5mw.jpg" alt="Party Time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go ahead! Click around! Take a look at all that this new vista has to offer. But take special note of that alerts section you see on the screen.&lt;/p&gt;

&lt;p&gt;Once you've calmed down a little from all the excitement, there's one more surprise in store: notifications. Who wants to sit and stare at graphs all day? You've got stuff to do! So instead, head back to that original view where you could see all your services and select the notifications tab. You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fiuc91nfntgb1ptdjwgv3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fiuc91nfntgb1ptdjwgv3.png" alt="Notifications Default Screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, what are you waiting for? Click that link. It's asking you to! What you should find is the ability to send yourself (or your team) notifications about any of those alerts I mentioned, via email, Slack, SNS or even a webhook if you so choose. &lt;/p&gt;

&lt;p&gt;Now you have no excuse when someone asks you whether the current average duration of your Lambda functions is above normal. If you didn't get an alert, then things are fine. What about errors? Then turn on the &lt;code&gt;new error type identified&lt;/code&gt; alert notification. Want the whole team to get messages from production, but only the devs to get them from the dev stage? You can do that too: just create one notification limited to the prod stage and another limited to the dev stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fpceutgxtybvft78ojubj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fpceutgxtybvft78ojubj.jpg" alt="Now its party time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And there we go. With that small amount of effort, we instrumented an entire Serverless service and got operational metrics about our invocation rates, durations, errors, memory usage and more. And did I mention you get all of this for free for up to 1,000,000 invocations per month as part of the free tier, so you can kick the tyres extensively?&lt;/p&gt;

&lt;p&gt;Personally, I use the Serverless Framework Dashboard across all my own personal projects. It's gotten to the point where I cannot build Serverless projects without having it turned on by default, because it makes it so much easier to get the alerts and data I need about my service while I am developing it.&lt;/p&gt;

&lt;p&gt;And before I leave you, there is one last thing to mention: a feature that will be released really soon and that excites me incredibly. I'll just drop it here as a screenshot :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ftn9ewsa9advx5il539dh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ftn9ewsa9advx5il539dh.png" alt="Invocation Detail with Spans"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>microservices</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Building applications rapidly with Serverless</title>
      <dc:creator>Gareth McCumskey</dc:creator>
      <pubDate>Mon, 29 Jul 2019 17:51:08 +0000</pubDate>
      <link>https://dev.to/garethmcc/building-applications-rapidly-with-serverless-5ci7</link>
      <guid>https://dev.to/garethmcc/building-applications-rapidly-with-serverless-5ci7</guid>
      <description>&lt;p&gt;Over the last few years, Serverless as an architectural pattern has made some noise. So much so that at one point I decided to go down the rabbit hole and give it a good look. Nearly 4 years since then, I have gotten to the point where I cannot build applications any other way; the advantages of a serverless application just so far outweigh any cons. I have also, during that time, spent a lot of time interacting with the Serverless community, trying to assist others in discovering this, frankly, revolutionary way to build software. So much so that Serverless, Inc, maintainers of the most popular Serverless application development framework, the Serverless Framework, asked me to join the team to do everything I had been doing part-time as a full-time job.&lt;/p&gt;

&lt;p&gt;Now here I am, writing this blog post I should have written years ago, hoping to introduce other developers to the sheer level of productivity and performance building Serverless applications gives you. So instead of spending the first half of my post talking theory and history like so many others, let’s get straight into actually building a simple “Getting Started” application that anyone reading this can follow along with. Why? Well, conceptually Serverless seems very abstract. It’s only when you actually build something for the first time that you realize the true power of building applications this way.&lt;/p&gt;

&lt;p&gt;First, let's get through the most annoying part. We will be building this solution on AWS, so if you don’t have an AWS account then now is the time to sign up for one. But don’t worry. What we will be building today should cost you the princely sum of $0, as AWS provides generous free tiers for the services we will be making use of, and we will come nowhere near those limits.&lt;/p&gt;

&lt;p&gt;To sign up with AWS, go to &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;https://aws.amazon.com/&lt;/a&gt; and click the big orange “Create an AWS account” button. Then just follow the instructions all the way to getting the account activated. &lt;/p&gt;

&lt;p&gt;Awesome. That really was the most annoying part. Now onto the fun stuff. Let’s get ourselves set up with the &lt;a href="https://www.serverless.com/?utm_source=devio&amp;amp;utm_medium=blog&amp;amp;utm_campaign=framework-lifecycle-launch-july" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt;. To install just:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g serverless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need to set up our first service and to do this we will use the brand spanking new onboarding experience. On the CLI, just enter &lt;code&gt;serverless&lt;/code&gt; and hit return. Then answer the questions like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No project detected. Do you want to create a new one? (Y/n):&lt;/strong&gt; &lt;code&gt;Y&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;We pick Node.js from the list&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What do you want to call this project?:&lt;/strong&gt; I am going to name mine &lt;code&gt;serverless-quick-start&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You can monitor, troubleshoot, and test your new service with a free Serverless account.:&lt;/strong&gt; Free monitoring and testing … yes please&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Would you like to enable this? (Y/n):&lt;/strong&gt; Hit Y&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do you want to register? (Y/n):&lt;/strong&gt; If for some reason you have already signed up for a Serverless Framework account, select &lt;code&gt;n&lt;/code&gt;, otherwise choose &lt;code&gt;Y&lt;/code&gt;. Then just provide some credentials for your new account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have run through the onboarding wizard to set up your new service and Serverless Framework account, enter &lt;code&gt;serverless dashboard&lt;/code&gt; into the CLI and you should see something in your browser like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ftnwkp16hqpnpgpt5ihao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ftnwkp16hqpnpgpt5ihao.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Serverless applications are usually made of multiple Serverless services, like the one we bootstrapped with the &lt;code&gt;serverless&lt;/code&gt; command above, each performing some specific task. Think microservices, but with less of the infrastructure headache … actually … none of the infrastructure headache.&lt;/p&gt;

&lt;p&gt;Let's click on &lt;code&gt;profiles&lt;/code&gt; in the top left. You should have only one profile listed: &lt;code&gt;default&lt;/code&gt;. Click it and you should see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fcna5lc77rq2ncywta36f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fcna5lc77rq2ncywta36f.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order for us to create our Serverless application, we need some way for our code and configuration on our local machine to get to our AWS account. If you expand the &lt;code&gt;how to add a role&lt;/code&gt; link, you should see a link for &lt;code&gt;Create a role wizard&lt;/code&gt;. Clicking that will open a new tab in your browser to your AWS account. At this point you just need to click Next through the wizard until you see a notification similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fh2nkzkc8ld3ozdkttf3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fh2nkzkc8ld3ozdkttf3k.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on that blue role name and on the next page you will see a line item labeled as &lt;code&gt;Role ARN&lt;/code&gt;. Copy that entire string that looks something like &lt;code&gt;arn:aws:iam::1234567890:role/serverless-enterprise_serverless-quick-start&lt;/code&gt;. Then go back to the console page in the browser we were on before and paste your ARN into the textbox:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Futxrhpgatd50q6i0geez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Futxrhpgatd50q6i0geez.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;save and exit&lt;/code&gt;. Now, one last step to connect what we are going to build to this new account we just created. Go back to the folder in your terminal where we bootstrapped our service and open the &lt;code&gt;serverless.yml&lt;/code&gt; file in your favorite text editor.&lt;/p&gt;

&lt;p&gt;This file is where we keep all the configuration the Serverless Framework needs to know what to create on our AWS account. It is also where we tell it which organization and application to connect to on our Serverless Framework Enterprise account. To do this, add the following (substituting your own details, obviously) to the top of the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app: myapp
org: garethmccumskey 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So what did we just do with the console? To connect to our AWS account from our local machine, we need credentials with the right permissions to create things like Lambda functions and HTTP endpoints. Now that the service we are building is connected via the &lt;code&gt;app&lt;/code&gt; and &lt;code&gt;org&lt;/code&gt; properties to our Serverless Framework Enterprise account, when we issue a deploy command a temporary set of credentials is created and passed back to our local machine, and the Serverless Framework then uses those credentials to deploy to our AWS account.&lt;/p&gt;

&lt;p&gt;But first we need to create something to actually deploy. With the &lt;code&gt;serverless.yml&lt;/code&gt; file open, let's make some more edits. Find the &lt;code&gt;service&lt;/code&gt; property and change it to a unique name for your new service; I am going to use &lt;code&gt;serverless-quick-start&lt;/code&gt;. Scrolling further down, you can see the provider is set up to be AWS (yes, the Serverless Framework can help you build serverless applications on other providers like Azure, but we aren’t going to look at that this time), and that we are going to use the Node 10 runtime for our code.&lt;/p&gt;
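&lt;p&gt;For reference, after those edits the top of &lt;code&gt;serverless.yml&lt;/code&gt; looks something like this (a sketch using the names chosen in this walkthrough; your own &lt;code&gt;app&lt;/code&gt;, &lt;code&gt;org&lt;/code&gt; and service name will differ):&lt;/p&gt;

```yaml
# Sketch of the top-level settings after the edits above; substitute your own values.
app: myapp
org: garethmccumskey

service: serverless-quick-start

provider:
  name: aws
  runtime: nodejs10.x
```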

&lt;p&gt;Scrolling past all the commented configuration, you should find a portion that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hello&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;handler.hello&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to build serverless applications, we use a FaaS (Functions as a Service) offering that AWS provides, called Lambda. Lambda allows us to upload a single piece of code that gets triggered by an event we set up. Instead of me trying to explain all of this, let's build it and you can see what I mean first hand.&lt;/p&gt;

&lt;p&gt;In our little demo, we are going to create an HTTP endpoint that returns “Hello World!”. Yup, I am entirely unoriginal, and we are doing a Hello World example. To that end, edit the configuration we saw before so that it looks like this (watch the indentation; YAML gets a little angry if you don’t indent correctly):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hello&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;handler.hello&lt;/span&gt;
      &lt;span class="s"&gt;events&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;get&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's open the file &lt;code&gt;handler.js&lt;/code&gt; and edit the content to look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use strict&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hello&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello World!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And with that, drop back to the terminal and enter in &lt;code&gt;serverless deploy&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: If you registered a new account when you initially ran &lt;code&gt;serverless login&lt;/code&gt;, you may need to run &lt;code&gt;serverless login&lt;/code&gt; again to authenticate correctly should you get any error messages.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The deploy command should result in a bunch of stuff in your terminal like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless Enterprise: Safeguards Processing...
Serverless Enterprise: Safeguards Results:

   Summary --------------------------------------------------

   passed - no-secret-env-vars
   passed - allowed-regions
   warned - require-cfn-role
   passed - framework-version
   passed - allowed-stages
   passed - no-wild-iam-role-statements
   warned - allowed-runtimes

   Details --------------------------------------------------

   1) Warned - no cfnRole set
      details: https://git.io/fhpFZ
      Require the cfnRole option, which specifies a particular role for CloudFormation to assume while deploying.


   2) Warned - Runtime of function hello not in list of permitted runtimes: ["nodejs8.10","nodejs6.10","python3.7","python3.6","ruby2.5","java-1.8.0-openjdk","go1.x","dotnetcore2.1","dotnetcore2.0"]
      details: https://git.io/fjfkx
      Limit the runtimes that can be used.


Serverless Enterprise: Safeguards Summary: 5 passed, 2 warnings, 0 errors
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service serverless-quick-start.zip file to S3 (66.46 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
................................................
Serverless: Stack update finished...
Service Information
service: serverless-quick-start
stage: dev
region: us-east-1
stack: serverless-quick-start-dev
resources: 16
api keys:
  None
endpoints:
  GET - https://abcdefg.execute-api.us-east-1.amazonaws.com/dev/hello
functions:
  hello: serverless-quick-start-dev-hello
layers:
  None
Serverless Enterprise: Publishing service to the Enterprise Dashboard...
Serverless Enterprise: Successfully published your service to the Enterprise Dashboard: https://dashboard.serverless.com/tenants/garethmccumskey/applications/myapp/services/serverless-quick-start/stage/dev/region/us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Near the end of all that, under a section labelled &lt;code&gt;endpoints&lt;/code&gt;, a URL is provided (for example: &lt;a href="https://abcdefg.execute-api.us-east-1.amazonaws.com/dev/hello" rel="noopener noreferrer"&gt;https://abcdefg.execute-api.us-east-1.amazonaws.com/dev/hello&lt;/a&gt;). Go ahead and open that in your browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F3ouia52u33kt4xrv3lut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F3ouia52u33kt4xrv3lut.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Soooo… what just happened here? We created a Lambda function that receives GET requests over HTTP at an endpoint. The only code we wrote to do this was a few lines long, but we got a lot more back. Let's look at this in a little more detail to make it apparent how cool this really is.&lt;/p&gt;

&lt;p&gt;The endpoint we now have only accepts GET requests. We could make it accept POST requests and allow the function to receive data in the body. But that’s not all:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The endpoint uses AWS’s API Gateway service which can handle up to 10 000 requests per second by default and can be increased to a higher value via a support request to AWS.&lt;/li&gt;
&lt;li&gt;When this endpoint receives a request, a request object is created and sent to the small piece of code we wrote running on AWS Lambda.&lt;/li&gt;
&lt;li&gt;AWS Lambda by default can run 1000 copies of that code simultaneously and that concurrency can be increased with a request to AWS.&lt;/li&gt;
&lt;li&gt;We are not paying for any of the code we store in AWS nor for the endpoints. The free tier on API Gateway allows for one million API calls per month before any billing happens.&lt;/li&gt;
&lt;li&gt;We are also not paying for the execution time of our function's code. The AWS Lambda free tier allows for 1 million requests per month and 400 000 GB-seconds of compute, billed in 100ms increments. At a 512 MB memory setting, that works out to 800 000 seconds of execution before we get billed, and if we tweaked the memory configuration in our &lt;code&gt;serverless.yml&lt;/code&gt; down to 128 MB we could get as much as 3.2 million seconds of free execution time.&lt;/li&gt;
&lt;li&gt;Because of the way AWS designed API Gateway and AWS Lambda, we also get a fully redundant solution spread across multiple data centers (most AWS regions have three or more availability zones, separated by a few miles and connected via dedicated fiber links). It would take a region-wide catastrophe to take our endpoint down, and even then it might still be up.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, with this simple example, we configured and deployed a highly scalable, highly redundant solution that would be the envy of many a DevOps practitioner, &lt;strong&gt;in about 15 minutes&lt;/strong&gt;. And it costs us nothing unless we use it at volume. We did not have to provision our own servers (hence serverless), install operating systems or runtimes, or set up failover, backups, disaster recovery or load balancing. We don’t need to monitor CPU capacity and memory either.&lt;/p&gt;

&lt;p&gt;To put this into perspective another way, if you were building an application using Express or any other conventional web application framework and had to deploy this to virtual machines on AWS, to get equivalent redundancy and scalability you would need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 EC2 instances of t3.micro (the cheapest option), each in a separate availability zone in the region. This costs $0.0104 per hour each.&lt;/li&gt;
&lt;li&gt;A load balancer to help manage load across all three instances priced at $0.0225 per hour at a minimum.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The total cost of running the above comes to about $22 per month for the three EC2 instances and $16.20 for the load balancer, before any traffic has even made its way into that infrastructure. And don’t forget, getting this all set up probably took a few hours, and it needs to be maintained going forward; if a critical operating system update has to be applied because a new zero-day vulnerability has been discovered, you are the one who needs to make sure the patch goes in.&lt;/p&gt;

&lt;p&gt;And you more than likely need EC2 instances bigger than t3.micros. I would estimate that the average web application serious about serving traffic needs, at a minimum, 3 t3.large instances at $0.0832 per hour, which is about $60 per instance per month. That means instead of $22, you would be spending about $180. Again, this is before any traffic even arrives. And the point of a load balancer is that it lets you scale out and spin up even more EC2 instances, adding to that bill.&lt;/p&gt;
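&lt;p&gt;The back-of-envelope numbers in this comparison are easy to check (using the hourly prices quoted above and roughly 720 hours in a month; the figures are illustrative, and actual AWS pricing varies by region):&lt;/p&gt;

```javascript
// Rough monthly cost arithmetic for the EC2 comparison (~720 hours per month).
const hoursPerMonth = 720;
const monthly = hourly => hourly * hoursPerMonth;

console.log((3 * monthly(0.0104)).toFixed(2)); // three t3.micro instances: 22.46
console.log(monthly(0.0225).toFixed(2));       // load balancer: 16.20
console.log((3 * monthly(0.0832)).toFixed(2)); // three t3.large instances: 179.71

// Lambda free tier: 400,000 GB-seconds divided by the memory allocated per invocation.
const freeSeconds = memoryGb => 400000 / memoryGb;
console.log(freeSeconds(0.5));   // 800000 seconds at 512 MB
console.log(freeSeconds(0.125)); // 3200000 seconds at 128 MB
```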

&lt;p&gt;In contrast, a serverless application costs $0 when idle rather than a fixed monthly bill for instances and a load balancer, and it scales up and down automatically. Don’t have any traffic at 1am when all your customers are asleep? Then why are you paying for anything?&lt;/p&gt;

&lt;p&gt;And since we’re looking at the differences, on your command line run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless logs -f hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get logging as part of the entire solution. With the virtual machine equivalent we discussed, which already costs tens of dollars per month before any traffic, we don’t have any easy way to view our logs yet; that still needs to be configured, which adds to the bill.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F002jr5e2quuuhh7biawz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F002jr5e2quuuhh7biawz.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go back to your Serverless Framework account in the browser, click &lt;code&gt;applications&lt;/code&gt;, expand your application, and you will see the service you just deployed listed. Open your service and feast your eyes on the detailed statistics about it: how many times it was executed, any errors, deployments, cold starts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F2cvqvfahjw3cnwyq3nsn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F2cvqvfahjw3cnwyq3nsn.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Granted this was a very limited example, but if you take a look at the &lt;a href="https://serverless.com/framework/docs/?utm_source=devio&amp;amp;utm_medium=blog&amp;amp;utm_campaign=framework-lifecycle-launch-july" rel="noopener noreferrer"&gt;wealth of documentation available on the Serverless website&lt;/a&gt; as well as the large number of &lt;a href="https://serverless.com/examples/?utm_source=devio&amp;amp;utm_medium=blog&amp;amp;utm_campaign=framework-lifecycle-launch-july" rel="noopener noreferrer"&gt;examples posted by the community&lt;/a&gt;, you can immediately see that serverless applications are good for more than just tiny little GET requests.&lt;/p&gt;

&lt;p&gt;All of this might be quite a bit to take in. And if your interest happens to be piqued, where to now, right? Well, the Serverless Framework has some pretty good documentation to help get you started.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://serverless.com/framework/docs/?utm_source=devio&amp;amp;utm_medium=blog&amp;amp;utm_campaign=framework-lifecycle-launch-july" rel="noopener noreferrer"&gt;Main documentation about the framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverless.com/examples/?utm_source=devio&amp;amp;utm_medium=blog&amp;amp;utm_campaign=framework-lifecycle-launch-july" rel="noopener noreferrer"&gt;Examples to take a look at&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverless.com/blog/?utm_source=devio&amp;amp;utm_medium=blog&amp;amp;utm_campaign=framework-lifecycle-launch-july" rel="noopener noreferrer"&gt;The blog that has a good collection of how to’s and use cases from real companies developing applications&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have any questions at all about Serverless feel free to hit me up here or via &lt;a href="https://twitter.com/garethmcc/" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;. There is also the Serverless Framework community at &lt;a href="https://forum.serverless.com" rel="noopener noreferrer"&gt;the forums&lt;/a&gt; and the &lt;a href="https://www.serverless.com/slack?utm_source=devio&amp;amp;utm_medium=blog&amp;amp;utm_campaign=framework-lifecycle-launch-july" rel="noopener noreferrer"&gt;Slack Workspace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>webdev</category>
      <category>api</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
