<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tom McLaughlin</title>
    <description>The latest articles on DEV Community by Tom McLaughlin (@tmclaughbos).</description>
    <link>https://dev.to/tmclaughbos</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F19096%2F4f5c1802-6144-4911-9d4b-41d614734ce3.jpg</url>
      <title>DEV Community: Tom McLaughlin</title>
      <link>https://dev.to/tmclaughbos</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tmclaughbos"/>
    <language>en</language>
    <item>
      <title>Serverless Ops: Infrastructure As Code With AWS Serverless</title>
      <dc:creator>Tom McLaughlin</dc:creator>
      <pubDate>Tue, 01 May 2018 10:30:00 +0000</pubDate>
      <link>https://dev.to/tmclaughbos/serverless-ops-infrastructure-as-code-with-aws-serverless-45en</link>
      <guid>https://dev.to/tmclaughbos/serverless-ops-infrastructure-as-code-with-aws-serverless-45en</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a continuation of our &lt;a href="https://dev.to/tmclaughbos/serverless-ops-what-do-we-do-when-the-server-goes-away-1mln"&gt;“Serverless Ops: What do we do when the server goes away?”&lt;/a&gt; series on defining the role of operations and DevOps engineers when working with serverless infrastructure.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you like what you read, there's more on the &lt;a href="https://www.serverlessops.io/blog" rel="noopener noreferrer"&gt;ServerlessOps Blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhs-fs%2Fhubfs%2Fblog%2Finfra-as-code.png%3Ft%3D1525144819454%26width%3D600%26name%3Dinfra-as-code.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhs-fs%2Fhubfs%2Fblog%2Finfra-as-code.png%3Ft%3D1525144819454%26width%3D600%26name%3Dinfra-as-code.png" alt="infra-as-code"&gt;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For years, as a part of DevOps we’ve talked about infrastructure as code. As operations we went from hand building systems to automating the work with code. In some cases we even gave software developers access to build their own systems. It has become an integral part of DevOps, but have you thought about what infrastructure as code means, how it’s maybe changed, and what it means to implement in a serverless environment?&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Infrastructure as Code?
&lt;/h2&gt;

&lt;p&gt;At the highest and most generic level, infrastructure as code can be defined as the following:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The provisioning of computer and related resources, typically by IT operations teams, as code as opposed to human provisioning by that IT operations team.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is code that lets us set up a server. It’s code that lets us define and provision an autoscaling group. It’s code that deploys a Kubernetes cluster or Vault.&lt;/p&gt;

&lt;p&gt;That sort of definition is certainly correct, but it leads operations engineers to the wrong conclusion about their own value; it promotes the idea that the value of operations lies mostly in executing engineering work.&lt;/p&gt;

&lt;p&gt;I don’t like this devaluation of operations.&lt;/p&gt;

&lt;p&gt;Too many operations engineers fall into the trap of believing their value is building infrastructure and keeping it running. This is a problem when more and more infrastructure work is being assumed by public cloud providers. What do you do when AWS is able to provide a container management platform or secrets management just as well as, and faster than, you can?&lt;/p&gt;

&lt;p&gt;Instead, here is what I’d like to use as an updated definition:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The distillation of operational knowledge and expertise into code that allows for someone (who may not be the code author) to safely provision and manage infrastructure.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are two aspects of that definition which I think set it apart from the previous.&lt;/p&gt;

&lt;p&gt;First, “&lt;em&gt;distillation of operational knowledge and expertise”&lt;/em&gt;. It is not the ability to deploy or setup something. It’s not the ability to automate the deployment of Kubernetes or Vault (Nothing against Kubernetes or Vault, these are just things I know people like to set up). The average engineer can probably accomplish that in a weekend if they wanted to. The value of an operations engineer is in being able to set up a service in a manner that is manageable, reliable, and maintainable. This is the expertise we have gained through experience, practice, and continuous learning. It is what separates us from those who are inexperienced in our field.&lt;/p&gt;

&lt;p&gt;The second important part of that definition is “&lt;em&gt;someone (who may not be the code author)”&lt;/em&gt;. The ability to set up or configure something is undifferentiated labor. There’s nothing special about the ability to set up a computer system; most engineers can accomplish this work in a weekend. When I first started to learn about infrastructure as code and infrastructure automation, the most exciting prospect to me was the ability to let someone else do this undifferentiated work. If you’ve automated your infrastructure but you are still required to click a button when someone else requests resources, you’re still relying on that undifferentiated work as your value. You’ve just gone from being a manual bottleneck in service delivery to an automated one.&lt;/p&gt;

&lt;p&gt;Anyone with enough time and effort can read documentation and figure out how to automate the deployment of something. As operations engineers, we need to focus on the things not always mentioned in the documentation and what we’ve learned through experience.&lt;/p&gt;

&lt;p&gt;That is what we need to turn into code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code Today
&lt;/h2&gt;

&lt;p&gt;When discussing infrastructure as code today, we often think of Puppet, Chef, etc. for managing our host infrastructure. In the Puppet infrastructures I managed, we used the &lt;a href="https://www.craigdunn.org/2012/05/239/" rel="noopener noreferrer"&gt;Puppet roles and profiles&lt;/a&gt; pattern. (If you’re a Chef user, the pattern is &lt;a href="https://blog.chef.io/2017/02/14/writing-wrapper-cookbooks/" rel="noopener noreferrer"&gt;Chef wrapper cookbooks&lt;/a&gt;.) We divided configuration management into three layers of classes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Role class&lt;/li&gt;
&lt;li&gt;Profile class&lt;/li&gt;
&lt;li&gt;Service class&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At the bottom was the service class, which we considered undifferentiated labor. For almost everything we needed, there existed a Puppet module on &lt;a href="https://forge.puppet.com/" rel="noopener noreferrer"&gt;Puppet Forge&lt;/a&gt;. This code was not special or unique to our organization, and we felt that recreating the pattern below with our own Nginx class, when a suitable module already existed, was not worth our time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class nginx {
  package { 'nginx':
    ensure =&amp;gt; present
  }

  file { '/etc/nginx/nginx.conf':
    ensure =&amp;gt; present,
    owner =&amp;gt; 'root',
    group =&amp;gt; 'root',
    mode =&amp;gt; '0644',
    source =&amp;gt; 'puppet:///modules/nginx/nginx.conf',
    require =&amp;gt; Package['nginx'],
    notify =&amp;gt; Service['nginx']
  }

  service { 'nginx':
    ensure =&amp;gt; running,
    enable =&amp;gt; true,
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not to mention, often the person who wrote the Puppet Forge module demonstrated a level of understanding of the service beyond what we possessed. Of course at times, we forked the module if we took issue with how it worked or saw bugs. But this was ultimately code we didn’t want to write or own.&lt;/p&gt;

&lt;p&gt;In the middle of those classes was the profile class; this was the most important class to us in operations. The profile class was where we turned our operational knowledge into infrastructure code. This required me to understand both Nginx and its context within my organization. Coding the &lt;em&gt;my::nginx&lt;/em&gt; class below was a better use of our time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class my::nginx {

  # From Puppet Forge module
  class { '::nginx': }

  # Status endpoint for Sensu Check
  nginx::resource::server { "${name}.${::domain} ${name}":
    ensure =&amp;gt; present,
    listen_port =&amp;gt; 443,
    www_root =&amp;gt; '/var/www/',
    ssl =&amp;gt; true,
    ssl_cert =&amp;gt; '/path/to/wildcard_mydomain.crt',
    ssl_key =&amp;gt; '/path/to/wildcard_mydomain.key',
  }

  nginx::resource::location { "${name}_status":
    ensure =&amp;gt; present,
    ssl =&amp;gt; true,
    ssl_only =&amp;gt; true,
    server =&amp;gt; "${name}.${::domain} ${name}",
    stub_status =&amp;gt; true
  }

  # Sensu checks
  package { 'sensu-plugins-nginx':
    provider =&amp;gt; sensu_gem
  }

  sensu::check { 'nginx-status':
    handlers =&amp;gt; 'default',
    command =&amp;gt; 'check-nginx-status.rb -p 443',
    custom =&amp;gt; {
      refresh =&amp;gt; 60,
      occurrences =&amp;gt; 2,
    },
  }

  # Log rotation
  logrotate::rule { 'nginx_access':
    path =&amp;gt; '/var/log/nginx/access.log',
    rotate =&amp;gt; 5,
    rotate_every =&amp;gt; 'day',
    postrotate =&amp;gt; 'service nginx reload',
  }

  logrotate::rule { 'nginx_error':
    path =&amp;gt; '/var/log/nginx/error.log',
    rotate =&amp;gt; 5,
    rotate_every =&amp;gt; 'day',
    postrotate =&amp;gt; 'service nginx reload',
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Puppet class above leaves the package, config, and service pattern work to a module from Puppet Forge. I spent my time instead building a Puppet class that consumed what was obtained from Puppet Forge and added my knowledge of running reliable services and Nginx to it.&lt;/p&gt;

&lt;p&gt;The Nginx server forces SSL and points to the location of our SSL certificates on the host because we don’t allow unencrypted web traffic in the infrastructure. The &lt;em&gt;my::nginx&lt;/em&gt; class adds a status check endpoint using a native Nginx capability and installs a Sensu plugin to check the endpoint because we use Sensu for monitoring. Lastly it configures log rotation to help prevent the host’s disk from becoming full. A check for disk space would already be configured on all hosts as a part of the standard OS setup.&lt;/p&gt;

&lt;p&gt;Finally, at the top was the role class. This was a collection of profile classes. A role class represented a business function. We didn’t deploy Nginx; we deployed a web application that was served by Nginx. Our goal with role classes was to provide the ability to quickly compose one from profile classes and deploy a new service. All the while, we could maintain a high degree of confidence in success because of the work we put in at the profile layer to create a manageable, reliable, and maintainable service. Unlike other organizations, we didn’t want operations to be gatekeepers.&lt;/p&gt;
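
&lt;p&gt;As an illustration, a role class was often little more than a composition of profile classes. Here is a sketch (the &lt;em&gt;my::webapp&lt;/em&gt; profile name is hypothetical, not from our actual codebase):&lt;/p&gt;

```puppet
# Role class: represents a business function by composing profile classes.
# "my::nginx" is the profile class shown above; "my::webapp" is hypothetical.
class role::webapp {
  include my::nginx
  include my::webapp
}
```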

&lt;p&gt;Your job as an ops engineer is to distill your expertise into code at the profile (Puppet) or wrapper (Chef) level, or whatever middle abstraction layer exists in your tooling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhs-fs%2Fhubfs%2Fblog%2Finfra-as-code-you-are-here.png%3Ft%3D1525144819454%26width%3D600%26height%3D291%26name%3Dinfra-as-code-you-are-here.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhs-fs%2Fhubfs%2Fblog%2Finfra-as-code-you-are-here.png%3Ft%3D1525144819454%26width%3D600%26height%3D291%26name%3Dinfra-as-code-you-are-here.png" alt="infra-as-code-you-are-here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That is what we did above at the profile layer. We understood the operational issues around Nginx and web servers, and handled them with Puppet code. We did this by setting reasonable defaults, ensuring proper monitoring, and adding configuration to prevent a foreseeable issue. This is valuable work. It is a combination of work that can’t be readily found on the internet because of your unique requirements, and work that incorporates your particular valuable skills as an operations engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code in AWS
&lt;/h2&gt;

&lt;p&gt;Let’s now take a look at infrastructure as code in the AWS management space. Let’s establish two things up front:&lt;/p&gt;

&lt;p&gt;First, AWS provides a configuration management system called CloudFormation, which already provides the ability to configure any AWS resource. There is no work for you to perform in order to give someone the ability to create an S3 bucket with CloudFormation. This is roughly equivalent to AWS owning the service-level work.&lt;/p&gt;
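
&lt;p&gt;For example, this is all the CloudFormation a developer needs in order to create an S3 bucket:&lt;/p&gt;

```yaml
Resources:
  MyBucket:
    Type: "AWS::S3::Bucket"
```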

&lt;p&gt;Second, with the infrastructure definition living alongside the application code, a developer doesn’t need you in order to make changes to their application. This is roughly equivalent to developers now fully owning the role layer.&lt;/p&gt;

&lt;p&gt;Much of the undifferentiated work of infrastructure as code doesn’t even exist as an option for you to perform once you get to AWS, and your ability to be a gatekeeper becomes substantially limited. We need to define what the profile-layer work looks like with CloudFormation. What does turning our operational expertise into code mean when managing AWS resources and going serverless?&lt;/p&gt;

&lt;p&gt;We can start with a basic SQS resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  MyQueue:
    Type: "AWS::SQS::Queue"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the least amount of CloudFormation required to provision an SQS queue. Now let’s begin adding our operational knowledge to create something that is manageable, reliable, and maintainable. If you’re familiar with CloudFormation you may see some issues in the following examples; we’ll get to those afterwards.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reasonable Configuration
&lt;/h3&gt;

&lt;p&gt;We’ll begin by providing reasonable configuration of the SQS queue. In just about every organization, there exist rules about how services are implemented. Do you have a reason to enable server-side encryption across queues in your organization? If you do, you should probably not be leaving it up to every individual developer to remember to enable it. The CloudFormation to set up an SQS queue with server-side encryption should now look roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Mappings:
  KmsMasterKeyId:
    us-east-1:
      Value: "arn:aws:kms:us-east-1:123456789012:alias/aws/sqs"
    us-east-2:
      Value: "arn:aws:kms:us-east-2:123456789012:alias/aws/sqs"

Resources:
  MyQueue:
    Type: "AWS::SQS::Queue"
    Properties:
      KmsMasterKeyId:
        Fn::FindInMap:
          - KmsMasterKeyId
          - Ref: "AWS::Region"
          - "Value"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting the KMS key value is now handled automatically through a lookup based on the region the SQS queue is being created in. Neither the software engineer nor I need to worry about including this configuration.&lt;/p&gt;

&lt;p&gt;Now let’s move on to the &lt;em&gt;MessageRetentionPeriod&lt;/em&gt; setting on the queue; the default value is four days. Think about that for a moment. Is that sort of timeout actually useful for you? Should you be failing faster and dropping the message to a dead letter queue so you can attempt to recover sooner? Can a one-size-fits-all value even be possible? Let’s add a configurable parameter with a reasonable default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameters:
  MessageRetentionPeriod:
    Type: Number
    Description: "SQS message retention period"
    Default: 1800

Mappings:
  KmsMasterKeyId:
    us-east-1:
      Value: "arn:aws:kms:us-east-1:123456789012:alias/aws/sqs"
    us-east-2:
      Value: "arn:aws:kms:us-east-2:123456789012:alias/aws/sqs"

Resources:
  MyQueue:
    Type: "AWS::SQS::Queue"
    Properties:
      KmsMasterKeyId:
        Fn::FindInMap:
          - KmsMasterKeyId
          - Ref: "AWS::Region"
          - "Value"
      MessageRetentionPeriod:
        Ref: MessageRetentionPeriod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’ve added the &lt;em&gt;MessageRetentionPeriod&lt;/em&gt; parameter with a 30 minute default. However, if a system requires a longer processing period or a faster failure, the value is still configurable for these outlier systems.&lt;/p&gt;

&lt;p&gt;We’ve provided what we consider to be reasonable configuration of the SQS queue within our organization. We’ve done this in a manner that sets these values automatically so that we don’t need to think about this when creating a new queue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reliability
&lt;/h3&gt;

&lt;p&gt;We can now move on to ensuring the reliability of the SQS queue. We’re not responsible for operating SQS; we’re responsible for ensuring the free and regular flow of messages through our application via an SQS queue. We’ll check for messages being produced onto the queue and for messages being consumed from the queue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Parameters:
  MessageRetentionPeriod:
    Type: Number
    Description: "SQS message retention period"
    Default: 1800
  AlarmTopic:
    Type: String
    Description: "CloudWatch Alarms topic name"
    Default: "CloudWatchAlarmsTopic"
  QueueLowMessagesReceivedThreshold:
    Type: String
    Description: "Low message alarm threshold"
  QueueDepthAlarmThreshold:
    Type: String
    Description: "SQS queue depth alarm value"

Mappings:
  KmsMasterKeyId:
    us-east-1:
      Value: "arn:aws:kms:us-east-1:123456789012:alias/aws/sqs"
    us-east-2:
      Value: "arn:aws:kms:us-east-2:123456789012:alias/aws/sqs"

Resources:
  MyQueue:
    Type: "AWS::SQS::Queue"
    Properties:
      KmsMasterKeyId:
        Fn::FindInMap:
          - KmsMasterKeyId
          - Ref: "AWS::Region"
          - "Value"
      MessageRetentionPeriod:
        Ref: MessageRetentionPeriod

  QueueLowMessagesReceived:
    Type: "AWS::CloudWatch::Alarm"
    Properties:
      AlarmDescription: "Alarm on messages received lower than normal."
      Namespace: "AWS/SQS"
      MetricName: "NumberOfMessagesSent"
      Dimensions:
        - Name: "QueueName"
          Value:
            Fn::GetAtt:
              - "MyQueue"
              - "QueueName"
      Statistic: "Sum"
      Period: "300" # This is as granular as we get from AWS.
      EvaluationPeriods: "3"
      Threshold:
        Ref: "QueueLowMessagesReceivedThreshold"
      ComparisonOperator: "LessThanThreshold"
      AlarmActions:
        - Ref: "AlarmTopic"

  QueueDepth:
    Type: "AWS::CloudWatch::Alarm"
    Properties:
      AlarmDescription: "Alarm on high queue depth."
      Namespace: "AWS/SQS"
      MetricName: "ApproximateNumberOfMessagesVisible"
      Dimensions:
        - Name: "QueueName"
          Value:
            Fn::GetAtt:
              - "MyQueue"
              - "QueueName"
      Statistic: "Sum"
      Period: "300" # This is as granular as we get from AWS.
      EvaluationPeriods: "1"
      Threshold:
        Ref: "QueueDepthAlarmThreshold"
      ComparisonOperator: "GreaterThanThreshold"
      AlarmActions:
        - Ref: "AlarmTopic"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two CloudWatch alarms, along with some parameters, have been added. The CloudWatch alarm &lt;em&gt;QueueLowMessagesReceived&lt;/em&gt; checks to ensure that messages are being produced onto the queue. The new &lt;em&gt;QueueLowMessagesReceivedThreshold&lt;/em&gt; parameter has no default and requires you to take a few moments to think about your system’s traffic pattern.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;QueueDepth&lt;/em&gt; CloudWatch alarm will alert if the number of approximate messages in the queue is above a given threshold, indicating that consumers are unable to keep up with the rate of messages. Additionally, the &lt;em&gt;QueueDepthAlarmThreshold&lt;/em&gt; parameter does not have a default value in order to force you to think about expected message rates.&lt;/p&gt;

&lt;p&gt;There’s also a parameter value used by both alarms to pass alarm state changes to an SNS topic. That SNS topic routes to your monitoring and alerting system to let you know when there is an issue with this SQS queue. You as an ops person should own that alerting pipeline in your environment.&lt;/p&gt;

&lt;p&gt;We’re no longer deploying just an SQS queue; we’re deploying a queue along with CloudWatch resources in order to help us maintain the reliability of that queue. This is just like adding the Sensu checks for the Nginx service in the Puppet example earlier. Most of all, this is applying our operational expertise in understanding how this service could fail in our application.&lt;/p&gt;

&lt;h3&gt;
  
  
  System Patterns
&lt;/h3&gt;

&lt;p&gt;Lastly, providing resource patterns and an explanation of when to use the pattern is a part of the job of operations. When all you have is a hammer, everything becomes a nail. Preventing the misapplication of design patterns is something that we should be looking out for.&lt;/p&gt;

&lt;p&gt;When do you use an SQS queue? What about SNS? Both together? And we haven’t even gotten to Kinesis. Sometimes, the exact AWS resource patterns to use aren’t properly thought out. Engineers often end up reusing what they did last time because that’s what they know.&lt;/p&gt;

&lt;p&gt;Right now, the options for this are a little limited, but shortly we’ll discuss what is coming in the serverless tools ecosystem to help us provide a library of patterns that guides engineers toward the right choices.&lt;/p&gt;

&lt;h2&gt;
  
  
  State of AWS Tools
&lt;/h2&gt;

&lt;p&gt;What I showed above may seem great, but it’s not immediately usable today. Remember: operations should be turning their knowledge into code that can be consumed by others. You could possibly achieve this with CloudFormation nested stacks, but you’d have to build your own registry of CloudFormation snippets and a discovery mechanism. If you’re familiar with CloudFormation you may have noticed this issue. Unlike the Puppet example earlier, there is no readily available means of providing an infrastructure as code profile layer for AWS services. Let’s talk about what we can do today and what’s coming.&lt;/p&gt;
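
&lt;p&gt;A nested-stack version of that approach might look like the following sketch; the TemplateURL here is hypothetical, and you would still need to host, version, and publicize these templates yourself:&lt;/p&gt;

```yaml
Resources:
  # Consume a shared, operations-owned queue pattern as a nested stack.
  QueueStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      # Hypothetical location of your organization's template registry
      TemplateURL: "https://s3.amazonaws.com/example-org-cfn-registry/sqs-queue.yml"
      Parameters:
        MessageRetentionPeriod: "1800"
```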

&lt;h3&gt;
  
  
  What You Can Do Today with Serverless
&lt;/h3&gt;

&lt;p&gt;If you’re looking for work to do today, then look at &lt;a href="https://www.serverless.com" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt;. It provides the capabilities to make the most of your operational knowledge. Serverless Framework is built on top of CloudFormation, so the syntax isn’t wildly different, and it adds several features that make it easier to work with.&lt;/p&gt;

&lt;p&gt;Start with &lt;a href="https://github.com/serverless/plugins" rel="noopener noreferrer"&gt;Serverless Framework’s plugin&lt;/a&gt; capabilities. This capability alone is one of the main reasons I love this tool for managing serverless applications. If your use case is common enough, then there’s a good chance a plugin may already exist. If a plugin doesn’t exist, learn the plugin API and write some JavaScript to make it exist.&lt;/p&gt;
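
&lt;p&gt;A minimal plugin is just a JavaScript class that registers hooks against the framework’s lifecycle events. Here’s a rough sketch, assuming the v1-era plugin API; the plugin name and hook choice are illustrative, not from a real plugin:&lt;/p&gt;

```javascript
'use strict';

// Sketch of a Serverless Framework plugin (v1-era API).
// The plugin name and the chosen lifecycle hook are illustrative.
class QueueAlarmsPlugin {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;

    // Run after compiled CloudFormation resources are assembled, which is
    // where additional alarm resources could be injected into the template.
    this.hooks = {
      'after:package:compileEvents': this.addAlarms.bind(this),
    };
  }

  addAlarms() {
    this.serverless.cli.log('Adding CloudWatch alarms to compiled template...');
  }
}

module.exports = QueueAlarmsPlugin;
```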

&lt;p&gt;In fact, the example CloudWatch alarm for &lt;em&gt;QueueDepth&lt;/em&gt; was inspired by the &lt;a href="https://github.com/sbstjn/serverless-sqs-alarms-plugin" rel="noopener noreferrer"&gt;serverless-sqs-alarms-plugin&lt;/a&gt;, which I regularly use. The plugin could also be extended to handle the &lt;em&gt;QueueLowMessagesReceived&lt;/em&gt; alarm, and at some point I should submit a PR to do so.&lt;/p&gt;

&lt;p&gt;The downside of this approach is that it requires someone to remember to add the plugin and configure it. You’d still need to perform code reviews to ensure the alarms are in place. However, the plugin does greatly reduce the amount of configuration that one needs to write. Just compare the following CloudFormation and Serverless Framework snippets.&lt;/p&gt;

&lt;p&gt;CloudFormation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;QueueDepth:
    Type: "AWS::CloudWatch::Alarm"
    Properties:
      AlarmDescription: "Alarm on high queue depth."
      Namespace: "AWS/SQS"
      MetricName: "ApproximateNumberOfMessagesVisible"
      Dimensions:
        - Name: "QueueName"
          Value:
            Fn::GetAtt:
              - "MyQueue"
              - "QueueName"
      Statistic: "Sum"
      Period: "300" # This is as granular as we get from AWS.
      EvaluationPeriods: "1"
      Threshold:
        Ref: "QueueDepthAlarmThreshold"
      ComparisonOperator: "GreaterThanThreshold"
      AlarmActions:
        - Ref: "AlarmTopic"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Serverless Framework:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;custom:
  sqs-alarms:
    - queue: MyQueue
      topic: AlarmTopic
      thresholds:
        - 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Serverless Framework templates are a feature that I don’t think gets enough attention. I became so tired of initializing every new AWS Lambda Python 3 project to match the way I create my services that I created my own template project. Now I can initialize a new project with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless create -u https://github.com/ServerlessOpsIO/sls-aws-python-36 -p |PATH| -n |NAME|
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My &lt;a href="https://github.com/ServerlessOpsIO/sls-aws-python-36/" rel="noopener noreferrer"&gt;AWS Lambda Python 3 template&lt;/a&gt; handles standard setup such as logging, project stage and AWS profile handling based on how I segregate stages and accounts, a plugin to handle Python dependencies, and a list of required standard dependencies. Eventually I plan on adding a unit, integration, and load testing skeleton configuration. The purpose of all this is to provide the ability to quickly set up a project with best practices already established. I want to focus on the application and not all this boilerplate setup.&lt;/p&gt;

&lt;p&gt;Additionally, I also have a &lt;a href="https://github.com/ServerlessOpsIO/sls-aws-s3-website" rel="noopener noreferrer"&gt;template for creating an AWS S3 hosted website&lt;/a&gt;. I created this higher order application pattern because I find it usable enough and something I will probably wish to repeat.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You Will Be Able to Do Tomorrow
&lt;/h3&gt;

&lt;p&gt;What we’ll be able to do tomorrow is something I’m very interested in. Right now, there’s &lt;a href="https://aws.amazon.com/serverless/serverlessrepo/" rel="noopener noreferrer"&gt;AWS Serverless Application Repository&lt;/a&gt; (SAR) and &lt;a href="https://github.com/serverless/components" rel="noopener noreferrer"&gt;Serverless Framework’s Components&lt;/a&gt;. Both of these are great and nearly provide what I’m looking for.&lt;/p&gt;

&lt;p&gt;I’ve talked about &lt;a href="https://dev.to/tmclaughbos/using-nanoservices-to-build-serverless-applications-17ci"&gt;AWS Serverless Application Repository and nanoservices&lt;/a&gt; before. Right now, most people are publishing small fully working applications, but some of us have gone ahead and started to see what we can create. We’ve created nanoservices such as our &lt;a href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:641494176294:applications~ApplicationCostMonitoring" rel="noopener noreferrer"&gt;service to parse the AWS Cost and Usage report&lt;/a&gt;, or a service from AWS to &lt;a href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:077246666028:applications~aws-serverless-twitter-event-source" rel="noopener noreferrer"&gt;fetch tweets with a certain phrase or word from the Twitter stream&lt;/a&gt;. Eventually, you will be able to use SAR to construct applications. AWS SAR doesn’t yet have the ability to nest nanoservices, which is where the next tool appears to excel.&lt;/p&gt;

&lt;p&gt;Just recently, Serverless Framework released Components. I’m excited for this! While similar to AWS SAR, it lets you nest components, which is key to recreating the roles, profiles, and services pattern of configuration management. Most of the &lt;a href="https://github.com/serverless/components/tree/master/examples" rel="noopener noreferrer"&gt;current examples in the GitHub repository&lt;/a&gt; are higher order services akin to roles. The &lt;a href="https://github.com/serverless/components/tree/master/registry" rel="noopener noreferrer"&gt;current components in the registry&lt;/a&gt; are limited, but I know they’re working on adding more. As the component registry grows, I can start combining them to create reusable resource patterns that require minimal configuration, but also provide flexibility to meet the needs of engineers.&lt;/p&gt;

&lt;p&gt;With Serverless Framework Components, we will &lt;em&gt;finally&lt;/em&gt; be able to empower developers to create an SQS queue with reasonable configuration and appropriate monitoring without the need for our regular oversight! As operations engineers, our work will continue with building out the library of patterns. It won’t just be documenting the inputs and outputs of the different patterns; instead, it’ll be documenting when the pattern should be used to satisfy technical concerns, as well as its impact on business concerns.&lt;/p&gt;

&lt;p&gt;If you’re used to writing system documentation, writing architecture pattern documentation will be similar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhs-fs%2Fhubfs%2Fblog%2Finfra-as-code-sqs-pattern.png%3Ft%3D1525144819454%26width%3D720%26height%3D482%26name%3Dinfra-as-code-sqs-pattern.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhs-fs%2Fhubfs%2Fblog%2Finfra-as-code-sqs-pattern.png%3Ft%3D1525144819454%26width%3D720%26height%3D482%26name%3Dinfra-as-code-sqs-pattern.png" alt="infra-as-code-sqs-pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Your value is not in the setup of infrastructure. Your value is turning your operational knowledge and expertise into reusable code.&lt;/p&gt;

&lt;p&gt;Have thoughts on this? Find me on Twitter at &lt;a href="https://twitter.com/tmclaughbos" rel="noopener noreferrer"&gt;@tmclaughbos&lt;/a&gt;, or visit the &lt;a href="https://www.serverlessops.io/" rel="noopener noreferrer"&gt;ServerlessOps homepage&lt;/a&gt; and chat via the website. There are also more great posts like this on the &lt;a href="https://www.serverlessops.io/blog" rel="noopener noreferrer"&gt;ServerlessOps Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>serverless</category>
      <category>operations</category>
      <category>devops</category>
    </item>
    <item>
      <title>Using Nanoservices To Build Serverless Applications</title>
      <dc:creator>Tom McLaughlin</dc:creator>
      <pubDate>Tue, 20 Mar 2018 10:00:00 +0000</pubDate>
      <link>https://dev.to/tmclaughbos/using-nanoservices-to-build-serverless-applications-17ci</link>
      <guid>https://dev.to/tmclaughbos/using-nanoservices-to-build-serverless-applications-17ci</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhs-fs%2Fhubfs%2Fblog%2Fnanoservices.png%3Ft%3D1521502962558%26width%3D800%26height%3D419%26name%3Dnanoservices.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhs-fs%2Fhubfs%2Fblog%2Fnanoservices.png%3Ft%3D1521502962558%26width%3D800%26height%3D419%26name%3Dnanoservices.png" alt="nanoservices.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://aws.amazon.com/blogs/aws/now-available-aws-serverless-application-repository/" rel="noopener noreferrer"&gt;recent general availability status&lt;/a&gt; of &lt;a href="https://aws.amazon.com/serverless/serverlessrepo/" rel="noopener noreferrer"&gt;AWS Serverless Application Repository (SAR)&lt;/a&gt; represents a milestone, and facilitates a significant leap in how we will build applications in the coming years. SAR lets us release reusable domain logic — nanoservices — that we can use to compose our own functional applications. I’m not alone in this thought. Over the past few weeks, I’ve talked with or seen multiple people in the serverless space independently arriving at similar conclusions. &lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-975060195264131074-228" src="https://platform.twitter.com/embed/Tweet.html?id=975060195264131074"&gt;
&lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-975778546906902528-873" src="https://platform.twitter.com/embed/Tweet.html?id=975778546906902528"&gt;
&lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;People are beginning to want to search for domain logic, whether open source or inner source code, the same way they use GitHub, NPM, PyPI, etc. The ability to easily search for high-level code that can be assembled into usable applications with minimal extra coding effort will have a significant impact on the rate at which we can deliver new services in our environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is a Nanoservice?
&lt;/h2&gt;

&lt;p&gt;Let’s start with a definition of what a nanoservice is. You may see the term and think it’s just a microservice that has been broken down into very small pieces. With such a loose definition you end up just quibbling over size. Instead, let’s assign some more definitive characteristics.&lt;/p&gt;

&lt;p&gt;A nanoservice is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployable&lt;/li&gt;
&lt;li&gt;Reusable&lt;/li&gt;
&lt;li&gt;Useful&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A nanoservice being deployable means it also contains its own infrastructure definition, which tooling can use to deploy the service. This definition handles every resource within the nanoservice’s boundaries. At those boundaries, the nanoservice can be told where to find another nanoservice (e.g., a parameter value that helps it find the event source in another nanoservice that will produce a trigger), and at the other end it exports resource information so another service can use it (e.g., an event source name another nanoservice can consume).&lt;/p&gt;
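
&lt;p&gt;In an AWS SAM template, for example, those boundaries might look like a parameter naming the upstream event source and an output exporting this service’s own. The resource and parameter names below are illustrative:&lt;/p&gt;

```yaml
# Illustrative SAM fragment: a parameter in and an output out mark the
# nanoservice's boundaries. Names are hypothetical.
Parameters:
  UpstreamTopicArn:
    Type: String              # where to find the nanoservice that triggers us
Resources:
  LineItemTopic:
    Type: AWS::SNS::Topic     # event source this nanoservice produces
Outputs:
  LineItemTopicArn:
    Value: !Ref LineItemTopic # exported so a downstream nanoservice can subscribe
```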

&lt;p&gt;A nanoservice should be reusable; it should not necessarily be tied to its current use case. Reusability can be in tension with the next characteristic, usefulness, but nothing inherently keeps a nanoservice from being reused except the imagination to build a different application with it.&lt;/p&gt;

&lt;p&gt;Finally, the service is useful. In particular, it has domain logic that solves a problem or does something. This differentiates it from a generic software library. &lt;a href="https://boto3.readthedocs.io/" rel="noopener noreferrer"&gt;Boto 3&lt;/a&gt; is a library that gives you the ability to interact with AWS and, for instance, fetch an S3 object. A nanoservice, on the other hand, knows when triggered which object to fetch, how to parse the data’s format, and how to publish that data so another nanoservice may consume it, all while handling mundane details like errors. To me, usefulness is what distinguishes a nanoservice the most. A person needs only to deploy it, and potentially combine it with another nanoservice, in order to see value.&lt;/p&gt;

&lt;p&gt;Two characteristics I do not list are “usable” and “does only one thing”. A nanoservice may be usable on its own, but it doesn’t have to be. Trying to make a nanoservice usable on its own may hamper its reusability and defeat the purpose. Additionally, I don’t want to get into arguments over whether a set of functions that provides get, set, and search for datastore objects is three nanoservices or one nanoservice that provides an interface to a datastore. Just use your best judgment and organize your code accordingly.&lt;/p&gt;

&lt;p&gt;A nanoservice is more complex than a software library but less so than the average microservice. In the end, you should be able to take nanoservices and group them together to form a usable application. A nanoservice is different enough from both to be its own thing and to warrant its own distinct term, rather than reusing other common terms.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Road To Discovering Nanoservices
&lt;/h2&gt;

&lt;p&gt;I came to realize the power of nanoservices while building a small application to monitor AWS &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-costusage.html" rel="noopener noreferrer"&gt;Cost and Usage reports&lt;/a&gt; which I also wanted to open source so others could use it. This service is called &lt;a href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:641494176294:applications~ApplicationCostMonitoring" rel="noopener noreferrer"&gt;ApplicationCostMonitoring&lt;/a&gt; (ACM).&lt;/p&gt;

&lt;p&gt;For a while, I’ve been fascinated by how transformative Lambda’s consumption-based billing will be.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;X: I don't get why billing per function is such a big deal.&lt;br&gt;&lt;br&gt;
Me: Many in 2007 didn't understand why compute as a utility (i.e. EC2) was a big deal. They thought it was just about cheaper servers. They missed the entire point. You're doing the same with serverless.&lt;/p&gt;

&lt;p&gt;— swardley (@swardley) &lt;a href="https://twitter.com/swardley/status/972529016333848576?ref_src=twsrc%5Etfw" rel="noopener noreferrer"&gt;March 10, 2018&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Consumption-based billing (compared to capacity-based billing) will provide much finer-grained system cost data, which allows us to assign a direct cost to any change to a running system. Years from now, I believe we’ll even be monitoring cost as a system metric just as we do current performance metrics.&lt;/p&gt;

&lt;p&gt;To explore that idea, I started building an application that would let me analyze the AWS Cost and Usage report for my accounts. The application would do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trigger by the delivery of a billing report&lt;/li&gt;
&lt;li&gt;Parse the report into individual line items&lt;/li&gt;
&lt;li&gt;And write them to a datastore or platform for analysis&lt;/li&gt;
&lt;/ul&gt;
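
&lt;p&gt;A minimal sketch of those three steps, assuming an S3-triggered Lambda. The function and environment variable names here are illustrative, not taken from the actual application:&lt;/p&gt;

```python
# Hypothetical sketch of the pipeline above. The names (parse_line_items,
# LINE_ITEM_TOPIC_ARN) are illustrative, not from ApplicationCostMonitoring.
import csv
import io
import json
import os


def parse_line_items(report_csv):
    """Parse a Cost and Usage CSV report into one dict per line item."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [dict(row) for row in reader]


def handler(event, context, s3=None, sns=None):
    # 1. Triggered by delivery of a billing report to S3.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    # 2. Parse the report into individual line items.
    for item in parse_line_items(body):
        # 3. Publish each item for a downstream writer to store for analysis.
        sns.publish(TopicArn=os.environ["LINE_ITEM_TOPIC_ARN"],
                    Message=json.dumps(item))
```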

&lt;p&gt;My initial diagram looked roughly like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-1%2520diagram.png%3Ft%3D1521502962558" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-1%2520diagram.png%3Ft%3D1521502962558" alt="ACM-1 diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The choice of analysis platform was hard to settle on, but I just wanted to stop staring at a CSV file so I could start understanding the data I had. I settled on S3 and AWS Athena as a start and, knowing I would probably move on from that, I used an SNS topic to separate the function that reads the billing report from the function that writes to S3. That meant when I wanted to move on to a new analysis platform, I wouldn’t have to refactor the function that reads the billing report. What I had now was this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-2%2520diagram.png%3Ft%3D1521502962558" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-2%2520diagram.png%3Ft%3D1521502962558" alt="ACM-2 diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking at this diagram, I realized I needed to break the application down into smaller, independent pieces. What if a user actually found Athena useful for their needs? What if they preferred to use something different than what I settled on? I didn’t want to maintain multiple applications that were all half the same code, or watch a proliferation of forks if others found this interesting. There had to be a way to make the core valuable code reusable while other code could be swapped and replaced for personal preference.&lt;/p&gt;

&lt;p&gt;This is when I made the decision to release my application as independent pieces to SAR. It honestly took me a while to make this decision, too. It felt weird.&lt;/p&gt;

&lt;h2&gt;
  
  
  Breaking My App Into Nanoservices
&lt;/h2&gt;

&lt;p&gt;I went ahead and broke ApplicationCostMonitoring down into smaller pieces. What I now had were two services: one that was triggered by S3 events and would retrieve a billing report, parse it, and publish line items; and another that would write a line item to S3 so it was searchable with Athena.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-3%2520diagram.png%3Ft%3D1521502962558" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-3%2520diagram.png%3Ft%3D1521502962558" alt="ACM-3 diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ApplicationCostMonitoring&lt;/em&gt; is not usable on its own. But it is useful, deployable, and reusable.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“So what, I could have written this in an hour?”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No, you couldn’t. The service contains knowledge learned by studying the AWS Cost and Usage report, experimenting, and testing. Much of what I learned isn’t documented by AWS; some of it I discovered purely by accident. While the code of the service may be simple, it is the product of studying and understanding a particular problem area. This is the unseen effort and labor that is often hard for us to reflect in our engineering.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ApplicationCostMonitoring&lt;/em&gt; contains domain logic that understands the idiosyncrasies of the AWS Cost and Usage report. The reports produced are cumulative within a billing period, and due to their size, &lt;em&gt;ApplicationCostMonitoring&lt;/em&gt; keeps track of which records have already been published so that processing runs in both a timely and less costly manner. The service is also aware that report line items aren’t sorted entirely chronologically, so it uses the latest date seen, instead of report position, to figure out where to pick back up on the next run. It also knows to always reprocess the items from the first of the month, because those items can and do change throughout the month.&lt;/p&gt;
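
&lt;p&gt;A toy sketch of that checkpointing idea; the field names and helper function are mine for illustration, not ApplicationCostMonitoring’s own:&lt;/p&gt;

```python
# Toy illustration of the checkpointing described above: filter on the
# latest date seen rather than report position, and always re-include
# month-start items. Field names are hypothetical, not from ACM itself.
def select_items_to_publish(items, latest_seen, month_start):
    """Return line items not yet published, plus first-of-month items,
    which can and do change throughout the billing period."""
    return [
        item for item in items
        if item["date"] > latest_seen or item["date"] == month_start
    ]
```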

&lt;p&gt;The service also lets you handle report schema changes, which occur when you change the tags tracked in the report. Schema changes can cause issues, for example with Athena, and render your data unsearchable. Additionally, changing the tags, and ultimately the schema, will cause new line item IDs to be generated for some, but not all, charges in the report. All of this is documented, and you can configure the service to run in the manner that best fits your needs.&lt;/p&gt;

&lt;p&gt;This service does not have value because of the code, but because of its ability to solve a problem and the knowledge of the problem space that is embodied in the code. Someone does not need to go through what I’ve already gone through to properly analyze their AWS Cost and Usage report; they can use my work. This is where we are trending in the building of applications and systems, reusable domain logic to solve problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does AppRepo Facilitate Nanoservices and Serverless?
&lt;/h2&gt;

&lt;p&gt;The general availability of AWS Serverless Application Repository represents a significant moment for the advancement of nanoservices. It starts to solve one of the biggest problems in this area: discoverability. How does someone know that &lt;em&gt;ApplicationCostMonitoring&lt;/em&gt; exists and has already solved a problem of theirs? This is why I published &lt;em&gt;ApplicationCostMonitoring&lt;/em&gt; to SAR.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-SAR.png%3Ft%3D1521502962558" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-SAR.png%3Ft%3D1521502962558" alt="ACM-SAR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having published &lt;em&gt;ApplicationCostMonitoring&lt;/em&gt; and multiple publisher nanoservices to SAR, others can now find them and compose their own system for analyzing their AWS spend, one that fits their needs. Even more exciting, they can use my nanoservices to build applications I never thought of or even imagined, and they’re free to focus their time and energy on the things I haven’t imagined instead of solving problems I’ve already solved. Not reinventing the wheel, if you will.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ApplicationCostMonitoring with S3 publisher and Athena&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-5%2520diagram.png%3Ft%3D1521502962558" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-5%2520diagram.png%3Ft%3D1521502962558" alt="ACM-5 diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ApplicationCostMonitoring with DynamoDB powering a web application.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-7%2520diagram.png%3Ft%3D1521502962558" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FACM-7%2520diagram.png%3Ft%3D1521502962558" alt="ACM-7 diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m not the only person seeing the power of nanoservices, either. See AWS developer James Hood’s &lt;a href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:077246666028:applications~aws-serverless-twitter-event-source" rel="noopener noreferrer"&gt;aws-serverless-twitter-event-source&lt;/a&gt; and this recent &lt;a href="https://www.twitch.tv/videos/239114858" rel="noopener noreferrer"&gt;Twitch broadcast on publishing and deploying serverless apps&lt;/a&gt;. In the coming weeks, I plan to look at using &lt;em&gt;aws-serverless-twitter-event-source&lt;/em&gt; to build a Twitter bot. This is a radically different application than his leaderboard application, and I am free to spend more of my time focusing on what differentiates my application from his than figuring out how to work with the Twitter search API.&lt;/p&gt;

&lt;p&gt;This is an exciting time; we are at the precipice of seeing a new way to build applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“One of the problems with the old object oriented world was there was no effective communication mechanism to expose what had been built. You’d often find duplication of objects and functions within a single company let alone between companies. Again, exposing as web services encourages this to change.&lt;/em&gt; &lt;strong&gt;That assumes someone has the sense to build a discovery mechanism such as a service register.&lt;/strong&gt; &lt;em&gt;” -&lt;/em&gt; &lt;a href="https://hackernoon.com/why-the-fuss-about-serverless-4370b1596da0" rel="noopener noreferrer"&gt;Simon Wardley&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Composing Serverless Applications Tomorrow
&lt;/h2&gt;

&lt;p&gt;Building applications by composing reusable nanoservices of useful domain logic is new, and we’re a long way off from this being mainstream. Nanoservices themselves are also limited by the tools we currently have available. We have tools for building serverless applications and tools for building serverless nanoservices, but we do not yet have tools for building applications from nanoservices. SAR is a start, but we still lack good dependency management, tools that can show us the relationships between nanoservices and all the components of an application, and tools to track changes to dependencies and prevent us from unknowingly deploying breaking changes. SAR also has its own limitations: it supports only a &lt;a href="https://docs.aws.amazon.com/serverlessrepo/latest/devguide/using-aws-sam.html" rel="noopener noreferrer"&gt;subset of AWS resources&lt;/a&gt;, and the repository is still small.&lt;/p&gt;

&lt;p&gt;However, there are people actively interested in composing applications with nanoservices, and working to make it happen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FSAR-Slack-Comment.png%3Ft%3D1521502962558" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.serverlessops.io%2Fhubfs%2Fblog%2FSAR-Slack-Comment.png%3Ft%3D1521502962558" alt="SAR-Slack-Comment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What we’re seeing right now is the beginning of a dramatic shift in how we will be building applications for many years to come.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This originally appeared on the &lt;a href="https://www.serverlessops.io/blog" rel="noopener noreferrer"&gt;ServerlessOps blog&lt;/a&gt;. Visit to read more of our work!&lt;/em&gt;&lt;/p&gt;


</description>
      <category>serverless</category>
      <category>nanoservices</category>
      <category>programming</category>
      <category>aws</category>
    </item>
    <item>
      <title>Serverless Ops: What do we do when the server goes away?</title>
      <dc:creator>Tom McLaughlin</dc:creator>
      <pubDate>Tue, 20 Feb 2018 11:45:00 +0000</pubDate>
      <link>https://dev.to/tmclaughbos/serverless-ops-what-do-we-do-when-the-server-goes-away-1mln</link>
      <guid>https://dev.to/tmclaughbos/serverless-ops-what-do-we-do-when-the-server-goes-away-1mln</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k2cgph1F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/twitter-server-trash.png%3Fnoresize%26t%3D1519097594657%26width%3D640%26name%3Dtwitter-server-trash.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k2cgph1F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/twitter-server-trash.png%3Fnoresize%26t%3D1519097594657%26width%3D640%26name%3Dtwitter-server-trash.png" alt="twitter-server-trash.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I originally wrote this piece for my &lt;a href="https://www.serverlessops.io/blog"&gt;company blog&lt;/a&gt;. It's more operations than developer oriented. However, the care and maintenance of code is just as important as the writing of code. If you're going to go serverless, these are the operational aspects to think about. If you're a developer, then this is an overview of the complexity of serverless operations. If you're in operations, as I know some are on here, then this is an attempt to define a future job role.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When I began to use AWS Lambda and built my first serverless service, I was excited by the speed of delivery I could achieve. With these striking benefits, I knew others would be excited, as well. The result? Less worrying about provisioning capacity for my service, and more time spent on the actual features of that service. That excitement also led me to question my future role as an operations or DevOps engineer. What would I do if there were no servers to manage? What would I do each day? How would I explain my job? How would I explain my value to my current employer or a new prospective one?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What would I do when the server goes away?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now is the right time to start discussing operations in a serverless world. If we don’t, then it will be defined for us. On one end, there are people proposing NoOps: the transfer of operational responsibilities to software engineers. On the other end are people who believe the operations team will change, but that it will always be needed.&lt;/p&gt;

&lt;p&gt;The truth of the future lies somewhere in between these two approaches. The complexity of production operations is changing shape and expanding. As traditional problems in operations become abstracted away by serverless computing, the makeup of teams and organizations operating SaaS products will change. Yet many problems still exist: proper architectural design, observability, deployment, security, and more.&lt;/p&gt;

&lt;p&gt;To say it bluntly: Teams without operational expertise are at risk.&lt;/p&gt;

&lt;p&gt;Operations never goes away; it just evolves in meaning and practice. The principles of operations and operations engineers still have tremendous value. However, we as a community need to be the ones who define our new role.&lt;/p&gt;

&lt;p&gt;With that in mind, here’s what this post will cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exploration of operational responsibilities in a world where so many layers of the stack are abstracted away&lt;/li&gt;
&lt;li&gt;A proposal for an organizational role to fill these new responsibilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post is a start at defining this new operations role, but keep in mind that it’s not the only way to define the ops role under a serverless architecture. I tend to work at SaaS startup companies, and think of operations in that context. This is the start of a conversation. People will disagree with my observations, ideas, and conclusions, and that is good. Those disagreements will help us find the right answers for different people and different organizations.&lt;/p&gt;

&lt;p&gt;Here’s how I see the future of operations in a serverless infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is “Operations”?
&lt;/h2&gt;

&lt;p&gt;Let’s start with a common definition. I find people understand operations differently, each correct in their own way. The choice of definition just signals a person’s point of view. The end result is people talk past one another about different things.&lt;/p&gt;

&lt;p&gt;At the highest level, operations is keeping the tech that runs your business going. But beneath that, the term “operations” can be used in several different ways. People tend to conflate these meanings because, for the longest time, they were tightly coupled.&lt;/p&gt;

&lt;p&gt;Operations is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A team&lt;/li&gt;
&lt;li&gt;A role&lt;/li&gt;
&lt;li&gt;A responsibility&lt;/li&gt;
&lt;li&gt;A set of tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traditionally, the responsibility of operations was held by the operations engineer on the operations team. That has changed significantly over the past several years with the introduction of DevOps. We tore down the silos, and without that rigid structure around us to keep those meanings coupled, they started to break apart from one another.&lt;/p&gt;

&lt;p&gt;Developers started to assume some of the responsibilities of operations when they were no longer able to toss code over the wall to the operations team. The classic example of this division of operations responsibility is when someone says, “developers get alerts for their services first.”&lt;/p&gt;

&lt;p&gt;On the other end of the definition, some organizations introduced cross-functional teams composed of engineers who possessed operational and software development skills. These organizations do not have an ops team, and they don’t plan on having one.&lt;/p&gt;

&lt;p&gt;In the middle, the role itself has varied heavily. In some organizations, the role has changed little more than adding automation tools like Puppet, Chef, and Ansible to the skill set. Some teams have approached automation as a means of augmenting traditional operations responsibilities, while others have seen operations engineers trend much closer to being developers, building in configuration management and other tooling.&lt;/p&gt;

&lt;p&gt;So what does serverless hold for these definitions of operations?&lt;/p&gt;

&lt;h2&gt;
  
  
  What will operations be with serverless?
&lt;/h2&gt;

&lt;p&gt;What I see for operations in a serverless world is two-fold:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The operations team goes away.&lt;/li&gt;
&lt;li&gt;The operations role reassumes primary responsibility for operations from developers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Serverless operations is not NoOps. It is not anti-DevOps and a return to silos. With the dissolution of traditional ops teams, these engineers will need new homes. Those homes will be individual development teams, and these development teams will be product pods or feature teams. This new approach will lead to the rise of cross-functional teams: a goal so many of us have talked about but have failed to achieve.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Product Pod: Cross Function in Action
&lt;/h3&gt;

&lt;p&gt;Many organizations have already adopted product pods and feature teams. These are typically cross-functional, multidisciplinary teams that exist to solve a particular problem or set of problems. These teams thrive through the collaboration between team members of different skill sets, experiences, and perspectives.&lt;/p&gt;

&lt;p&gt;What does a product pod look like in practice? In the organizations I’ve been in, they’ve typically consisted of the following roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product manager&lt;/li&gt;
&lt;li&gt;Engineering lead&lt;/li&gt;
&lt;li&gt;Backend developer (1-2)&lt;/li&gt;
&lt;li&gt;Frontend developer (1-2)&lt;/li&gt;
&lt;li&gt;Product designer and/or UX researcher (sometimes one person)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The product manager is the person on the team who is responsible for owning and representing the needs of the business. The PM’s job is to understand the business needs and turn them into actionable ideas. Similarly, you’ll have a product designer or UX researcher that will work with the PM to turn these ideas into full-fledged designs and prototypes. The tech lead, in turn, is responsible for estimating the amount of technical work involved and leading the engineering effort. From there, you have a number of backend and frontend engineers as appropriate.&lt;/p&gt;

&lt;p&gt;The end result is a small team focused on using their differing skills to solve a business problem. The cross-functional set of skills on the team leads to a stronger team and better systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our Role in the Product Pod
&lt;/h3&gt;

&lt;p&gt;As the member of a product pod, what will the role of the operations person be? Overall, they will be the person responsible for the holistic health of the systems or services provided by the team. The operations person will act as a utility player. They will be good at their primary role: ensuring the reliability of the team’s services. They will also be good enough at other roles on the team to augment, offload, or fill in, in a limited capacity, when necessary.&lt;/p&gt;

&lt;p&gt;As a member of the product pod, the operations engineer will reassume the primary responsibilities for the reliability and performance of the system as a whole. That doesn’t mean they’ll be the first one paged every time. It means that they will be the domain expert in those areas.&lt;/p&gt;

&lt;p&gt;While software developers focus on the individual components, the operations engineer will be responsible for the system as a whole. They’ll take a holistic approach to ensuring that the entire system is running reliably and functioning correctly. This in turn will free up software developers to spend more time on feature development and less time on operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ops Skillset
&lt;/h2&gt;

&lt;p&gt;So, what are the skills the person in the ops role will require in order to be effective? Let’s break this down into two sets: the skills ops currently possess that underpin the value they provide, and the skills many operations people will need to level up on or begin acquiring.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Skills We Bring
&lt;/h3&gt;

&lt;p&gt;Let’s run through the set of skills that operations expertise brings to a team. These skills have historically been the primary set for operations engineers; they are the focus of our expertise.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Systems engineering&lt;/li&gt;
&lt;li&gt;Platform/tooling understanding&lt;/li&gt;
&lt;li&gt;People skills (important for all roles)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our systems engineering skills involve designing, building, operating, scaling, and debugging systems. Our role has required us to have an overall understanding of the systems we’re responsible for. This means spotting weaknesses, potential areas of improvement, and understanding how changes may affect the overall reliability and performance of a system.&lt;/p&gt;

&lt;p&gt;All engineers develop expertise in the platforms and tooling they regularly use. A frontend engineer develops expertise in JavaScript and frameworks. (Or invents their own...) A backend engineer develops expertise in building APIs and communicating between distributed backend systems. As operations engineers, we develop expertise in tooling and platforms. In an AWS environment, we hone expertise in tools like CloudFormation and Terraform, which have a steep initial learning curve. We also have expertise in aspects of AWS -- like understanding the idiosyncrasies of the different services: for example, Lambda cold starts and how its CPU scaling is tied to function memory allocation.&lt;/p&gt;

&lt;p&gt;Finally, there are our people skills. Let’s stop calling them soft skills while we’re here. People are hard: computers are fairly deterministic in their behavior, but people are not. Much of the work of operations engineers is taking requests from people and figuring out the actual problem a person is trying to solve. Good operations teams are service teams that understand the needs of the people they serve. These skills also serve to enhance the relationship the technical team will have with a product manager.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Skills We Need
&lt;/h3&gt;

&lt;p&gt;Here are the skills we’ll need to level up on or, in some cases, still acquire. The evolution of the operations role in different directions has resulted in an uneven distribution of these skills.&lt;/p&gt;

&lt;h4&gt;
  
  
  Coding
&lt;/h4&gt;

&lt;p&gt;There is absolutely no doubt about it: the operations person is going to need to learn to code. On top of that, they’re going to need to learn the chosen language(s) of their team. Whether it’s Python (yea!), JavaScript (well, okay.), Java (ummm...), or Erlang (really???), the operations person will need to be proficient in the language.&lt;/p&gt;

&lt;p&gt;Remember, the ops person is a utility player. They’re not there to be a developer. Being proficient in the language means being able to read it, code light features, fix light bugs, and do some code review. If you’re quizzing the person on fizz-buzz or asking them to do something with a b-tree, you’re probably doing it wrong. However, if you’re asking them to code a basic function, or to read a block of code, explain it, and point out places where the system could fail, congrats! You’re doing it right.&lt;/p&gt;

&lt;p&gt;On the positive side for the code-phobic, nanoservices are much simpler than monoliths or even microservices. Rather than snaking your way through a large code base, you should be able to isolate your issue in many cases down to a single function, and therefore just a small subset of code to work with. This should make understanding the code easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ops Role Responsibilities
&lt;/h2&gt;

&lt;p&gt;In a serverless environment, once an operations engineer has joined a product pod, then what do they do each day? What will their job description be?&lt;/p&gt;

&lt;p&gt;We need to be able to articulate a set of responsibilities to our organization in order to explain the role, because without a job description, you have no job. And if you can’t explain it, then someone else will… and that possibly means your job will be explained away because of “NoOps”.&lt;/p&gt;

&lt;p&gt;Much of the ops role will be reclaiming past responsibilities that developers started to own. Again, the ops person won’t be the sole person responsible for these areas, but they will be the primary owner and tasked with enabling their fellow teammates to meet these challenges where appropriate.&lt;/p&gt;

&lt;h3&gt;
  
  
  System Standards and Best Practices
&lt;/h3&gt;

&lt;p&gt;Operations engineers will take responsibility for establishing the system standards and best practices for building reliable and performant serverless systems. We should be defining the correct patterns to use (and when to use them), and ensuring that these are adhered to.&lt;/p&gt;

&lt;p&gt;Take a simple task like passing an event between parts of a serverless system. The average person will do what they know and stick to what they did last time. But even such a simple task can be solved in a variety of ways on AWS. What are the pros and cons of SNS over SQS? What about a Lambda fanout function? Perhaps the event should be written to persistent storage like S3 or DynamoDB, with the events from those actions used to trigger more work?&lt;/p&gt;

&lt;p&gt;What’s the correct choice? That’s going to depend on the problem you’re trying to solve and the constraints. (For the record, I typically hate Lambda fanout functions because they require more error handling in the fanout function to not lose events rather than just letting the system’s design handle resiliency.)&lt;/p&gt;
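&lt;p&gt;To make that last point concrete, here is a minimal, hypothetical Python sketch of the extra error-handling burden a fanout function carries: the function itself must track and surface failed deliveries, where SNS or SQS would handle retries for you. The &lt;code&gt;publish&lt;/code&gt; parameter stands in for an SDK call such as an SNS publish.&lt;/p&gt;

```python
def fanout(event, targets, publish):
    """Deliver one event to many downstream targets.

    `publish` is a stand-in for an SDK call such as an SNS publish or
    SQS send-message. A fanout function has to track delivery
    failures itself, or events are silently lost; this is the error
    handling that services like SNS give you for free.
    """
    failed = []
    for target in targets:
        try:
            publish(target, event)
        except Exception:
            failed.append(target)
    if failed:
        # Raising lets the invoking platform retry; a fuller version
        # would retry only the failed targets or park them on a DLQ.
        raise RuntimeError("delivery failed for: " + ", ".join(failed))
    return len(targets)
```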

&lt;h3&gt;
  
  
  Build, Deploy, and Management Tooling
&lt;/h3&gt;

&lt;p&gt;What is one of the major obstacles we all face as engineers? Understanding our tooling. Our tooling is often significantly complex, but we learn as much as we really need to know to get the job done.&lt;/p&gt;

&lt;p&gt;Tooling like Serverless Framework and CloudFormation can be complex. I find many developers are not exactly fans of them. Your dev teammates WILL hardcode stuff and thwart a lot of the problem solving CFN has built in. CloudFormation can be very flexible if you know how to properly use it. Serverless Framework improves greatly on CloudFormation with its plugin capabilities. Writing Serverless Framework plugins to solve organizational issues will replace writing Puppet facts and Chef &lt;code&gt;knife&lt;/code&gt; plugins.&lt;/p&gt;

&lt;p&gt;There will also be testing and deployment work to be done. Start with your standard testing frameworks for Python, Node, or Go. But wait, there’s more! What about load testing? If your service’s compute layer can rapidly scale, what does that mean for your data layer? Can your DynamoDB table keep up with the Lambdas writing to it, and what happens to the writes that fail due to write throttling? Can your SQS consumers keep up with the queue? Implementing tools like &lt;em&gt;Artillery&lt;/em&gt; to find scaling issues, issues the ops person will then take on solving, will become a part of the role.&lt;/p&gt;
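&lt;p&gt;As a sketch of that throttling concern: when DynamoDB throttles a batch write, the unprocessed items come back in the response and must be retried rather than dropped. This hypothetical Python helper, with &lt;code&gt;batch_write&lt;/code&gt; standing in for the SDK’s batch write call, shows the backoff-and-retry shape of the work.&lt;/p&gt;

```python
import time

def write_with_backoff(items, batch_write, max_attempts=5, base_delay=0.05):
    """Retry throttled DynamoDB writes instead of dropping them.

    `batch_write` is a stand-in for DynamoDB's batch write API: it
    takes a list of items and returns whatever the service left
    unprocessed (its UnprocessedItems response field) when the table
    is write-throttled.
    """
    pending = list(items)
    for attempt in range(max_attempts):
        pending = batch_write(pending)
        if not pending:
            return []
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    # Anything still pending must be surfaced, not silently dropped.
    return pending
```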

&lt;p&gt;Once all tests pass, it’s time to roll out new code. Eventually, organizations will want blue/green deployments, feature flags, and more to ensure the graceful rollout of code to production users. Someone will need to build that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling and Performance Tuning Systems
&lt;/h3&gt;

&lt;p&gt;I grouped these two together because they’re both part of modifying an already running system, and because they both boil down to meeting newly established system metric objectives. Scaling a system to service an order of magnitude more events is the same class of work as trying to reduce response times by a certain number of milliseconds. There is an established baseline, a target objective, and a system to be modified to meet the objective.&lt;/p&gt;

&lt;p&gt;The work of scaling systems narrows with serverless. The individual system components scale for you on their own. You aren’t responsible for scaling your messaging, queuing, storage, or database layers. So what’s left to tune and how do you scale?&lt;/p&gt;

&lt;p&gt;Lambdas can have their amount of memory increased, which also increases the amount of CPU available to a function. Instead of creating new compute instances and rolling them in while you roll out the old ones, changing memory is a simple click or stack deployment with Serverless Framework or CloudFormation. Additionally, you’ll also be looking at replacing components of a system with different services. If SNS is too slow, then perhaps it‘s time to evaluate Kinesis? Or maybe the system as a whole and its event pathways need rethinking?&lt;/p&gt;
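&lt;p&gt;Because CPU scales with memory, a memory change is a performance lever and a cost lever at once. Here is a rough, hypothetical what-if calculator; it assumes a purely CPU-bound function whose duration shrinks in proportion to the memory increase, which is a simplification, so real functions should be measured rather than modeled.&lt;/p&gt;

```python
def memory_what_if(memory_mb, duration_ms, new_memory_mb,
                   price_per_gb_s=0.0000166667):
    """Rough what-if for a Lambda memory change.

    Assumes a purely CPU-bound function whose duration shrinks in
    proportion to the memory (and therefore CPU) increase. The price
    constant is the published per-GB-second rate at the time of
    writing; check current pricing.
    """
    def cost(mem_mb, dur_ms):
        return (mem_mb / 1024.0) * (dur_ms / 1000.0) * price_per_gb_s

    new_duration_ms = duration_ms * (float(memory_mb) / new_memory_mb)
    return {
        "new_duration_ms": new_duration_ms,
        "old_cost": cost(memory_mb, duration_ms),
        "new_cost": cost(new_memory_mb, new_duration_ms),
    }
```

&lt;p&gt;Under that assumption, doubling memory roughly halves duration, so the per-invocation cost can be a wash while latency improves.&lt;/p&gt;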

&lt;p&gt;There will be code changes to evaluate, from fixing inefficient functions to looking at replacing individual functions with ones written in a different language. At a certain point in scaling Lambda memory, an additional CPU core becomes available. Perhaps a function should be rewritten in a language that can utilize that extra core efficiently? (Isn’t the ability to rewrite a single nanoservice function pretty cool compared to rewriting an entire microservice!?!?)&lt;/p&gt;

&lt;p&gt;But all this ability to scale easily creates new potential issues. Can the downstream services in your system handle the new scale or performance? Someone will need to look at the system as a whole and understand how upstream changes will play out downstream.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reliability of Systems
&lt;/h3&gt;

&lt;p&gt;We’ve been handling the monitoring, metrics, logging, and observability of our systems, and will continue to do so. The issues we’ll be looking for will change. Rather than host CPU and memory utilization, we’ll be looking at function execution duration, cold starts, and function out-of-memory exceptions. The ops person will be responsible for selecting and instrumenting the proper tooling for the team, and solving the issues that surface. If a function has started to throw out-of-memory exceptions, the ops person should be responsible for resolving the issue by increasing function memory (the easy but more expensive way) or by refactoring the given function.&lt;/p&gt;
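&lt;p&gt;As one example of what that instrumentation can look like: Lambda writes a REPORT line to CloudWatch Logs after each invocation, including the configured memory size and the maximum memory used. A small, hypothetical scanner over those lines can flag functions creeping toward an out-of-memory failure before it happens.&lt;/p&gt;

```python
import re

# Matches the memory fields of Lambda's per-invocation REPORT log line.
REPORT = re.compile(
    r"Memory Size: (?P<size>\d+) MB\s+Max Memory Used: (?P<used>\d+) MB")

def near_memory_limit(log_lines, threshold=0.9):
    """Flag invocations running close to their memory limit.

    Crossing the threshold is the cue to raise the function's memory
    setting or to refactor the function.
    """
    flagged = []
    for line in log_lines:
        match = REPORT.search(line)
        if match and int(match.group("used")) >= threshold * int(match.group("size")):
            flagged.append(line)
    return flagged
```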

&lt;p&gt;Additionally, how many of the metrics and logging solutions we use were designed for microservices, or even monoliths, rather than nanoservices? Nanoservices are functions, and we’re told the best functions do a single thing. So why is your function responsible for doing its named job... and also writing metrics, writing logs, and writing errors? Sure, there’s CloudWatch, but its delay isn’t short enough for many teams. Maybe we need to rethink how we deliver those to other platforms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coding and Code Review
&lt;/h3&gt;

&lt;p&gt;There’s no getting around it: operations engineers will need to code and be involved in other areas of the coding process. What this looks like in practice will vary by team and need, of course, but there should be some established minimums.&lt;/p&gt;

&lt;p&gt;The team’s operations engineer should be proficient in the coding language(s) of the team. They should be able to fix light bugs. While investigating error logs as part of their reliability role, instead of just filing a ticket for a software engineer, an operations engineer should be able to look at the problematic area of code and fix the low-hanging fruit. If they can’t, then they should be able to write up a detailed ticket explaining the problem and where they think the code goes wrong.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“This should be wrapped in a try/except block to handle API failures. Let me add that.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ideally, we should also eventually level up to be able to code simpler tasks and features. I add this because it improves our skills and keeps them sharp. It also has practical value when a team needs to ship. Work that could be punted down to a more junior engineer (or even an intern) can be handed to the operations engineer to augment the output of developers.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“I can put up a REST endpoint that will ingest webhook data, parse it, perform a few web requests to enhance the data, and publish it to another system.” &lt;/em&gt;&lt;em&gt;(&lt;/em&gt;&lt;a href="https://twitter.com/alicegoldfuss/status/932079153582452736"&gt;Inspiration from Alice Goldfuss&lt;/a&gt;&lt;em&gt;.)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Finally, there’s code review. We should be bringing our unique knowledge and perspective of how the platforms we use work, and also think about the code as a part of the greater system.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“When you add a record to Route53, the action is queued. You need to use the callback and check that the operation was successful. Here’s what you need to do.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“This function should have retries because we may fail at the end here, but you can’t safely retry the function because we’ll end up with duplicate DynamoDB records due to this spot earlier.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One area of friction will be developers incorrectly assessing the value and skills of an operations engineer solely on their coding abilities. The operations engineer is only a part-time developer, and they should be treated as such. If a team requires a senior developer for senior developer work, then give that work to a senior developer. Developers on the team will need to recognize the value of their operations engineer and the expertise they bring. If not, the operations engineer will be set up to fail, and the team as a whole will fail to achieve what other, more cohesive teams can.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;The good news on the topic of security: ops can stop focusing its time and effort on the hosts the code runs on. That’s the responsibility of AWS now. Think about the recent Meltdown and Spectre attacks as an example, and the time people spent rolling out, rolling back, and rolling out patches again. Instead of patching systems, an ops person would continue ensuring that the performance and reliability of services remain within acceptable parameters while AWS performs that work. That means monitoring that the rate of Lambda invocation failures and function execution times remain acceptable. This is, of course, work already being done as part of standard reliability responsibilities, so it effectively frees up time.&lt;/p&gt;

&lt;p&gt;However, that time is going to be filled up quickly. The time saved will be spent on tasks that we previously didn’t give enough time to. Have you audited your S3 buckets? Do you have any that are publicly accessible that shouldn’t be? Do your Lambda functions have IAM roles with least-privilege access? We can spend more time securing our non-EC2 resources.&lt;/p&gt;
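&lt;p&gt;The bucket audit, for instance, largely reduces to checking each bucket’s ACL for grants to the global AllUsers group. A minimal, hypothetical Python check over the grants list (shaped like the one the S3 get-bucket-ACL API returns) might look like this:&lt;/p&gt;

```python
# The grantee URI S3 uses for "anyone on the internet".
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(acl_grants):
    """Return the grants on a bucket that are open to everyone.

    `acl_grants` is a Grants list shaped like the S3 get-bucket-ACL
    response; a grant to the AllUsers group URI means anyone on the
    internet holds the granted permission.
    """
    return [
        grant for grant in acl_grants
        if grant.get("Grantee", {}).get("URI") == ALL_USERS
    ]
```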

&lt;p&gt;With this additional time, ops can also travel up the stack and into AppSec. The lowest hanging fruit: are application dependencies up to date &lt;em&gt;and&lt;/em&gt; free of vulnerabilities? Currently, ops spends time patching hosts, but often not enough time ensuring application dependencies are up to date. Ops will work on tasks like secrets management, and ensuring that secrets are properly stored and rotated.&lt;/p&gt;

&lt;p&gt;These tasks are merely examples; that’s not to say that they aren’t currently being handled. They are just handled at varying levels across many organizations.&lt;/p&gt;

&lt;h3&gt;
  
  
  System Cost Management
&lt;/h3&gt;

&lt;p&gt;Serverless brings some new and interesting cost challenges around the pay-per-use model.&lt;/p&gt;

&lt;p&gt;First, there’s the technical challenge of preventing runaway Lambda functions. Recursive functions are a thing, and a valid design choice for particular workloads. But anytime you see one, you should ensure that the loop will exit or you may end up with some surprises in your bill. (Ouch! This recently happened to me... )&lt;/p&gt;
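&lt;p&gt;One defensive pattern, sketched hypothetically in Python, is to carry a depth counter in the event itself so a broken exit condition fails loudly instead of invoking forever. Here &lt;code&gt;invoke_self&lt;/code&gt; and &lt;code&gt;do_work&lt;/code&gt; stand in for the real Lambda self-invoke call and the unit of work.&lt;/p&gt;

```python
def handler(event, do_work, invoke_self, max_depth=10):
    """Guard a recursive Lambda so a bad exit condition can't run up
    the bill forever.

    The function carries its own depth counter in the event;
    `invoke_self` is a stand-in for invoking this same function via
    the Lambda API, and `do_work` returns True while work remains.
    """
    depth = event.get("depth", 0)
    if depth >= max_depth:
        # Alert and stop rather than recurse (and pay) indefinitely.
        raise RuntimeError("recursion limit hit; check the exit condition")
    if do_work(event):
        invoke_self(dict(event, depth=depth + 1))
```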

&lt;p&gt;The issue isn’t just code, either. A malformed message in an SQS queue can lead to a queue that never drains, and a function that continuously executes. Knowing your systems means knowing how and when to alert on unusual function activity. If you’re processing a queue in periodic message dumps, then your consumers should exhibit a certain behavior -- like not executing for greater than a certain period of time. If they’re not following that pattern, then maybe there’s something to investigate?&lt;/p&gt;
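&lt;p&gt;A common guard for the malformed-message case keys off SQS’s ApproximateReceiveCount attribute: a message seen too many times gets parked on a dead-letter queue instead of being retried forever. An SQS redrive policy does this natively; this hypothetical sketch just makes the mechanism visible.&lt;/p&gt;

```python
def triage(message, process, send_to_dlq, max_receives=5):
    """Keep one malformed message from wedging a queue.

    `message` is shaped like an SQS message with its Attributes map;
    `process` handles the body and `send_to_dlq` stands in for
    sending the message to a dead-letter queue.
    """
    receives = int(message["Attributes"]["ApproximateReceiveCount"])
    if receives > max_receives:
        send_to_dlq(message)
        return "dead-lettered"
    process(message["Body"])
    return "processed"
```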

&lt;p&gt;There’s also the difficulty of determining when a system becomes more cost-effective to run in a container or even on an EC2 instance. The pay-per-use model of Lambda comes at a per-invocation premium.&lt;/p&gt;

&lt;p&gt;Finally, serverless gives you a unique ability to measure past the system level, and down to the feature level. I like to call this “application dollar monitoring”. If you track cost per function invocation and overall system cost over time, and then overlay feature deploy events, you have the ability to understand system cost changes at a much finer level. Down the road, an organization may even be able to tie features to revenue and make smarter decisions about feature value and feature work. Admittedly, right now you’re talking about small dollar amounts, and the cost savings over EC2 may already be enough to not care at the feature level. However, in the future, as we become accustomed to the low costs of serverless, we will begin to care more.&lt;/p&gt;
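&lt;p&gt;The building block of application dollar monitoring is the per-invocation cost. A rough, hypothetical calculation, using the published us-east-1 per-request and per-GB-second rates at the time of writing (check current pricing) and Lambda’s rounding of billed duration up to the next 100 ms:&lt;/p&gt;

```python
import math

def invocation_cost(memory_mb, duration_ms,
                    price_per_gb_s=0.0000166667,
                    price_per_request=0.0000002):
    """Approximate cost of a single Lambda invocation.

    Billed duration rounds up to the next 100 ms. Summed per function
    and overlaid with deploy events, this is the raw material for
    tracking cost at the feature level.
    """
    billed_seconds = math.ceil(duration_ms / 100.0) * 0.1
    compute = (memory_mb / 1024.0) * billed_seconds * price_per_gb_s
    return compute + price_per_request
```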

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There’s a lot in this post that’s assumed, unexplained, and undefended. Over the coming months, we’ll be expanding on the ideas and themes here. This piece has hopefully generated questions from people that will require further answering. Finally, what the ideas in this piece look like in practice and how to get there requires practical explanations.&lt;/p&gt;

&lt;p&gt;Have thoughts on what you’ve just read? &lt;a href="https://twitter.com/tmclaughbos"&gt;Find me on Twitter&lt;/a&gt; and let’s have a conversation. We’re also happy to publish your own ideas about the future of operations with serverless, and your ideas don’t have to agree with ours! Just drop us a line at &lt;a href="mailto:hello@serverlessops.io"&gt;hello@serverlessops.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Keep coming back to learn more!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A thank you goes out to several people who contributed feedback while I was writing this. They were&lt;/em&gt; &lt;a href="https://twitter.com/brianhatfield"&gt;Brian Hatfield&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://twitter.com/hcoyote"&gt;Travis Campbell&lt;/a&gt; &lt;em&gt;and others in #atxdevops on&lt;/em&gt; &lt;a href="https://twitter.com/hangops"&gt;HangOps&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://twitter.com/ryan_sb"&gt;Ryan Scott Brown&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://twitter.com/ikB3N"&gt;Ben Bridts&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://twitter.com/robpark"&gt;Rob Park&lt;/a&gt;&lt;em&gt;, and&lt;/em&gt; &lt;a href="https://twitter.com/gwennasaurus"&gt;Gwen Betts&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This originally appeared on the &lt;a href="https://www.serverlessops.io/blog"&gt;ServerlessOps blog&lt;/a&gt;. Visit to read more of our work!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uz8tgiS8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://track.hubspot.com/__ptq.gif%3Fa%3D277116%26k%3D14%26r%3Dhttps%253A%252F%252Fwww.serverlessops.io%252Fblog%252Fserverless-ops-what-do-we-do-when-the-server-goes-away%26bu%3Dhttps%25253A%25252F%25252Fwww.serverlessops.io%25252Fblog%26bvt%3Drss" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uz8tgiS8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://track.hubspot.com/__ptq.gif%3Fa%3D277116%26k%3D14%26r%3Dhttps%253A%252F%252Fwww.serverlessops.io%252Fblog%252Fserverless-ops-what-do-we-do-when-the-server-goes-away%26bu%3Dhttps%25253A%25252F%25252Fwww.serverlessops.io%25252Fblog%26bvt%3Drss" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>operations</category>
      <category>community</category>
      <category>devops</category>
    </item>
    <item>
      <title>Static Websites on AWS S3 with Serverless Framework</title>
      <dc:creator>Tom McLaughlin</dc:creator>
      <pubDate>Tue, 13 Feb 2018 12:30:00 +0000</pubDate>
      <link>https://dev.to/tmclaughbos/static-websites-on-aws-s3-with-serverless-framework-21b7</link>
      <guid>https://dev.to/tmclaughbos/static-websites-on-aws-s3-with-serverless-framework-21b7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AtJrb0JO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/WWW%2BS3.png%3Fnoresize" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AtJrb0JO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/WWW%2BS3.png%3Fnoresize" alt="WWW+S3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While looking for a project to work on, I noticed there’s a very simple serverless pattern that I don’t see written about much: website hosting. We often immediately think of Lambda with AWS serverless, but it is more than just functions-as-a-service (FaaS). A simple use case like hosting a static website can be done without need for EC2, and it can be managed using &lt;a href="https://serverless.com/framework/"&gt;Serverless Framework&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s walk through how to deploy a static website using Serverless Framework. Even dynamic websites often have static assets, and the information below should be useful to anyone building websites on AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using S3 to Host a Static Website
&lt;/h2&gt;

&lt;p&gt;If you’re building an internal company website, website prototype, or simply one you don’t intend to see much traffic, S3 may be the right choice for you. Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 provides a simple and convenient method for hosting a static website.&lt;/li&gt;
&lt;li&gt;S3 provides an easy and &lt;a href="https://medium.com/@bezdelev/cost-breakdown-for-a-static-website-on-aws-after-18-months-in-production-d97a932d2d25"&gt;cheaper solution&lt;/a&gt; for smaller scale sites.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;S3 hosting has its limitations though. For high traffic sites with distributed viewership, CloudFront is better suited for static web hosting. In addition, if you require SSL and your own domain name, then you will need CloudFront. However, based on your needs and requirements, you may not need the added complexity and cost. We’ll address adding CloudFront in a later post.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We’re Building
&lt;/h2&gt;

&lt;p&gt;We’ll deploy a website consisting of a static HTML page with sound and graphics. If you’ve been on the internet for a long time, you will enjoy the site to be deployed. If this site is new to you, then welcome to &lt;a href="https://serverless-zombo.com/"&gt;Zombo.Com&lt;/a&gt;! (The original site was written in Flash, and this is based off of the &lt;a href="https://github.com/bertrandom/HTML5-Zombocom"&gt;HTML5 port&lt;/a&gt; of the original.) It’s straightforward, fun, and proof that you can do anything with serverless.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It’s Built
&lt;/h2&gt;

&lt;p&gt;Now, let’s walk through how to build this project! The project will need the following resources created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 Bucket&lt;/li&gt;
&lt;li&gt;S3 Bucket Policy&lt;/li&gt;
&lt;li&gt;Route53 Resource Record&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’ll assume that a Route53 Zone resource already exists in the environment. Depending on your needs, you may create a zone in your own project. Since zones are typically shared resources, I chose to create the zone separate from the serverless-zombocom project.&lt;/p&gt;

&lt;p&gt;The project looks roughly like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SBsG5COv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/Serverless%2520Zombo%2520S3%2520Diagram.png%3Ft%3D1518484684999%26width%3D686%26height%3D313%26name%3DServerless%2520Zombo%2520S3%2520Diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SBsG5COv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/Serverless%2520Zombo%2520S3%2520Diagram.png%3Ft%3D1518484684999%26width%3D686%26height%3D313%26name%3DServerless%2520Zombo%2520S3%2520Diagram.png" alt="Serverless Zombo S3 Diagram.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless Framework Setup
&lt;/h3&gt;

&lt;p&gt;Our project is deployed and managed with Serverless Framework. One of the advantages it has over other serverless management frameworks is its robust plugin ecosystem (which we’ll get into soon). We’ll need &lt;a href="https://nodejs.org/en/download/"&gt;Node.JS and NPM&lt;/a&gt; installed in order to use it. Once those are installed, we can install Serverless Framework by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g serverless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Serverless Framework installed, we can go ahead and create our project. The &lt;code&gt;serverless&lt;/code&gt; command can create projects of different types using templates. However, since we’re not using Lambda, which template we choose doesn’t really matter. The &lt;em&gt;hello-world&lt;/em&gt; template will be good enough.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless create -t hello-world -n serverless-zombocom -p serverless-zombo.com
cd serverless-zombocom
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What will be created is a &lt;em&gt;serverless.yml&lt;/em&gt; template file, a basic handler script (which we’ll discard), and a .gitignore.&lt;/p&gt;

&lt;p&gt;Serverless Framework doesn’t handle uploading files to an S3 bucket natively, but that’s where its plugin system really shines. You’re not limited to Serverless Framework’s existing functionality. Somebody else thought having the ability to upload files to an S3 bucket would be useful, as well. We’ll use the &lt;a href="https://github.com/k1LoW/serverless-s3-sync"&gt;serverless-s3-sync&lt;/a&gt; plugin; this lets us define a local directory of files to upload, along with a bucket and an optional prefix to upload them to.&lt;/p&gt;

&lt;p&gt;Install the plugin by running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless plugin install -n serverless-s3-sync
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that our project is set up with Serverless Framework, let’s move on to configuring the project’s resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Layout
&lt;/h3&gt;

&lt;p&gt;Let’s take a brief look at our project’s layout now. We’ve removed the &lt;em&gt;handler.js&lt;/em&gt; file since we won’t be needing it. You can pretty much ignore &lt;em&gt;package.json&lt;/em&gt;, &lt;em&gt;package-lock.json&lt;/em&gt;, and &lt;em&gt;node_modules&lt;/em&gt;; they’re a result of NPM package management. The &lt;em&gt;README.md&lt;/em&gt; is where we can provide people some quick documentation about this project.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;static/&lt;/em&gt; directory has been added, and the site’s static assets have been placed there. Any files in the &lt;em&gt;static/&lt;/em&gt; directory are what will be uploaded to S3. In that directory, we have all the glory that is zombo.com.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;serverless.yml&lt;/em&gt; file is where we’ll configure the service. We’ll come back to this file since it is what drives our project.&lt;/p&gt;

&lt;p&gt;Here’s what the directory structure looks like -- the contents of &lt;em&gt;node_modules/&lt;/em&gt; have been snipped for clarity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless-zombo.com
├── README.md
├── node_modules
│ └── &amp;lt;SNIP&amp;gt;
├── package-lock.json
├── package.json
├── serverless.yml
└── static
├── favicon.ico
├── index.html
├── zombo.mp3
├── zombo.ogg
└── zombocom.png
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Project Resources
&lt;/h3&gt;

&lt;p&gt;We’ll now start configuring our service using the &lt;em&gt;serverless.yml&lt;/em&gt;. I’ve gone ahead and cleaned up the file a little for us to start. This is what &lt;em&gt;serverless.yml&lt;/em&gt; looks like now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-zombo.com"&gt;serverless.yml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service: serverless-zombocom
plugins:
  - serverless-s3-sync

custom:

provider:
  name: aws
  runtime: nodejs6.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have no functions so I’ve removed that section. I’ve moved the plugins section towards the top. I’ve also gone ahead and set the provider to AWS. Note: I’ve left the runtime unchanged because it needs to be defined even though it’s not relevant to our project.&lt;/p&gt;

&lt;p&gt;Notice the empty &lt;em&gt;custom&lt;/em&gt; section. The custom section in &lt;em&gt;serverless.yml&lt;/em&gt; lets you define configuration and variables that will be reused elsewhere in your template. Later on, we’ll add things like the S3 file sync configuration and the DNS record information for the site to that section.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS S3 Bucket And Bucket Policy
&lt;/h4&gt;

&lt;p&gt;We’ll start by adding our S3 bucket where the static files will reside.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-zombo.com/blob/aws-s3-hosted/serverless.yml#L24-L34"&gt;serverless.yml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
  Resources:
    StaticSite:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: PublicRead
        BucketName: ${self:custom.siteName}
        WebsiteConfiguration:
          IndexDocument: index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For S3 site hosting, the &lt;em&gt;AccessControl&lt;/em&gt; property must be set to &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-s3.html#scenario-s3-bucket-website"&gt;PublicRead&lt;/a&gt;. The &lt;em&gt;WebsiteConfiguration&lt;/em&gt; is where we define the index document for the site; the value refers to the &lt;em&gt;index.html&lt;/em&gt; file in the &lt;em&gt;static/&lt;/em&gt; directory. Typically you don’t need to name a bucket because the CloudFormation-generated bucket name will do, but that is not the case here. We’re going to create a Route53 alias record, and when doing that, the S3 bucket name and DNS record name need to match. We’ll add a &lt;a href="https://serverless.com/framework/docs/providers/aws/guide/variables/"&gt;serverless variable&lt;/a&gt; as the BucketName and a key to the &lt;em&gt;custom&lt;/em&gt; section to define it, like so:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-zombo.com/blob/aws-s3-hosted/serverless.yml#L9-L10"&gt;serverless.yml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;custom:
  siteName: serverless-zombo.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To grant access to the static content, we attach a permissive bucket policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-zombo.com/blob/aws-s3-hosted/serverless.yml#L36-L57"&gt;serverless.yml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;StaticSiteS3BucketPolicy:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket:
          Ref: StaticSite
        PolicyDocument:
          Statement:
            - Sid: PublicReadGetObject
              Effect: Allow
              Principal: "*"
              Action:
              - s3:GetObject
              Resource:
                Fn::Join: [
                  "", [
                    "arn:aws:s3:::",
                    {
                      "Ref": "StaticSite"
                    },
                    "/*"
                  ]
                ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The bucket policy grants the &lt;em&gt;s3:GetObject&lt;/em&gt; action to all principals for any object in the bucket. Notice that under &lt;em&gt;Bucket&lt;/em&gt; and &lt;em&gt;Resource&lt;/em&gt;, we use the CloudFormation &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html"&gt;Ref&lt;/a&gt; intrinsic function to reference the bucket that is part of the stack rather than repeating its name.&lt;/p&gt;
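&lt;p&gt;To see what the &lt;em&gt;Fn::Join&lt;/em&gt; evaluates to, here’s a quick Python sketch of the same join. The bucket name is assumed to be the one we configured, standing in for what the Ref resolves to.&lt;/p&gt;

```python
# Mimic CloudFormation's Fn::Join with an empty delimiter, as used for the
# bucket policy's Resource. "serverless-zombo.com" stands in for the value
# that Ref: StaticSite resolves to.
bucket_name = "serverless-zombo.com"
resource_arn = "".join(["arn:aws:s3:::", bucket_name, "/*"])
print(resource_arn)  # arn:aws:s3:::serverless-zombo.com/*
```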

&lt;p&gt;With the S3 bucket resources added, we’ll add the S3 bucket syncing information. This is done in the &lt;em&gt;custom&lt;/em&gt; section mentioned earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-zombo.com/blob/aws-s3-hosted/serverless.yml#L13-L15"&gt;serverless.yml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;custom:
  siteName: serverless-zombo.com
  s3Sync:
    - bucketName: ${self:custom.siteName}
      localDir: static
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;s3Sync&lt;/em&gt; key takes a list of bucket and directory pairs to sync. The &lt;em&gt;bucketName&lt;/em&gt; key takes the name of the S3 bucket as set by the BucketName property we added earlier. The &lt;em&gt;localDir&lt;/em&gt; key is the relative path to the directory to be synced to S3, which is &lt;em&gt;static/&lt;/em&gt;. Additional configuration options can be found in the plugin’s &lt;a href="https://github.com/k1LoW/serverless-s3-sync"&gt;documentation&lt;/a&gt;. Now, when the service is deployed, the contents of &lt;em&gt;static/&lt;/em&gt; will be uploaded to S3.&lt;/p&gt;
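&lt;p&gt;Conceptually, the sync maps each file under &lt;em&gt;localDir&lt;/em&gt; to an S3 object key relative to that directory. A small Python sketch of that mapping (the actual upload is left to the plugin):&lt;/p&gt;

```python
from pathlib import Path

def s3_keys_for_dir(local_dir):
    """Return the S3 object keys a sync of local_dir would create,
    relative to the directory root (e.g. static/index.html -> index.html)."""
    root = Path(local_dir)
    return sorted(
        p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file()
    )
```

For the &lt;em&gt;static/&lt;/em&gt; directory above, this would yield keys like &lt;em&gt;favicon.ico&lt;/em&gt; and &lt;em&gt;index.html&lt;/em&gt;.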

&lt;p&gt;With the S3 bucket configured, it’s time to setup DNS so people can access the site with an easy to remember domain name.&lt;/p&gt;

&lt;h4&gt;
  
  
  Route53 Record
&lt;/h4&gt;

&lt;p&gt;We’ll now create a Route53 record that will point &lt;em&gt;serverless-zombo.com&lt;/em&gt; to the S3 bucket. We won’t be using a CNAME, though! Route53 does not allow CNAMEs at the apex of a domain, but we can create an A record that is an alias to an AWS resource, like an S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-zombo.com/blob/aws-s3-hosted/serverless.yml#L59-L68"&gt;serverless.yml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DnsRecord:
      Type: "AWS::Route53::RecordSet"
      Properties:
        AliasTarget:
          DNSName: ${self:custom.aliasDNSName}
          HostedZoneId: ${self:custom.aliasHostedZoneId}
        HostedZoneName: ${self:custom.siteName}.
        Name:
          Ref: StaticSite
        Type: 'A'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The record requires an alias target, which consists of a DNS name and a hosted zone ID; we’ll come back to those below. Notice the name property is a reference to the StaticSite S3 bucket resource. The name of the S3 bucket and the DNS record need to match, and by using a resource reference instead of a Serverless variable we ensure CloudFormation creates the S3 bucket before it attempts to update DNS.&lt;/p&gt;

&lt;p&gt;With the Route53 record configured, the &lt;em&gt;aliasDNSName&lt;/em&gt;, &lt;em&gt;aliasHostedZoneId&lt;/em&gt;, and &lt;em&gt;siteName&lt;/em&gt; keys need to be set in the custom section of the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-zombo.com/blob/aws-s3-hosted/serverless.yml#L9-L15"&gt;serverless.yml&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;custom:
  siteName: serverless-zombo.com
  aliasHostedZoneId: Z3AQBSTGFYJSTF # us-east-1
  aliasDNSName: s3-website-us-east-1.amazonaws.com
  s3Sync:
    - bucketName: ${self:custom.siteName}
      localDir: static
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;siteName&lt;/em&gt; value is being used as the name of the Route53 domain the record should be created in, and because we’re not using a subdomain, it doubles as the name of the record to create. The &lt;em&gt;aliasHostedZoneId&lt;/em&gt; is the zone ID of the S3 website endpoint domain. We’re using us-east-1 and have given the value for that region. To get the appropriate &lt;em&gt;aliasHostedZoneId&lt;/em&gt; value, see the AWS &lt;a href="https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region"&gt;documentation&lt;/a&gt;. Also note that if we changed the region this was deployed to, we’d have to update that value too.&lt;/p&gt;
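&lt;p&gt;If you wanted to parameterize this by region, a lookup table works. Sketched below in Python with only the us-east-1 values from above; other regions’ endpoints and zone IDs should be copied from the AWS documentation rather than guessed.&lt;/p&gt;

```python
# S3 website endpoint and alias hosted zone ID per region. Only us-east-1 is
# filled in here; add other regions from the AWS endpoints documentation.
S3_WEBSITE = {
    "us-east-1": {
        "aliasDNSName": "s3-website-us-east-1.amazonaws.com",
        "aliasHostedZoneId": "Z3AQBSTGFYJSTF",
    },
}

def alias_target(region):
    """Return the Route53 AliasTarget settings for an S3 website in region."""
    return S3_WEBSITE[region]
```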

&lt;h2&gt;
  
  
  Deploying
&lt;/h2&gt;

&lt;p&gt;With the serverless.yml completely written, we can now deploy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless deploy -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if we browse to &lt;a href="http://serverless-zombo.com"&gt;http://serverless-zombo.com&lt;/a&gt;, we are brought back to the internet in its glory days.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HgEj8hgl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/Serverless%2520Zombo%2520Home%2520Page.png%3Ft%3D1518484684999%26width%3D800%26height%3D500%26name%3DServerless%2520Zombo%2520Home%2520Page.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HgEj8hgl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/Serverless%2520Zombo%2520Home%2520Page.png%3Ft%3D1518484684999%26width%3D800%26height%3D500%26name%3DServerless%2520Zombo%2520Home%2520Page.png" alt="Serverless Zombo Home Page.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is just one way to create a static site on AWS using S3. It’s quick and easy, but it has limitations. For one thing, HTTPS won’t work through the DNS record we created: S3 has no SSL cert for our domain, so browsers will throw an invalid-certificate error unless you access the site via the S3 bucket URL instead. In addition, we’re limited to a single AWS region, which means the site is subject to issues in us-east-1 and we’d need to redeploy if the site were rendered unusable. S3 also isn’t the fastest at serving up web content.&lt;/p&gt;

&lt;p&gt;Using CloudFront would let us handle SSL certs and distribute our content to multiple edge locations, avoiding issues in a single region and providing quicker response times. Does that make CloudFront better? Why would you ever serve a site using S3?&lt;/p&gt;

&lt;p&gt;The answer lies in the fact that the right system to build depends on your requirements. CloudFront is also an added cost. If you don’t have an SSL requirement, you can handle the occasional blip in us-east-1, and the slightly slower (possibly unnoticeable) response times are not an issue, then this might be the perfect solution for you.&lt;/p&gt;

&lt;p&gt;If this isn’t the right solution for you, we’ll cover S3 backed CloudFront in a following blog post.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This originally appeared on the &lt;a href="https://www.serverlessops.io/blog"&gt;ServerlessOps blog&lt;/a&gt;. Visit to read more of our work!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JjbwS03H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://track.hubspot.com/__ptq.gif%3Fa%3D277116%26k%3D14%26r%3Dhttps%253A%252F%252Fwww.serverlessops.io%252Fblog%252Fstatic-websites-on-aws-s3-with-serverless-framework%26bu%3Dhttps%25253A%25252F%25252Fwww.serverlessops.io%25252Fblog%26bvt%3Drss" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JjbwS03H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://track.hubspot.com/__ptq.gif%3Fa%3D277116%26k%3D14%26r%3Dhttps%253A%252F%252Fwww.serverlessops.io%252Fblog%252Fstatic-websites-on-aws-s3-with-serverless-framework%26bu%3Dhttps%25253A%25252F%25252Fwww.serverlessops.io%25252Fblog%26bvt%3Drss" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>Serverless Framework Intro Project</title>
      <dc:creator>Tom McLaughlin</dc:creator>
      <pubDate>Tue, 30 Jan 2018 12:30:00 +0000</pubDate>
      <link>https://dev.to/tmclaughbos/serverless-framework-intro-project-192o</link>
      <guid>https://dev.to/tmclaughbos/serverless-framework-intro-project-192o</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zSQHIlId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/twitter-so-lambda-serverless-framework.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zSQHIlId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/twitter-so-lambda-serverless-framework.png" alt="twitter-so-lambda-serverless-framework.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Serverless is a cloud architecture that provides enormous benefits for operations engineers and &lt;a href="https://serverless.com/framework/"&gt;Serverless Framework&lt;/a&gt; makes it easy to get started. It’s useful for operations and devops engineers who have been writing scripts to automate parts of their environment regularly and for those just beginning to write their automation. For those who have been writing automation regularly, periodic tasks that ran on a random AWS EC2 instance can now be turned into a standalone system with automation logic and accompanying infrastructure code that is tracked via AWS CloudFormation. For those new to writing automation, serverless architecture can lower the barrier to entry for infrastructure automation.&lt;/p&gt;

&lt;p&gt;To demonstrate this, let’s build a simple service for our AWS environment. You may have a desire to track AWS console logins for reasons like compliance, auditing needs, or a desire to just have better visibility into your environment. We’ll build a Python 3 service that records console login CloudTrail events to an S3 bucket. We’ll extend this service with more features in future blog posts.&lt;/p&gt;

&lt;p&gt;The project for this blog post, aws-console-auditor, is located on GitHub. Additionally, because aws-console-auditor will evolve, a fork of the repo specifically for this blog post is also available.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ServerlessOpsIO/aws-console-auditor"&gt;aws-console-auditor&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro"&gt;serverless-aws-python3-intro&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;To get started, install the &lt;a href="https://serverless.com/framework/"&gt;Serverless Framework&lt;/a&gt;. It’s written in JavaScript and you’ll need to have NodeJS and NPM installed in order to do so.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g serverless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Serverless Framework installed, we’ll go ahead and create a project. The &lt;em&gt;serverless&lt;/em&gt; script can create projects of different types using templates. We’ll create a project using the AWS Python 3 template and also install a plugin for handling python dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless create -t aws-python3 -n serverless-aws-python3-intro -p serverless-aws-python3-intro
cd serverless-aws-python3-intro
serverless plugin install -n serverless-python-requirements
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What will be created is a &lt;em&gt;serverless.yml&lt;/em&gt; template file, a basic handler script (which we’ll discard), and a .gitignore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diagram the service
&lt;/h2&gt;

&lt;p&gt;Rather than jumping right into building our service, let’s take a moment to draw a system diagram. This may seem unconventional to some, especially for simple services, but it forces us to be sure of what we’re about to build before we start any time-consuming coding.&lt;/p&gt;

&lt;p&gt;There are several options you can use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.draw.io/"&gt;Draw.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.lucidchart.com/"&gt;Lucidchart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloudcraft.co/"&gt;Cloudcraft&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://products.office.com/en-us/visio/visio-online"&gt;Visio&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I personally use Draw.io because it gives me access to the entire AWS icon set. This lets me diagram a system with icons that are generally recognizable to people familiar with the AWS ecosystem.&lt;/p&gt;

&lt;p&gt;This is the system about to be built. Contained in the rectangle on the right is the service to be built. On the left is the workflow that will trigger the service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ALW3cCO8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/serverless-aws-python3-intro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ALW3cCO8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.serverlessops.io/hs-fs/hubfs/blog/serverless-aws-python3-intro.png" alt="serverless-aws-python3-intro.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow of the system is as follows: a user logs into the console, which generates a CloudTrail event, and that event triggers a Lambda function that writes the event data to an S3 bucket.&lt;/p&gt;
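&lt;p&gt;For reference, the event the Lambda receives looks roughly like the following. This is abbreviated and the field values are illustrative placeholders; see the CloudWatch Events console sign-in event documentation for the full schema.&lt;/p&gt;

```python
# Abbreviated shape of a console sign-in event as delivered to the Lambda.
# Values are illustrative placeholders, not real account data.
sample_event = {
    "source": "aws.signin",
    "detail-type": "AWS Console Sign In via CloudTrail",
    "detail": {
        "eventName": "ConsoleLogin",
        "userIdentity": {"type": "IAMUser", "userName": "example-user"},
    },
}
print(sample_event["source"])  # aws.signin
```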

&lt;h2&gt;
  
  
  Creating serverless.yml
&lt;/h2&gt;

&lt;p&gt;Now that we know what system we’re about to build, we can start building it. Serverless Framework provides the ability to represent the system as code (well, YAML), deploy, and manage the system. The &lt;em&gt;serverless.yml&lt;/em&gt; will need to define three things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda function (and define the event to trigger it)&lt;/li&gt;
&lt;li&gt;S3 bucket&lt;/li&gt;
&lt;li&gt;IAM policy (not pictured above) to allow the Lambda to write to the bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s go through the different sections of the &lt;em&gt;serverless.yml&lt;/em&gt; now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial Configuration
&lt;/h3&gt;

&lt;p&gt;We’ll start with some initial configuration in the &lt;em&gt;serverless.yml&lt;/em&gt; file. We’ll replace the existing serverless.yml created by the template with the following below. There’s nothing wrong with the template serverless.yml but this is just a little clearer to follow and explain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/serverless.yml#L3-L18"&gt;serverless.yml:&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;serverless-aws-python3-intro&lt;/span&gt;

&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serverless-python-requirements&lt;/span&gt;

&lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;

  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;LOG_LEVEL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INFO&lt;/span&gt;

  &lt;span class="na"&gt;iamRoleStatements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First we start by providing the name of the service. In this case, it’s called &lt;em&gt;serverless-aws-python3-intro&lt;/em&gt;. We’ll see the service name used in many of the resources created for this service.&lt;/p&gt;

&lt;p&gt;The plugins section lists a single plugin, &lt;em&gt;serverless-python-requirements&lt;/em&gt;. This plugin will handle building and installing python dependencies specified in a standard &lt;em&gt;requirements.txt&lt;/em&gt; file. We don’t actually need this currently, but it’s useful to be introduced to the plugin now. We’ll probably use it later on as we evolve this service down the road. Also, if you go off after this to write a service of your own, knowing about this plugin is useful.&lt;/p&gt;

&lt;p&gt;The provider section defines the serverless platform provider and its configuration along with other configuration for the service’s component resources. The section will create an AWS serverless system in &lt;em&gt;us-east-1&lt;/em&gt; using your locally configured &lt;a href="https://serverless.com/framework/docs/providers/aws/guide/credentials"&gt;default AWS credentials&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;stage&lt;/em&gt; attribute is how Serverless Framework namespaces deployments of the same service within the same environment. That can be used if &lt;em&gt;development&lt;/em&gt; and &lt;em&gt;production&lt;/em&gt; share a single AWS account or if you want multiple developers to be able to deploy their own system simultaneously.&lt;/p&gt;
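&lt;p&gt;As an illustration of that namespacing, deployed resource names are composed from the service, stage, and function name. The pattern below is the typical Lambda function naming, shown as an assumption rather than a guarantee; other resource types vary.&lt;/p&gt;

```python
def lambda_function_name(service, stage, function):
    # Serverless Framework typically names Lambda functions
    # <service>-<stage>-<function>, so two stages never collide.
    return f"{service}-{stage}-{function}"

print(lambda_function_name("serverless-aws-python3-intro", "dev", "WriteEventToS3"))
# serverless-aws-python3-intro-dev-WriteEventToS3
```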

&lt;p&gt;The &lt;em&gt;region&lt;/em&gt;, &lt;em&gt;stage&lt;/em&gt;, and AWS profile name can be overridden on the command line using the &lt;em&gt;--region&lt;/em&gt;, &lt;em&gt;--stage&lt;/em&gt;, and &lt;em&gt;--aws-profile&lt;/em&gt; arguments respectively. You can make region, stage, and profile more flexible using environment variables, but we’ll cover that in another blog post.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;LOG_LEVEL&lt;/em&gt; environment variable is a personal preference. This makes it easy to increase the logging level on functions for debugging purposes and decrease it when done. Below is how it’s used in our handler code.&lt;/p&gt;

&lt;p&gt;We'll leave &lt;em&gt;iamRoleStatements&lt;/em&gt; empty for now. We'll return to it after we've created our AWS resources and need to allow the Lambda function to write to the S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/handlers/write-event-to-s3.py#L13-L15"&gt;handlers/write-event-to-s3.py:&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'LOG_LEVEL'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'INFO'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;setLevel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;getLevelName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;_logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;getLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  S3 Bucket
&lt;/h3&gt;

&lt;p&gt;Instead of moving directly onto the &lt;em&gt;functions&lt;/em&gt; section, we’ll move down to the &lt;em&gt;resources&lt;/em&gt; section. This section is for adding AWS resources. If you’re familiar with CloudFormation, then good news: this section uses the same syntax as CloudFormation YAML.&lt;/p&gt;

&lt;p&gt;Adding our service's S3 bucket is trivial: just add a resource and set the &lt;em&gt;Type&lt;/em&gt; attribute. Here’s what the configuration looks like for this service’s S3 bucket. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/serverless.yml#L50-L55"&gt;serverless.yml:&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;LoginEventS3Bucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::S3::Bucket&lt;/span&gt;
      &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;AccessControl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Private&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, the AccessControl policy is &lt;em&gt;Private&lt;/em&gt;, but it’s worth setting the property explicitly, so we’ve added it. Serverless Framework will name the bucket based on the service and resource name. S3 bucket names are globally unique across AWS accounts, and Serverless Framework will append a random string to help ensure you won’t collide with a bucket in a different account that has deployed the same service.&lt;/p&gt;

&lt;p&gt;How will your Lambda function know to write to this S3 bucket if the name is generated with a random string? We’ll show that later.&lt;/p&gt;
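&lt;p&gt;The generated name looks something like what the sketch below builds: the service and stage, the lowercased logical resource ID, and a random suffix. This illustrates the shape of the name, not CloudFormation’s exact algorithm.&lt;/p&gt;

```python
import secrets

def generated_bucket_name(service, stage, logical_id):
    """Illustrate the shape of a CloudFormation-generated S3 bucket name:
    service/stage, lowercased logical resource ID, and a random suffix."""
    suffix = secrets.token_hex(6)
    return f"{service}-{stage}-{logical_id.lower()}-{suffix}"
```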

&lt;h3&gt;
  
  
  IAM Role Statements
&lt;/h3&gt;

&lt;p&gt;All functions have an IAM role, and in this example, the role will include a policy that allows our function to write to the S3 bucket. IAM roles can be defined two different ways: in the &lt;em&gt;provider&lt;/em&gt; section under &lt;em&gt;iamRoleStatements&lt;/em&gt; or in the &lt;em&gt;resources&lt;/em&gt; as an AWS::IAM::Role section.&lt;/p&gt;

&lt;p&gt;Defining IAM roles in the provider section is the commonly accepted way to define IAM roles and assign permissions with Serverless Framework. However, a single IAM role will be created for and used by all Lambda functions in the service. In this example, that is not an issue. However in larger services that have chosen a monorepo approach to code organization, you may want to exercise tighter control and create a role per function. The decision is up to you.&lt;/p&gt;

&lt;p&gt;Our &lt;em&gt;iamRoleStatements&lt;/em&gt; in the &lt;em&gt;provider&lt;/em&gt; section is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/serverless.yml#L18-L28"&gt;serverless.yml:&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;iamRoleStatements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
      &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;s3:PutObject&lt;/span&gt;
      &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Fn::Join:&lt;/span&gt;
          &lt;span class="s"&gt;- '/'&lt;/span&gt;
          &lt;span class="s"&gt;- - Fn::GetAtt:&lt;/span&gt;
            &lt;span class="s"&gt;- LoginEventS3Bucket&lt;/span&gt;
            &lt;span class="s"&gt;- Arn&lt;/span&gt;
          &lt;span class="s"&gt;- '*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The single IAM role statement gives the ability to write to the S3 bucket. Rather than giving the name of the S3 bucket, which will be auto generated on deployment, we use the CloudFormation built-in function to &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html"&gt;Fn::GetAtt&lt;/a&gt; to get the ARN of the bucket by resource ID.&lt;/p&gt;

&lt;h3&gt;
  
  
  Functions
&lt;/h3&gt;

&lt;p&gt;With supporting resources in place, we can start adding to the &lt;em&gt;functions&lt;/em&gt; section of &lt;em&gt;serverless.yml&lt;/em&gt;. Below is our function definition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/serverless.yml#L20-L36"&gt;serverless.yml:&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;WriteEventToS3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;handlers/write-event-to-s3.handler&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;login&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;event&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;S3"&lt;/span&gt;
    &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python3.6&lt;/span&gt;
    &lt;span class="na"&gt;memorySize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;128&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15&lt;/span&gt;
    &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WriteEventToS3&lt;/span&gt;
    &lt;span class="na"&gt;events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html#console_event_type&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cloudwatchEvent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;event&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;aws.signin"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;S3_BUCKET_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;Ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoginEventS3Bucket&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The value for &lt;em&gt;handler&lt;/em&gt; represents the &lt;em&gt;handler()&lt;/em&gt; function in the file &lt;em&gt;handlers/write-event-to-s3.py&lt;/em&gt; in this repo. The runtime is python3.6, and the function has 128MB of memory allocated with a 15-second execution timeout. The &lt;em&gt;role&lt;/em&gt; value is the IAM role resource name (not the name of the IAM role to be created) in the &lt;em&gt;resources&lt;/em&gt; section.&lt;/p&gt;

&lt;p&gt;In the &lt;em&gt;environment&lt;/em&gt; section, we set an environment variable called &lt;em&gt;S3_BUCKET_NAME&lt;/em&gt; with a value of the S3 bucket’s name. Since the name is autogenerated, the CloudFormation built-in function &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html"&gt;Ref&lt;/a&gt; is used to get the bucket’s name. This environment value will be &lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/handlers/write-event-to-s3.py#L18"&gt;checked by the handler’s code&lt;/a&gt; to know what S3 bucket to use.&lt;/p&gt;
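&lt;p&gt;On the handler side, this is just an environment lookup; a minimal sketch of the idea (the helper name is ours, not from the repo), where failing fast keeps misconfiguration obvious:&lt;/p&gt;

```python
import os

def get_bucket_name(environ=None):
    """Return the destination bucket set via serverless.yml's environment
    section, raising immediately if the deployment didn't provide it."""
    environ = os.environ if environ is None else environ
    bucket = environ.get("S3_BUCKET_NAME")
    if not bucket:
        raise RuntimeError("S3_BUCKET_NAME is not set")
    return bucket
```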

&lt;p&gt;Lastly, there’s the &lt;em&gt;event&lt;/em&gt;. This is where you define what will trigger the function. In this case a single event type, a CloudWatch event from the event source &lt;em&gt;aws.signin&lt;/em&gt;, will trigger this function.&lt;/p&gt;
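For reference, a console sign-in event delivered to the function looks roughly like the following. This is a trimmed, hypothetical sample: the field names follow the CloudWatch Events / CloudTrail ConsoleLogin shape, but the values are invented for illustration.

```python
# Trimmed, hypothetical sample of a console sign-in event. Field names
# follow the CloudWatch Events / CloudTrail ConsoleLogin shape; the
# values here are made up.
sample_event = {
    "source": "aws.signin",
    "detail-type": "AWS Console Sign In via CloudTrail",
    "detail": {
        "eventSource": "signin.amazonaws.com",
        "eventName": "ConsoleLogin",
        "eventID": "11111111-2222-3333-4444-555555555555",
        "eventTime": "2018-05-01T10:30:00Z",
        "responseElements": {"ConsoleLogin": "Success"},
    },
}

# The handler works only with the CloudTrail data nested under "detail".
event_detail = sample_event.get("detail")
```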

&lt;h3&gt;
  
  
  Putting it all together
&lt;/h3&gt;

&lt;p&gt;With serverless.yml put together, the system is deployable, even though it isn’t yet functional (there’s no handler code).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sls deploy -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Handler
&lt;/h2&gt;

&lt;p&gt;Now let’s dive into the handler. When a console login event is generated by CloudWatch, it will call the &lt;em&gt;handler()&lt;/em&gt; function in the file &lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/handlers/write-event-to-s3.py"&gt;handlers/write-event-to-s3.py&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before we write the &lt;em&gt;handler()&lt;/em&gt; function itself, let’s add some code that will be executed only on the first invocation of the function. After a Lambda function is executed, the AWS infrastructure keeps the function instance warm for subsequent requests. Python objects and variables that don’t need to be initialized on each invocation, e.g. logging and boto3 objects or variables set from the shell environment, can be initialized outside of &lt;em&gt;handler()&lt;/em&gt; to speed up subsequent invocations of the function.&lt;/p&gt;
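This container-reuse behavior is easy to picture with a toy module (not the actual handler): module-level statements run once per container, while handler() runs on every invocation.

```python
_setup_calls = 0

def _expensive_setup():
    """Stand-in for one-time work such as creating boto3 clients or
    reading configuration from the environment."""
    global _setup_calls
    _setup_calls += 1
    return {"ready": True}

# Module level: executed once, when the container cold starts.
_resources = _expensive_setup()

def handler(event, context):
    # Per-invocation work reuses the already-initialized resources.
    return {"setup_calls": _setup_calls, "ready": _resources["ready"]}

# Two "warm" invocations share the same one-time initialization.
first = handler({}, None)
second = handler({}, None)
# both report setup_calls == 1
```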

&lt;p&gt;The handler should have some logging. It gets the &lt;em&gt;LOG_LEVEL&lt;/em&gt; environment variable value, set in the provider section of &lt;em&gt;serverless.yml&lt;/em&gt;, and creates a logging object at that level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/handlers/write-event-to-s3.py#L13-L15"&gt;handlers/write-event-to-s3.py:&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'LOG_LEVEL'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'INFO'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;setLevel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;getLevelName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log_level&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="n"&gt;_logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;getLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, initialize the boto3 S3 client object and get the name of the S3 bucket to use from the shell environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/handlers/write-event-to-s3.py#L17-L18"&gt;handlers/write-event-to-s3.py:&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;s3_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'s3'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="n"&gt;s3_bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'S3_BUCKET_NAME'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The handler function is executed on every function invocation. It’s passed two variables by AWS: the &lt;em&gt;event&lt;/em&gt; that triggered the Lambda and the &lt;em&gt;context&lt;/em&gt; of the invocation. The context variable stores some useful information, but it isn’t needed in this example, so we’ll ignore it.&lt;/p&gt;

&lt;p&gt;The handler function is pretty straightforward. If &lt;em&gt;LOG_LEVEL&lt;/em&gt; were set to ‘DEBUG’, the function would log the event received. It then gets the CloudWatch event detail and calls a function that returns an S3 object key path derived from that detail. Next, it writes the event detail to the S3 bucket. A log message at our standard logging level, INFO, records information about the event, which will automatically be picked up by CloudWatch. Finally, the function returns the S3 response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ServerlessOpsIO/serverless-aws-python3-intro/blob/master/handlers/write-event-to-s3.py#L37-L65"&gt;handlers/write-event-to-s3.py&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="s"&gt;'''Lambda entry point.'''&lt;/span&gt;
    &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Event received: {}'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;

    &lt;span class="c1"&gt;# We're going to ignorethe CloudWatch event data and work with just the
&lt;/span&gt;    &lt;span class="c1"&gt;# CloudTrail data.
&lt;/span&gt;    &lt;span class="n"&gt;event_detail&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'detail'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Get our S3 object name from the CloudTrail data
&lt;/span&gt;    &lt;span class="n"&gt;s3_object_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_get_s3_object_key_by_event_detail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event_detail&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Write the event to S3.
&lt;/span&gt;    &lt;span class="n"&gt;s3_resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;s3_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;put_object&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;ACL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'private'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event_detail&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="n"&gt;Bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;s3_bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;s3_object_key&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="s"&gt;'Console login event {event_id} at {event_time} logged to: {s3_bucket}/{s3_object_key}'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;event_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;event_detail&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'eventID'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;event_time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;event_detail&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'eventTime'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;s3_bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;s3_bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;s3_object_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;s3_object_key&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;s3_resp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
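The _get_s3_object_key_by_event_detail() helper isn’t reproduced above; the real implementation lives in the linked repo. One plausible sketch, assuming a date-partitioned key built from the event’s eventTime and eventID (the repo’s actual scheme may differ), looks like this:

```python
import datetime

def _get_s3_object_key_by_event_detail(event_detail):
    '''Hypothetical sketch: derive a date-partitioned S3 object key from
    the CloudTrail event detail. The repo's actual helper may use a
    different key scheme.'''
    event_time = event_detail.get('eventTime')  # e.g. '2018-05-01T10:30:00Z'
    event_id = event_detail.get('eventID')
    dt = datetime.datetime.strptime(event_time, '%Y-%m-%dT%H:%M:%SZ')
    return '{}/{:02d}/{:02d}/{}'.format(dt.year, dt.month, dt.day, event_id)

key = _get_s3_object_key_by_event_detail({
    'eventTime': '2018-05-01T10:30:00Z',
    'eventID': 'abc-123',
})
# key == '2018/05/01/abc-123'
```

Partitioning keys by date keeps per-prefix listings small and makes it easy to pull a single day’s logins later.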



&lt;p&gt;Once the code is in place, the service can be deployed again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sls deploy -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, every time someone logs into the AWS Console, the login information will be recorded. You can check CloudWatch Logs for a brief summary and then S3 for the full login details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We’re now collecting AWS console login events from our environment, and storing them in S3 to give us an audit trail of who is accessing the console. But you probably want more. You may want to send a Slack notification to a channel so people are aware. You may want to send a PagerDuty notification if the login does not match certain criteria. How about searching through the event data to find trends in your logins? We can extend this service to add new features and we’ll do that in future blog posts.&lt;/p&gt;

&lt;p&gt;If you want to see the code used in this blog, check it out on GitHub here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/tmclaugh/serverless-aws-python3-intro"&gt;https://github.com/tmclaugh/serverless-aws-python3-intro&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or take a look at AWS Console Auditor here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/tmclaugh/aws-console-auditor"&gt;https://github.com/tmclaugh/aws-console-auditor&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;This originally appeared on the &lt;a href="https://www.serverlessops.io/blog"&gt;ServerlessOps blog&lt;/a&gt;. Visit to read more of our work!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UoEK_5Qu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://track.hubspot.com/__ptq.gif%3Fa%3D277116%26k%3D14%26r%3Dhttps%253A%252F%252Fwww.serverlessops.io%252Fblog%252Fserverless-intro-project%26bu%3Dhttps%25253A%25252F%25252Fwww.serverlessops.io%25252Fblog%26bvt%3Drss" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UoEK_5Qu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://track.hubspot.com/__ptq.gif%3Fa%3D277116%26k%3D14%26r%3Dhttps%253A%252F%252Fwww.serverlessops.io%252Fblog%252Fserverless-intro-project%26bu%3Dhttps%25253A%25252F%25252Fwww.serverlessops.io%25252Fblog%26bvt%3Drss" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
  </channel>
</rss>
