<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Priyanka Bisht</title>
    <description>The latest articles on DEV Community by Priyanka Bisht (@priyanka_bisht_567bb3341b).</description>
    <link>https://dev.to/priyanka_bisht_567bb3341b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F600562%2F00826f84-7c6b-4d3c-82b7-02db0883103d.jpg</url>
      <title>DEV Community: Priyanka Bisht</title>
      <link>https://dev.to/priyanka_bisht_567bb3341b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/priyanka_bisht_567bb3341b"/>
    <language>en</language>
    <item>
      <title>Embrace AWS Well Architected Framework</title>
      <dc:creator>Priyanka Bisht</dc:creator>
      <pubDate>Mon, 29 Mar 2021 13:43:40 +0000</pubDate>
      <link>https://dev.to/priyanka_bisht_567bb3341b/embrace-aws-well-architected-framework-3jg7</link>
      <guid>https://dev.to/priyanka_bisht_567bb3341b/embrace-aws-well-architected-framework-3jg7</guid>
      <description>&lt;p&gt;AWS created the Well-Architected Framework: a set of criteria guiding reviews of a given workload. The framework is directive, but not prescriptive, as to how to solve the problem, and should be used to ensure that all key aspects of a workload’s lifecycle, security, resilience and operability are considered.&lt;/p&gt;

&lt;p&gt;The AWS Well-Architected Framework enables you to review and improve your cloud-based architectures and better understand the business impact of your design decisions. We address general design principles as well as specific best practices and guidance in five conceptual areas that we define as the pillars of the Well-Architected Framework.&lt;br&gt;
(&lt;a href="https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf"&gt;https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;The framework defines five key pillars against which to review your workload:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational Excellence&lt;/strong&gt; – The ability to run and monitor systems to deliver business value and continually improve supporting processes and procedures. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt; – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliability&lt;/strong&gt; – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance Efficiency&lt;/strong&gt; – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Optimisation&lt;/strong&gt; – The ability to run systems to deliver business value at the lowest price point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational Excellence&lt;/strong&gt;&lt;br&gt;
The Operational Excellence pillar is focused on the ability to run and monitor systems to deliver business value, while still retaining a focus on improving processes and procedures. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nYYaREYI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdgvb13apyfkhhjsdvzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nYYaREYI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdgvb13apyfkhhjsdvzj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Operational Excellence pillar defines six design principles for the cloud:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Perform operations as code – you should apply mature software engineering practices to your infrastructure and operations systems. By performing operations as code, you limit human error and enable consistent responses to events. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Annotate documentation – Pre-cloud documentation is hand crafted and rarely in sync with the systems it describes. In environments such as the cloud, our focus should be on automating documentation as part of the build and deploy process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make frequent, small, reversible changes – The pace of change is rapid, and so our changes need to be smaller, faster, simpler to describe and safer to apply. By enabling safe reversals we can limit (or remove) impact to end customers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refine operations procedures frequently – As systems are put into use, we learn how they behave in the real world. These lessons need to be captured and shared to ensure we are learning from all opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Anticipate failure – Recognise that failure is always present, but in the cloud it is often more recoverable than in traditional infrastructure. Invest time looking at what might fail, then validate these scenarios to understand whether your procedures and processes are resilient to failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn from all operational failures – Follow on from anticipating failure, make sure that there is a shared approach to learning from failures that do occur.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;br&gt;
The Security pillar is focused on the ability to protect information, systems and assets while still enabling the regular delivery of value.&lt;br&gt;
The cloud is a game changer for security.  Through the introduction of automation and Infrastructure as Code there are now an increasing number of inspection points and controls that can protect a system from malicious or inadvertent compromise and exploitation.&lt;/p&gt;

&lt;p&gt;There are seven design principles in the Security pillar:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Implement a strong identity foundation – Use the principle of least privilege to ensure users and systems have access only to what they need. By connecting this with a central identity management service you can reduce the risk of leaked or exfiltrated credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable traceability – Monitoring, alerting and auditing of system changes in real time is a key architectural design pattern for resilient and secure systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defend in depth – You no longer need to focus on security at a single point, such as an edge network firewall, but can introduce security controls and defence points to all layers of a workload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate security best practices – With an increasing number of changes defined “as code” you enable a more highly auditable system.  Continual compliance strategies evolve through automating security, and are more achievable in a cloud environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Protect data in transit and at rest – Nearly all services AWS provides allow strong encryption at rest, and for most situations configuration and management is straightforward.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keep people away from data – Review your architecture to ensure that you reduce or eliminate the need for direct access to data. Automating data activities increases the auditability of changes and of the impact of data operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prepare for security events – Being prepared requires consideration of what can go wrong and having an incident management process that suits your organisational needs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Reliability&lt;/strong&gt;&lt;br&gt;
The Reliability pillar looks at how a system responds to failure, be that infrastructure or service disruption.  Having a resilient system is also key to ensure your service can scale up to meet increasing capacity demands, and scale down without impacting customer experience.&lt;br&gt;
The Reliability pillar defines five key design principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Test recovery procedures – Testing failure modes, and testing at scale, is difficult to do pre-cloud, and is regularly put in the too-hard basket; the time required to set up a valid test environment can be prohibitive. Ensuring your processes focus on understood and practiced recovery procedures builds confidence in the design.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate recovery from failure – Many failures can be detected and resolved through automation available from the monitoring platform. More advanced detection approaches can even preempt failure and begin early mitigation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scale horizontally to increase aggregate system availability – By replacing one large asset with multiple smaller ones, you spread the risk of failure to a smaller percentage of the entire system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stop guessing capacity – Resource starvation and contention are regular causes of failure, but measuring actual usage puts you in a position to identify and remediate key bottlenecks sooner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Manage change in automation – Deliver all changes through automation, so that recovery and remediation activities can be regularly assessed and validated through the same mechanism that delivers change.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Performance Efficiency&lt;/strong&gt;&lt;br&gt;
The Performance Efficiency pillar focuses on the ability to use the available resources efficiently and to maintain that efficiency as demands change.&lt;br&gt;
The Performance Efficiency pillar looks at five key design principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Democratize advanced technologies – Leverage AWS’ platform experience by consuming services in more of a product or service architecture. By pushing complexity into AWS’ responsibility domain, you can access cutting-edge complex solutions as a service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go global in minutes – Ensure your design can take advantage of AWS’s global footprint to deliver lower latency and better experience for your customers all around the globe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use serverless architectures – Through reducing the level of management you have to apply in order to deliver a service, you can repurpose effort into higher value activities, and lower the overall transaction cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Experiment more often – Things constantly and rapidly change; ensuring you have a process to review and evaluate the changes will allow you to exploit the efficiencies of a changing cloud environment while identifying and managing risks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Develop mechanical sympathy – Align your use of technology to the needs of what you are trying to achieve.  It doesn’t have to be a one size fits all approach.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Cost Optimisation&lt;/strong&gt;&lt;br&gt;
The Cost Optimisation pillar addresses a review of design and best practices to enable a workload to deliver business value for the lowest appropriate cost.&lt;br&gt;
The Cost Optimisation pillar focuses on five design principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Adopt a consumption model – If you are not using it, turn it off or delete it. Over 70% of the hours in a week are outside standard business hours.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Measure overall efficiency – Ensure you are measuring the business value of a workload to review against the cost of the system to determine overall efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stop spending money on data centre operations – outsource the lowest common denominators, those business expenses which don’t differentiate you from your competition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analyse and attribute expenditure – If you can’t measure it, you can’t manage it. The same goes for cost: if you can’t attribute the source of expenditure, you can’t assess whether it is appropriate or excessive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use managed and application level services to reduce cost of ownership – Total cost of ownership of a service is more than just the hardware run cost. By leveraging the appropriate managed services you can drastically reduce the operational costs and risks.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
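&lt;p&gt;The “over 70%” figure in the consumption-model principle is easy to verify with a quick sketch (assuming a five-day, eight-hour business week, for illustration):&lt;/p&gt;

```python
# Hours in a full week vs. a five-day, eight-hour business week
# (assumed schedule, for illustration).
TOTAL_HOURS = 24 * 7        # 168 hours in a week
BUSINESS_HOURS = 5 * 8      # 40 business hours

off_hours_share = (TOTAL_HOURS - BUSINESS_HOURS) / TOTAL_HOURS
print(f"{off_hours_share:.0%} of weekly hours are outside business hours")
# prints "76% of weekly hours are outside business hours"
```

So a development instance that only runs during business hours avoids roughly three quarters of its weekly running time.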

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>wellarchitected</category>
    </item>
    <item>
      <title>Infrastructure as Code for Python Developer- Part 1 - Troposphere</title>
      <dc:creator>Priyanka Bisht</dc:creator>
      <pubDate>Mon, 29 Mar 2021 13:04:21 +0000</pubDate>
      <link>https://dev.to/priyanka_bisht_567bb3341b/infrastructure-as-code-for-python-developer-part-1-troposphere-51lp</link>
      <guid>https://dev.to/priyanka_bisht_567bb3341b/infrastructure-as-code-for-python-developer-part-1-troposphere-51lp</guid>
      <description>&lt;p&gt;In the AWS world, Infrastructure as Code is not a new concept, but it remains a hot topic, as a great deal of innovation has happened in this area.&lt;br&gt;
After working with CloudFormation templates for a while, one notices several shortcomings that make templates long, clunky, and nigh unreadable. So what are the alternatives through a Python developer’s lens? &lt;a href="https://github.com/cloudtools/troposphere"&gt;Troposphere&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/work-with-cdk-python.html"&gt;AWS CDK&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Troposphere Python library allows for easier creation of Amazon CloudFormation JSON by writing Python code to describe the AWS resources. This effectively lets you define your infrastructure programmatically, without being as limited as you are with plain CloudFormation.&lt;/p&gt;

&lt;p&gt;Let's install it:&lt;br&gt;
&lt;code&gt;pip install troposphere&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The Troposphere team has some great examples in their GitHub repository. I suggest taking a look there and working from their examples.&lt;/p&gt;
&lt;h4&gt;
  
  
  CloudFormation
&lt;/h4&gt;

&lt;p&gt;CloudFormation is a managed service by AWS: the user simply writes a YAML or JSON file describing all the infrastructure, uploads it to S3 or directly to CloudFormation, and the service takes care of running it safely and statefully.&lt;br&gt;
However, CloudFormation has its own drawbacks: YAML files are often very verbose, difficult to write and debug, and do not support advanced logic and loops.&lt;br&gt;
Let’s look at the raw CloudFormation template to create a VPC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Description: AWS CloudFormation Template to create a VPC
Parameters:
  SftpCidr:
    Description: SftpCidr
    Type: String
Resources:
  SftpVpc:
    Properties:
      CidrBlock: !Ref 'SftpCidr'
      EnableDnsHostnames: 'true'
      EnableDnsSupport: 'true'
    Type: AWS::EC2::VPC
  RouteTablePrivate:
    Properties:
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::RouteTable
  PrivateSubnet1:
    Properties:
      AvailabilityZone: !Select
        - 0
        - !GetAZs
          Ref: AWS::Region
      CidrBlock: !Select
        - 4
        - !Cidr
          - !GetAtt 'SftpVpc.CidrBlock'
          - 16
          - 8
      MapPublicIpOnLaunch: 'false'
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::Subnet
  PrivateSubnet2:
    Properties:
      AvailabilityZone: !Select
        - 1
        - !GetAZs
          Ref: AWS::Region
      CidrBlock: !Select
        - 5
        - !Cidr
          - !GetAtt 'SftpVpc.CidrBlock'
          - 16
          - 8
      MapPublicIpOnLaunch: 'false'
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::Subnet
  PrivateSubnet3:
    Properties:
      AvailabilityZone: !Select
        - 2
        - !GetAZs
          Ref: AWS::Region
      CidrBlock: !Select
        - 6
        - !Cidr
          - !GetAtt 'SftpVpc.CidrBlock'
          - 16
          - 8
      MapPublicIpOnLaunch: 'false'
      VpcId: !Ref 'SftpVpc'
    Type: AWS::EC2::Subnet
  SubnetPrivateToRouteTableAttachment1:
    Properties:
      RouteTableId: !Ref 'RouteTablePrivate'
      SubnetId: !Ref 'PrivateSubnet1'
    Type: AWS::EC2::SubnetRouteTableAssociation
  SubnetPrivateToRouteTableAttachment2:
    Properties:
      RouteTableId: !Ref 'RouteTablePrivate'
      SubnetId: !Ref 'PrivateSubnet2'
    Type: AWS::EC2::SubnetRouteTableAssociation
  SubnetPrivateToRouteTableAttachment3:
    Properties:
      RouteTableId: !Ref 'RouteTablePrivate'
      SubnetId: !Ref 'PrivateSubnet3'
    Type: AWS::EC2::SubnetRouteTableAssociation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
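&lt;p&gt;The &lt;code&gt;!Select&lt;/code&gt;/&lt;code&gt;!Cidr&lt;/code&gt; pair above splits the VPC block into 16 subnets with 8 host bits each (i.e. /24 networks) and picks blocks 4, 5 and 6 for the private subnets. A sketch with Python’s ipaddress module (using an illustrative 10.0.0.0/16 for SftpCidr, which the template leaves as a parameter) reproduces the computation:&lt;/p&gt;

```python
import ipaddress

# Mimic Fn::Cidr [GetAtt SftpVpc.CidrBlock, 16, 8]: carve the VPC block
# into 16 subnets with 8 host bits each (/24 networks), then Fn::Select
# indices 4, 5 and 6 for the three private subnets.
vpc_block = ipaddress.ip_network("10.0.0.0/16")   # illustrative SftpCidr
candidates = list(vpc_block.subnets(new_prefix=32 - 8))[:16]
private_subnets = [str(candidates[i]) for i in (4, 5, 6)]
print(private_subnets)   # ['10.0.4.0/24', '10.0.5.0/24', '10.0.6.0/24']
```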



&lt;h4&gt;
  
  
  Troposphere
&lt;/h4&gt;

&lt;p&gt;Troposphere is really simple: it is just a Python DSL which maps CloudFormation entities (all of them!) to Python classes and the other way round. This gives us a very simple way to create a template that looks exactly like we want, but is generated through a high-level, easily maintainable language. Furthermore, Python IDEs will help us fix problems without even running the YAML template, and the compilation step to YAML will break if we create inconsistent references. The Troposphere script which generates the template above is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import troposphere.ec2 as vpc

template = Template()
template.set_description("AWS CloudFormation Template to create a VPC")

sftp_cidr = template.add_parameter(
        Parameter('SftpCidr', Type='String', Description='SftpCidr')
    )

vpc_sftp = template.add_resource(vpc.VPC(
        'SftpVpc',
        CidrBlock=Ref(sftp_cidr),
        EnableDnsSupport=True,
        EnableDnsHostnames=True,
    ))

private_subnet_route_table = template.add_resource(vpc.RouteTable(
        'RouteTablePrivate',
        VpcId=Ref(vpc_sftp)
    ))

for ii in range(3):
    private_subnet = template.add_resource(vpc.Subnet(
        'PrivateSubnet' + str(ii + 1),
        VpcId=Ref(vpc_sftp),
        MapPublicIpOnLaunch=False,
        AvailabilityZone=Select(ii, GetAZs(Ref(AWS_REGION))),
        CidrBlock=Select(ii + 4, Cidr(GetAtt(vpc_sftp, 'CidrBlock'), 16, 8))
    ))
    private_subnet_attachment = template.add_resource(vpc.SubnetRouteTableAssociation(
        'SubnetPrivateToRouteTableAttachment' + str(ii + 1),
        SubnetId=Ref(private_subnet),
        RouteTableId=Ref(private_subnet_route_table)
    ))

print(template.to_yaml())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The YAML is readily readable and understandable even though it was automatically generated by a Troposphere-based script. As can immediately be seen, most of it is duplicated, since we created three subnets with their attachments to a routing table. &lt;/p&gt;

&lt;p&gt;Running this script after installing Troposphere (pip install troposphere) will print the CloudFormation YAML shown above. As you can see, the Python code is much more compact and easier to understand. Furthermore, since Troposphere maps all the native CloudFormation YAML functions (e.g. Ref, Join, GetAtt, etc.), we don’t even need to learn anything new: every existing CloudFormation template can easily be converted into a Troposphere template.&lt;/p&gt;
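&lt;p&gt;The deduplication the loop buys is easy to see even without Troposphere installed. Here is a plain-Python sketch (hypothetical, not the library’s API) that builds the same six resources as dictionaries, the way the loop above avoids repeating the YAML by hand:&lt;/p&gt;

```python
# Hypothetical sketch: generate the three subnet resources and their
# route-table associations as plain dicts from one loop body.
resources = {}
for i in range(3):
    n = str(i + 1)
    resources[f"PrivateSubnet{n}"] = {
        "Type": "AWS::EC2::Subnet",
        "Properties": {
            "VpcId": {"Ref": "SftpVpc"},
            "MapPublicIpOnLaunch": "false",
            # Fn::Select / Fn::GetAZs / Fn::Cidr omitted for brevity
        },
    }
    resources[f"SubnetPrivateToRouteTableAttachment{n}"] = {
        "Type": "AWS::EC2::SubnetRouteTableAssociation",
        "Properties": {
            "RouteTableId": {"Ref": "RouteTablePrivate"},
            "SubnetId": {"Ref": f"PrivateSubnet{n}"},
        },
    }

print(sorted(resources))   # six resource logical IDs from one loop body
```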

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Troposphere is comparable to AWS CloudFormation templates, but in Python, offering the features of a full programming language. It is a very simple way to reap all the advantages of CloudFormation together with the abstraction level provided by a modern programming language, and it greatly simplifies CloudFormation code development and deployment.&lt;br&gt;
In the next part, I'll cover the AWS CDK.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Cost Optimization Practices</title>
      <dc:creator>Priyanka Bisht</dc:creator>
      <pubDate>Sun, 21 Mar 2021 07:13:58 +0000</pubDate>
      <link>https://dev.to/priyanka_bisht_567bb3341b/aws-cost-optimization-practices-5dem</link>
      <guid>https://dev.to/priyanka_bisht_567bb3341b/aws-cost-optimization-practices-5dem</guid>
      <description>&lt;p&gt;Amazon Web Services is probably the biggest IaaS provider and a formidable cloud computing resource. Apart from having an amazing pricing policy that users adore, AWS actually releases documents on how to optimize server costs and how to use resources efficiently to make the most of what you use and pay for. On the Amazon Web Services (AWS) cloud, the pricing philosophy is simple: at the end of each month, you pay only for what you use, and you can start or stop using a product at any time. Let’s take a look at cost optimization on AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Few AWS Cost Optimization Best Practices:
&lt;/h2&gt;

&lt;h4&gt;
  
  
  A. Pay as you go:
&lt;/h4&gt;

&lt;p&gt;You might have heard this term a lot, especially if you are a continuous user of the AWS cloud. Pay as you go is a simple concept – no minimum commitments or long-term contracts required. You replace your upfront capital expense with a low variable cost and pay only for what you use. There is no need to pay upfront for excess capacity or get penalized for under-planning; this is one of the foremost service-side cost optimizations embedded in the pricing policy of AWS.&lt;/p&gt;

&lt;h4&gt;
  
  
  B. Pay less when you reserve:
&lt;/h4&gt;

&lt;p&gt;For certain products, you can invest in reserved capacity. In that case, you pay a low upfront fee and get a significantly discounted hourly rate, which results in overall savings between 42% and 71% (depending on the type of instance you reserve) over equivalent on-demand capacity.&lt;/p&gt;
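&lt;p&gt;A worked example makes the discount concrete. Assume an on-demand instance at $0.10/hour versus a one-year partial-upfront reservation at $200 upfront plus $0.03/hour (illustrative numbers, not real AWS pricing):&lt;/p&gt;

```python
HOURS_PER_YEAR = 24 * 365                  # 8760 hours

on_demand = 0.10 * HOURS_PER_YEAR          # pay per hour, no commitment
reserved = 200 + 0.03 * HOURS_PER_YEAR     # upfront fee + discounted rate

saving = 1 - reserved / on_demand
print(f"on-demand ${on_demand:.2f} vs reserved ${reserved:.2f}: "
      f"{saving:.0%} saved")
# prints "on-demand $876.00 vs reserved $462.80: 47% saved"
```

With these assumed rates the reservation lands inside the 42%–71% savings band quoted above; the exact figure depends on the instance type and term you choose.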

&lt;h4&gt;
  
  
  C. EC2 reserved instances optimization:
&lt;/h4&gt;

&lt;p&gt;This check reviews your Amazon Elastic Compute Cloud (Amazon EC2) computing consumption history and calculates an optimal number of Partial Upfront Reserved Instances. Recommendations are based on the previous calendar month’s hour-by-hour usage aggregated across all consolidated billing accounts. It is an important aspect of cost optimization that lets you estimate the hours of usage you will need this month based on the previous month’s aggregate.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jJLgNL4U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrl3bfwfwufe4bwjr4o2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jJLgNL4U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrl3bfwfwufe4bwjr4o2.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  D. Low Utilization Amazon EC2 Instances:
&lt;/h4&gt;

&lt;p&gt;This utility checks for EC2 instances that are running at less than 10% capacity or usage time. It reports instances that were running at any time during the last 14 days and alerts you if the daily CPU utilization was 10% or less and network I/O was 5 MB or less on 4 or more days. Running instances generate hourly usage charges even when idle.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NhvCH--G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxcxgcaeyl2e2xvqqrzc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NhvCH--G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxcxgcaeyl2e2xvqqrzc.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  E. Underutilized Amazon EBS Volumes:
&lt;/h4&gt;

&lt;p&gt;This check examines Amazon Elastic Block Store (Amazon EBS) volume configurations and warns when volumes appear to be underused. Charges begin when a volume is created. If a volume remains unattached or has very low write activity (excluding boot volumes) for a period of time, the volume is probably not being used and can be discarded.&lt;/p&gt;

&lt;p&gt;If none of these utilities and pricing options works for you, you can go for the custom pricing option available through AWS customer support. You can use the AWS Simple Monthly Calculator to estimate your monthly bill. The calculator provides a per-service cost breakdown as well as an aggregate monthly estimate, and you can also use it to see an estimate and breakdown of costs for common solutions.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cost</category>
      <category>awscost</category>
    </item>
    <item>
      <title>Audit Resources With AWS Config</title>
      <dc:creator>Priyanka Bisht</dc:creator>
      <pubDate>Sun, 21 Mar 2021 06:57:32 +0000</pubDate>
      <link>https://dev.to/priyanka_bisht_567bb3341b/audit-resources-with-aws-config-42g0</link>
      <guid>https://dev.to/priyanka_bisht_567bb3341b/audit-resources-with-aws-config-42g0</guid>
      <description>&lt;p&gt;Rapid infrastructure deployment, scaling with just a few clicks, paying only for what you actually use, and offloading the responsibility of managing services to your cloud provider while focusing your efforts on your product: these are just a few of the benefits the cloud offers.&lt;br&gt;
Given the need for placing increased attention on security (which should always be a critical factor to consider) and compliance with required standards, it is important to understand AWS Config which can help you audit these aspects of your business. If you work with Amazon’s public cloud, it’s a service that is available to you and this post will look at what it is, what it does, and why you should use it in your Amazon cloud environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is AWS Config?
&lt;/h2&gt;

&lt;p&gt;AWS Config is a service created specifically for assessing, monitoring, and auditing configuration changes within the AWS cloud by using various rules. It is a fully managed service, and it works by continuously recording resource configurations to a chosen S3 bucket and comparing them to the desired state. You can look at detailed configuration histories, review these configuration changes, and, most importantly, respond to anything that does not match the predefined rules. Whatever your compliance standards or security requirements might be, AWS Config can be of great use to you.&lt;br&gt;
AWS Config comes with a lot of built-in rules. They are easily searchable; all you have to do is pick a rule to be enforced.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FLfmDGWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ij2xkf1t2nbr7s4q0d8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FLfmDGWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ij2xkf1t2nbr7s4q0d8f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The evaluation of these rules can happen on a configuration change or be set up to run periodically at a scheduled time. Additionally, you can take a remediation action. This can be anything from sending a message to an SNS topic telling you that the change has taken place to something more active, like starting or even terminating a non-compliant instance.&lt;/p&gt;

&lt;p&gt;AWS Config’s existing 100 or so built-in rules cover most common use cases and requirements. If these are not enough, or if you simply need more control over the remediation process, you can add custom rules as well. Custom rules rely upon AWS Lambda functions, but you need to create those Lambda functions yourself so that they can best suit your specific needs. After you have the desired Lambda function in place, you simply add a custom rule that will trigger it.&lt;/p&gt;
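&lt;p&gt;At its core, a custom rule’s Lambda reduces to a compliance decision over a resource’s configuration item. A minimal sketch of that decision logic (a hypothetical helper, not a complete handler; the real function would also report the verdict back to AWS Config using the event’s result token):&lt;/p&gt;

```python
# Hypothetical compliance check that a custom AWS Config rule's Lambda
# might apply: an EC2 instance is NON_COMPLIANT if it has a public IP.
def evaluate_instance(configuration_item):
    """Return a Config-style compliance verdict for one resource."""
    if configuration_item.get("resourceType") != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    config = configuration_item.get("configuration", {})
    return "NON_COMPLIANT" if config.get("publicIpAddress") else "COMPLIANT"

item = {
    "resourceType": "AWS::EC2::Instance",
    "configuration": {"publicIpAddress": "203.0.113.10"},  # TEST-NET IP
}
print(evaluate_instance(item))   # NON_COMPLIANT
```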

&lt;p&gt;The use of AWS Config costs $0.003 per configuration item recorded within your AWS account, with an additional $0.001 per rule evaluation. On top of those costs, you need to pay for the S3 storage of the configuration files (they don’t consume much space, so that cost should be minimal), as well as SNS and Lambda costs if you use them for notifications or custom rules.&lt;/p&gt;
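&lt;p&gt;At those rates the bill is straightforward to estimate. For a month with, say, 10,000 configuration items recorded and 50,000 rule evaluations (illustrative volumes):&lt;/p&gt;

```python
ITEM_RATE = 0.003    # $ per configuration item recorded (from the article)
EVAL_RATE = 0.001    # $ per rule evaluation (from the article)

items, evals = 10_000, 50_000    # illustrative monthly volumes
total = items * ITEM_RATE + evals * EVAL_RATE
print(f"${total:.2f}")   # $80.00
```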

&lt;h2&gt;
  
  
  AWS Config Rules to Consider
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0h8X6a-0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtj1lk95xz5q2akg0715.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0h8X6a-0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtj1lk95xz5q2akg0715.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  1. IAM ROOT ACCESS KEY CHECK
&lt;/h4&gt;

&lt;p&gt;One of the very first things that you should do when creating a new AWS account is to create an admin user that will be used to create everything else needed in that account. The root user itself should be locked and its password safely stored away. You should never have AWS access keys (for programmatic access) generated for the root user. Since that user has unlimited control over your account, losing control of it can have serious consequences. This is why the “iam-root-access-key-check” rule is very important.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. ROOT ACCOUNT MFA ENABLED
&lt;/h4&gt;

&lt;p&gt;This rule goes hand in hand with the previous one. It ensures that multi-factor authentication is enabled for your root user. Storing your root password safely is still recommended, but this additional step will add another layer of security when it comes to protecting your entire AWS environment from an intrusion.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. ACCESS KEYS ROTATED
&lt;/h4&gt;

&lt;p&gt;Even with root user information safely stored away, you will still have various users with different levels of access, along with admins who probably have almost complete control over the AWS infrastructure and services.&lt;br&gt;
To make sure that your resources are properly protected, best practices dictate the regular rotation of programmatic access keys. This rule will check to see whether or not the keys have been rotated within the number of days specified. The number of days you set here depends on the level of security you need to enforce.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. S3 BUCKET PUBLIC WRITE AND PUBLIC READ PROHIBITED
&lt;/h4&gt;

&lt;p&gt;One of the most common mistakes people make when working in the AWS Cloud is having their S3 buckets publicly accessible. Whether or not you have data that needs to be kept private and secure, making sure that your buckets are only accessible to those who need access to them is of utmost importance. After all, data is everything these days.&lt;/p&gt;

&lt;p&gt;There are exceptions, of course. You might need to serve content that should be publicly available, or you might use your buckets so that others can upload the data to them. Either way, understand your use case, and make sure to run an environment that is as secure as possible.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. EC2 INSTANCE NO PUBLIC IP
&lt;/h4&gt;

&lt;p&gt;This is another rule that handles unwanted public access to your cloud resources. By misconfiguring a VPC (an AWS service that deploys and controls an entire network on top of which your AWS resources will run), you can easily have your instances running with a public IP address. Unless this is something you need to do as a business requirement, it is wise to make sure that your EC2 instances are running with only a private IP address attached. Instead of checking everything by hand or creating a custom solution to do this for you, this rule will have you covered.&lt;/p&gt;

&lt;h4&gt;
  
  
  6. DESIRED INSTANCE TYPE
&lt;/h4&gt;

&lt;p&gt;If your company wants to use a specific instance type, this rule will make sure that you don’t have EC2 instances running anything undesired. You can use this to ensure that more expensive instances aren’t running and adding unnecessary costs. This rule can also help you be certain that your AWS environment is using only the optimal instance type for your product.&lt;/p&gt;

&lt;h4&gt;
  
  
  7. CLOUDFORMATION STACK DRIFT DETECTION CHECK
&lt;/h4&gt;

&lt;p&gt;Infrastructure as code has received a great deal of attention lately, and many customers rely on AWS CloudFormation (though Terraform has been making an impact as well) to templatize and provision their whole infrastructure. However, larger companies and bigger projects can end up with stack drift: a situation whereby the defined stack differs from what is actually running in the AWS cloud.&lt;/p&gt;

&lt;p&gt;With AWS Config, you can easily detect stack drift using the “cloudformation-stack-drift-detection-check” rule.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;AWS Config provides a simple way for your cloud environment to stay secure and compliant, thanks to its various predefined rules that can be deployed quickly and easily. In this article, we have reviewed only some of these rules, but there are many more that can be very useful for protecting your resources in the cloud. Additionally, creating custom rules affords you limitless possibilities. AWS Config is a powerful tool that should be considered by every business running on Amazon’s cloud.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>governance</category>
      <category>awsconfig</category>
    </item>
  </channel>
</rss>
