<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael Hyland</title>
    <description>The latest articles on DEV Community by Michael Hyland (@theworldoftheweb).</description>
    <link>https://dev.to/theworldoftheweb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F227677%2Ffaf3d75d-a758-43b9-afd5-c26c4f4f1fd2.png</url>
      <title>DEV Community: Michael Hyland</title>
      <link>https://dev.to/theworldoftheweb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/theworldoftheweb"/>
    <language>en</language>
    <item>
      <title>Understanding Mutable vs Immutable Infrastructure</title>
      <dc:creator>Michael Hyland</dc:creator>
      <pubDate>Fri, 02 Sep 2022 12:56:35 +0000</pubDate>
      <link>https://dev.to/theworldoftheweb/understanding-mutable-vs-immutable-infrastructure-5h6</link>
      <guid>https://dev.to/theworldoftheweb/understanding-mutable-vs-immutable-infrastructure-5h6</guid>
      <description>&lt;p&gt;The world of computer technology, especially the space that governs modern internet applications, has evolved rapidly. The journey to the cloud and compute-level virtualisation in the last decade has advanced the way in which operational management of compute resources are handled.&lt;/p&gt;

&lt;p&gt;Hardware will always exist, but (mostly) gone are the days of enterprise tech staff minting servers, running them down to a private data centre, racking them and configuring physical networks for access. This work has not disappeared, since hardware in data centres will always be necessary; it has just become more specialised.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WDGMb3kC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vptaej5g26gwbc6yuvlo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WDGMb3kC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vptaej5g26gwbc6yuvlo.jpeg" alt="Image description" width="880" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a relatively old-school operations person, I don't miss those days. The sheer overhead of the tasks involved certainly didn't lend itself to productivity, especially if the objective was simply to have an application accessible on the internet.&lt;/p&gt;

&lt;p&gt;Want to upgrade a server? Buy a new one, mint it with an OS, configure it in house, pack it in the van, drive it down to the data centre (sign in with all the identification needed), rack it up (don't forget your screwdrivers at the office!) and plug in the network cable (hold thumbs that your network configurations work).&lt;/p&gt;

&lt;p&gt;Now think about doing all of this at scale. With all the intricacies of operating system, application and data requirements (which change over time), compute performance, monitoring, network access and the need for precise and exact configurations, you could very quickly get disheartened, question your life choices and wish you had kept that job as a beach barman.&lt;/p&gt;

&lt;p&gt;Those less sophisticated times, when things moved slightly slower than today, could be understood to be truly mutable: infrastructure component replacement was slow, cumbersome, prone to failure, risky and one dimensional, and, most significant of all, peripheral changes were depressingly inevitable.&lt;/p&gt;

&lt;h2&gt;Getting a Grip on the Definitions&lt;/h2&gt;

&lt;p&gt;Understanding the definitions of mutable and immutable in the context of compute infrastructure is important, since they are neither intuitive nor glaringly apparent at first glance.&lt;/p&gt;

&lt;p&gt;The adjective "mutable" means "liable to change": anything that is mutable is likely to change. In turn, "immutable" means "unchanging over time or unable to be changed": anything that is immutable will not, or cannot, change.&lt;/p&gt;

&lt;p&gt;How do we apply these definitions in the context of compute infrastructure? The best way to understand them is to view them as describing a desired end state: stable infrastructure.&lt;/p&gt;

&lt;p&gt;The desired state of stable infrastructure should be current and error-free. Any peripheral change made to this desired state introduces risk, inconsistency and complexity, all of which can compromise stability and should be avoided.&lt;/p&gt;

&lt;p&gt;But how does one avoid change? One needs to discern the correct action to take. In the context of modern compute infrastructure, this means that the action of replacement should be valued more highly than the action of peripheral change.&lt;/p&gt;

&lt;h2&gt;Change vs Replace&lt;/h2&gt;

&lt;p&gt;Continuing with definitions in an infrastructure context: to change something means to modify that existing thing and make it different in some way; to replace is to put something in place of something else.&lt;/p&gt;

&lt;p&gt;With this in mind, mutable infrastructure means that existing components are modified. With immutable infrastructure, these components are never modified; they are simply replaced.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wv7OADYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3d43alxmhkkz0fieiysd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wv7OADYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3d43alxmhkkz0fieiysd.jpeg" alt="Image description" width="880" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the advancement of public cloud virtualisation and elasticity, the ability to run immutable infrastructure has expanded dramatically across a host of application requirements.&lt;/p&gt;

&lt;p&gt;Immutable infrastructure, once configured, stable and in place, should never change. If an existing infrastructure component is changed in any way outside of predefined immutable boundaries, it is no longer immutable and can be considered compromised, resulting in a host of unanticipated, mostly negative consequences that impact productivity, reliability and security.&lt;/p&gt;

&lt;p&gt;Replacement, by contrast, should be considered the desired action in an immutable infrastructure environment. This applies to compute resources, application configurations and a host of other infrastructure-related components.&lt;/p&gt;

&lt;p&gt;The action of replacement can be done in a very controlled fashion, whereas changes often happen in a riskier, more haphazard and ad-hoc manner. This is what makes peripheral change undesirable.&lt;/p&gt;

&lt;p&gt;Great. But how does all of this look in practice?&lt;/p&gt;

&lt;h2&gt;Immutable Infrastructure in Practice&lt;/h2&gt;

&lt;p&gt;The complexities of modern compute infrastructure range from the rudimentary to the mind-bogglingly complex, depending on requirements. One could be running a simple web application on a single virtual machine instance servicing minimal requests, or an enterprise application requiring sophisticated container orchestration and servicing millions of requests a second. In either case, and anything in between, the ideal is to implement a strategy of immutable infrastructure.&lt;/p&gt;

&lt;p&gt;To keep it simple, let's examine a more rudimentary example. The example will cover application configurations with an infrastructure-as-code approach and server replacement as an infrastructure upgrade approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IK2lSP6i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7o6g8ajrdhydpyxh948v.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IK2lSP6i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7o6g8ajrdhydpyxh948v.jpeg" alt="Image description" width="601" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Configuration Updates&lt;/h2&gt;

&lt;p&gt;The above graphic represents a simple infrastructure: a single virtual machine in a cloud-enabled environment running the Apache web server and serving a website. This can be considered version 1 of both the compute resource and the server configuration.&lt;/p&gt;

&lt;p&gt;A common scenario would be that the configuration for the Apache web server needs to be modified. If the initial configuration is considered version 1 of the configuration, the aim would be to implement version 2 with the required modifications.&lt;/p&gt;

&lt;p&gt;A mutable approach would be to log onto the server, make the required modifications to the configuration and manually reload the Apache service. This action is likely to introduce errors and inconsistencies that, over time, become problematic. Moreover, the simple act of logging onto a system with elevated privileges to make configuration changes establishes an insecure operational model.&lt;/p&gt;

&lt;p&gt;The immutable method would be to replace the configuration in a controlled fashion using an infrastructure-as-code procedure. The advantages are manifold. The modification can be rolled out automatically and in an orderly fashion across multiple environments, ensuring consistency. It exists as a persistent record in source control, which enables smoother rollbacks, if necessary, and keeps track of the changes that have been implemented. Security is heightened in the sense that privileges are allocated to the procedure rather than to an individual user.&lt;/p&gt;
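&lt;p&gt;As a minimal illustration of the replace-over-modify idea (a Python sketch with hypothetical names, not a real infrastructure-as-code tool), configuration versions can be modelled as immutable records, where a rollout creates a new version and a rollback simply re-activates the previous one:&lt;/p&gt;

```python
from dataclasses import dataclass

# Sketch only: ConfigVersion and ConfigDeployment are illustrative names,
# standing in for what an infrastructure-as-code pipeline would manage.
@dataclass(frozen=True)  # frozen=True makes each version immutable
class ConfigVersion:
    version: int
    content: str

class ConfigDeployment:
    def __init__(self, initial: ConfigVersion):
        self.history = [initial]  # every version kept as a persistent record

    @property
    def active(self) -> ConfigVersion:
        return self.history[-1]

    def roll_out(self, content: str) -> ConfigVersion:
        # Replace: a new version appears alongside the old; nothing is edited in place.
        new = ConfigVersion(self.active.version + 1, content)
        self.history.append(new)
        return new

    def roll_back(self) -> ConfigVersion:
        # Rollback re-activates the previously recorded version.
        if len(self.history) > 1:
            self.history.pop()
        return self.active

deploy = ConfigDeployment(ConfigVersion(1, "Listen 80"))
deploy.roll_out("Listen 8080")   # version 2 replaces version 1
assert deploy.active.version == 2
deploy.roll_back()               # smooth rollback to version 1
assert deploy.active.content == "Listen 80"
```

&lt;p&gt;The point of the sketch is that the active configuration is only ever swapped, never mutated, which is exactly what source control plus an automated rollout procedure gives you.&lt;/p&gt;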

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RhTwIWQT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zx5aa1jgr51rffkkb5fn.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RhTwIWQT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zx5aa1jgr51rffkkb5fn.jpeg" alt="Image description" width="601" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Infrastructure Updates&lt;/h2&gt;

&lt;p&gt;What would a major infrastructure update look like? Such update requirements arise relatively often and can take many forms, but the overall strategy of replace over change should remain the preferred method. In our example, the requirement to update the web server technology has arisen: the Apache web server is to be replaced with an Nginx web server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KWEObrSB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/if9jgfwathl7kz6r10xp.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KWEObrSB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/if9jgfwathl7kz6r10xp.jpeg" alt="Image description" width="880" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The mutable approach would be to attempt to decommission the Apache web server, then install and configure the Nginx web server on the same virtual machine. The risk and complexity this action would introduce simply do not serve the goal of infrastructure stability. How would one keep the application running and avoid disruption during the upgrade? How would we reliably test that the application works as expected afterwards? How would we possibly manage this at scale, with all the inconsistencies that would likely arise? What other unknown "noise" could appear to compromise stability? All of these questions should deter one from this method in favour of an immutable one.&lt;/p&gt;

&lt;p&gt;The immutable approach would be to replace the entire virtual machine in a controlled fashion. One would bring up and configure version 2 of the infrastructure in parallel with version 1. This allows for a measured and orderly deployment of the new infrastructure, which can be thoroughly tested to ensure the stability of the application. While the new infrastructure is deployed, configured and tested, the old infrastructure is still available and serving the application to the end user. Once the new infrastructure is deemed stable, traffic is simply switched over without the end user noticing, resulting in a smooth transition. If a rollback is required, the traffic can be switched back. Once the new infrastructure is satisfactorily serving the application, the old infrastructure can be decommissioned in a controlled fashion.&lt;/p&gt;
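&lt;p&gt;The cut-over described above can be sketched in a few lines of Python (the server dictionaries and health check below are purely illustrative stand-ins for real infrastructure and real smoke tests):&lt;/p&gt;

```python
class Router:
    """Stands in for the traffic entry point, e.g. a load balancer or DNS record."""
    def __init__(self, target):
        self.target = target

    def switch_to(self, new_target):
        # From the end user's perspective the switch is a single atomic step.
        self.target = new_target

def healthy(server) -> bool:
    # Placeholder for real smoke tests run against the new environment.
    return server["status"] == "running"

v1 = {"name": "apache-vm", "status": "running"}   # existing infrastructure, untouched
router = Router(v1)

v2 = {"name": "nginx-vm", "status": "running"}    # version 2, built in parallel
if healthy(v2):
    router.switch_to(v2)   # cut traffic over to the new infrastructure
else:
    router.switch_to(v1)   # rollback is just pointing back at version 1

assert router.target["name"] == "nginx-vm"
# v1 remains intact and can now be decommissioned in a controlled fashion
```

&lt;p&gt;Notice that version 1 is never modified during the upgrade: it stays intact and serving until the switch, which is also why rollback is trivial.&lt;/p&gt;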

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wNlZZQcg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/st4nr91wzw31eoa7ftef.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wNlZZQcg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/st4nr91wzw31eoa7ftef.jpeg" alt="Image description" width="880" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Although the example we have examined is rudimentary, the overall approach of immutability can and should certainly be applied to more complex and sophisticated use cases. With the advancement of cloud elasticity, infrastructure-as-code, virtualisation and containerisation the options available to deploy immutable infrastructure are myriad.&lt;/p&gt;

&lt;p&gt;That being said, deploying complex immutable infrastructure is ambitious and filled with challenges, and it demands a solid commitment from modern cloud engineers and organisations that want to maintain an advantage in the space of modern internet applications.&lt;/p&gt;

</description>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS VPC Fundamentals</title>
      <dc:creator>Michael Hyland</dc:creator>
      <pubDate>Mon, 14 Sep 2020 21:31:29 +0000</pubDate>
      <link>https://dev.to/theworldoftheweb/aws-vpc-fundamentals-2kfa</link>
      <guid>https://dev.to/theworldoftheweb/aws-vpc-fundamentals-2kfa</guid>
<description>&lt;p&gt;The AWS Virtual Private Cloud (VPC) is an elemental feature of AWS. A number of services and resources within your AWS configuration rely upon the VPC and how it is configured. Hence, it’s important to have a fundamental understanding of how the VPC works, how it is configured, and how other core network services integrate with it.&lt;/p&gt;

&lt;p&gt;This blog will cover some of the essential steps to creating a VPC and configuring core network services. The topics to be covered are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Default VPC&lt;/li&gt;
&lt;li&gt;Creating a new VPC&lt;/li&gt;
&lt;li&gt;Creating and configuring subnets&lt;/li&gt;
&lt;li&gt;Network security and security groups&lt;/li&gt;
&lt;li&gt;Route tables, the internet gateway and NAT gateways&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;The Default VPC&lt;/h1&gt;

&lt;p&gt;After you create an AWS account, you will have a default VPC available in each region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7tCeDqPi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x0pz5wnbn19gehltg6ir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7tCeDqPi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x0pz5wnbn19gehltg6ir.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above case I have set a Name tag of “default”, since default VPCs do not come with a Name tag set. The purpose of the default VPC is to get you up and running quickly. It includes core default resources: a &lt;strong&gt;default route table&lt;/strong&gt;, a &lt;strong&gt;default internet gateway&lt;/strong&gt; attached to the default VPC, a &lt;strong&gt;default security group&lt;/strong&gt;, a &lt;strong&gt;default network ACL&lt;/strong&gt; and &lt;strong&gt;default subnets&lt;/strong&gt; in each availability zone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gtfisctg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qe6o3m9yjxfsen44rybg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gtfisctg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qe6o3m9yjxfsen44rybg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will allow you to immediately begin deploying compute resources into the default VPC and using additional services like the Elastic Load Balancer, Amazon RDS, and Amazon EMR. You are able to modify the components of your default VPC to meet your needs, but when learning or setting up a new environment, you will want to create a new VPC with all its related components.&lt;/p&gt;

&lt;h1&gt;Creating a New VPC&lt;/h1&gt;

&lt;p&gt;The very first thing one needs to do when creating a new VPC, after giving it a name, is to select an &lt;a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing"&gt;IPv4 CIDR block&lt;/a&gt;. In other words, choose an IP address range for your VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NNAMc_x---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yy49cqvpk2jjigvt2wvc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NNAMc_x---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yy49cqvpk2jjigvt2wvc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are three main recommendations when choosing a range for your VPC. They are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure the range is one of the RFC1918 ranges

&lt;ul&gt;
&lt;li&gt;10.0.0.0 - 10.255.255.255&lt;/li&gt;
&lt;li&gt;172.16.0.0 - 172.31.255.255&lt;/li&gt;
&lt;li&gt;192.168.0.0 - 192.168.255.255&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Choose a /16 address range

&lt;ul&gt;
&lt;li&gt;This allows for 65,536 addresses, which should be enough for most initial use cases&lt;/li&gt;
&lt;li&gt;The allowed block size is between /16 and /28&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Avoid ranges that overlap with other networks

&lt;ul&gt;
&lt;li&gt;If unsure, start with a smaller address range&lt;/li&gt;
&lt;li&gt;Then add additional noncontiguous ranges to your VPC later on&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
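&lt;p&gt;These recommendations are easy to sanity-check with Python’s standard &lt;code&gt;ipaddress&lt;/code&gt; module (the specific ranges below are only examples):&lt;/p&gt;

```python
import ipaddress

# A /16 VPC range taken from the RFC1918 10.0.0.0/8 block.
vpc = ipaddress.ip_network("10.0.0.0/16")

assert vpc.num_addresses == 65536   # a /16 gives 65,536 addresses
assert vpc.is_private               # RFC1918 ranges are private address space
assert ipaddress.ip_network("172.16.0.0/12").is_private
assert ipaddress.ip_network("192.168.0.0/16").is_private

# Overlap check against a network you already use elsewhere:
office = ipaddress.ip_network("10.0.0.0/24")
assert vpc.overlaps(office)   # overlap found, so pick a different VPC range
```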

&lt;p&gt;If you wish, you may also include an IPv6 CIDR block; this blog will focus only on IPv4 configurations. It should be noted that in AWS, the first four IP addresses and the last IP address of each subnet cannot be used, as AWS reserves these for its own internal purposes. Unless you have strict requirements to run your VPC in dedicated tenancy, such as regulatory compliance or licensing restrictions where compute resources need to run on dedicated hardware, choose the default tenancy. To learn more about AWS tenancy check out Understanding AWS Tenancy.&lt;/p&gt;

&lt;p&gt;On the creation of a new VPC, the following will also be automatically created and associated to the new VPC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single main route table&lt;/li&gt;
&lt;li&gt;A single default security group&lt;/li&gt;
&lt;li&gt;A single network access control list&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Creating and Configuring Subnets&lt;/h1&gt;

&lt;p&gt;An AWS region is divided into multiple availability zones. Subnets reside within individual availability zones and are a sub-range of the IP space created for the VPC. Multiple subnets are created across multiple availability zones for the purpose of deploying highly available applications; subnets are essentially how you get your resources into availability zones.&lt;/p&gt;

&lt;p&gt;When creating your subnets, after assigning a name, you select the VPC that the subnet is associated with. Then select an availability zone (if one is not selected, one will be assigned automatically). Next, enter a sub-range of the IP space as a CIDR block for the subnet. In the below example we are creating three public subnets and assigning the ranges 172.31.0.0/24, 172.31.1.0/24 and 172.31.2.0/24 respectively. The first public subnet will reside in availability zone af-south-1a, the second in af-south-1b and the third in af-south-1c. The intention is to deploy compute resources across all three availability zones to ensure their high availability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7jHgHFcR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xifc2s0w5c94u4ltxm20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7jHgHFcR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xifc2s0w5c94u4ltxm20.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cPKc_3BD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eyb0ksq80hokpl4gqcf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cPKc_3BD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eyb0ksq80hokpl4gqcf5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gzby8IKP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o6ejug77twmrs9f9aph8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gzby8IKP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o6ejug77twmrs9f9aph8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CIDR block of /24 allows each subnet 256 addresses, of which AWS reserves five, leaving 251 usable addresses per subnet.&lt;/p&gt;
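&lt;p&gt;The subnet arithmetic above can be reproduced with the &lt;code&gt;ipaddress&lt;/code&gt; module; note that the figure of five reserved addresses is AWS-specific (the network address, three AWS-internal addresses and the broadcast address):&lt;/p&gt;

```python
import ipaddress

# Carve /24 subnets out of the VPC range used in the example.
vpc = ipaddress.ip_network("172.31.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:3]

assert [str(s) for s in subnets] == ["172.31.0.0/24", "172.31.1.0/24", "172.31.2.0/24"]

AWS_RESERVED = 5  # per subnet: network address, three AWS-internal, broadcast
for s in subnets:
    assert s.num_addresses == 256
    print(f"{s}: {s.num_addresses - AWS_RESERVED} usable addresses")
```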

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yn46VyQh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nzvhb26rfn0x5jsxoxqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yn46VyQh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nzvhb26rfn0x5jsxoxqa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above diagram introduces the concept of public and private subnets. We will address this concept later on in more detail when we talk about security groups and NAT gateways.&lt;/p&gt;

&lt;p&gt;In the diagram, the public subnets are represented in green and the private subnets are represented in blue.&lt;/p&gt;

&lt;p&gt;It should be noted that for simplicity, this example does not include a load balancer for instances in the public subnet. Load balancers are out of the scope of this blog.&lt;/p&gt;

&lt;p&gt;Above we created what we called public subnets, one in each of the three availability zones. You would do the same for private subnets: one private subnet in each availability zone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4gtAQn5I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wq67lbzwe4tnqicye2vj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4gtAQn5I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wq67lbzwe4tnqicye2vj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The concept is simple: compute resources that reside in the public subnets are accessible from the internet and have both public and private IP addresses. Resources that reside in the private subnets are not accessible from the internet and have only private IP addresses. When launching compute resources such as EC2 instances, one can define whether an instance will have a public IP address; if so, it can be assumed it will reside in a public subnet. Access to the subnets is controlled by security groups.&lt;/p&gt;

&lt;h1&gt;Network Security with Security Groups&lt;/h1&gt;

&lt;p&gt;Security groups are simple yet powerful tools that help you enforce network security within your VPC; in a traditional data center, they are akin to a firewall. Security groups can be considered the first layer of defense protecting your VPC resources. The second layer of defense is the Network Access Control List (NACL).&lt;/p&gt;

&lt;p&gt;The core differences between the two are that security groups are stateful, meaning a request coming from one direction automatically sets up permission for the response travelling in the other direction, while NACLs are stateless, meaning that inbound and outbound rules need to be explicitly set. Security groups are also associated with specific compute instances, whereas NACLs apply at the subnet level and cover all compute resources in the subnet. NACL configurations are optional for VPCs and are useful when a large number of instances within a subnet needs fine-grained control over network traffic. NACLs are not in the scope of this blog; for our use case, the default NACL configuration for the created VPC is used.&lt;/p&gt;

&lt;p&gt;In our example we have two sets of compute resources: web servers and application servers. These reside in the public and private subnets respectively. The web servers in the public subnets accept traffic from the internet and, in the course of handling this traffic, offload requests to backend application servers in the private subnets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LL-o-udX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wnhv1jte6qzm00sybqh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LL-o-udX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wnhv1jte6qzm00sybqh8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each group of instances shares a common purpose, which means different rules apply to each batch of compute resources, and each batch resides within a different security group. For example, the web servers have a security group rule that allows web traffic from anywhere on the internet. The backend application servers, however, have a security group rule stating that only TCP port 5100 traffic from the web servers will be accepted, creating a secure flow of traffic to these application servers. This is described in the diagram above. &lt;strong&gt;(A third availability zone is assumed.)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To configure this on the console you would start by creating the first security group for the web servers. Give the security group a name, a description and the appropriate VPC. Then edit the inbound rules for this particular security group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sjf1ff3E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/npadp1v8l3q16z5yz06c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sjf1ff3E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/npadp1v8l3q16z5yz06c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Security group rules are quite flexible, allowing for many combinations of possible rules. In this case, we want to allow all HTTP (web) traffic, using the TCP protocol on port 80, from the internet. In a live scenario, this would probably be encrypted HTTPS traffic on port 443. All other traffic is blocked by default. Once you have created the rule, the security group configuration for the web servers will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---v9vhijW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1o7oi8xzf5bho42bonrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---v9vhijW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1o7oi8xzf5bho42bonrq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the backend application servers, we do not want traffic from the internet, or from any source other than the web servers. So, we will create a security group for the backend servers and edit the inbound rules to look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y7HTCnvK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uky55krye63wa0ij4o29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y7HTCnvK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uky55krye63wa0ij4o29.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that, in this case, the custom TCP rule’s source is configured as the web servers’ security group. This allows traffic to port 5100 on the backend servers only from instances within the web servers’ security group, making for a very flexible, almost zero-maintenance configuration while still applying the principle of least privilege.&lt;/p&gt;
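&lt;p&gt;For illustration, this is roughly the shape such a rule takes in the EC2 API’s &lt;code&gt;IpPermissions&lt;/code&gt; structure (as used, for example, by boto3’s &lt;code&gt;authorize_security_group_ingress&lt;/code&gt;). The group IDs below are placeholders and no AWS call is actually made:&lt;/p&gt;

```python
# Placeholder IDs; real security group IDs look like "sg-0123456789abcdef0".
WEB_SG_ID = "sg-web-placeholder"
BACKEND_SG_ID = "sg-backend-placeholder"

backend_ingress = [{
    "IpProtocol": "tcp",
    "FromPort": 5100,
    "ToPort": 5100,
    # Sourcing the rule from the web tier's security group rather than a CIDR
    # means any instance in that group is allowed, with no per-IP maintenance.
    "UserIdGroupPairs": [{"GroupId": WEB_SG_ID}],
}]

# In a real session this would be applied with something like:
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(GroupId=BACKEND_SG_ID,
#                                        IpPermissions=backend_ingress)

assert backend_ingress[0]["UserIdGroupPairs"][0]["GroupId"] == WEB_SG_ID
```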

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HGZVXWG9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/m3h5spbe25tjvin3wh9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HGZVXWG9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/m3h5spbe25tjvin3wh9u.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Route Tables, the Internet Gateway and the NAT Gateway&lt;/h1&gt;

&lt;p&gt;Compute resources within subnets inside the VPC need to relay traffic within the VPC as well as to destinations outside of it. This is managed by route tables. A route table is a list of rules, or routes, stating that traffic for a certain destination should be sent to an appropriate target, such as a gateway.&lt;/p&gt;

&lt;p&gt;When the VPC is first created, a default route table is assigned. It has a single route with a target of “local”, which applies to traffic within the VPC only. Every other route table you create will also start with this same default route. Initially, the default route table is also known as the main route table; subsequent route tables can be set as the main route table, but there can only be one main route table at a time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xqZ6HyHl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cnvvwfgr6ddi0ohfvb1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xqZ6HyHl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cnvvwfgr6ddi0ohfvb1j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main route table for a VPC differs slightly from the other route tables that may exist: it cannot be deleted, and all subnets are implicitly associated with it unless they are explicitly associated with another route table. You can also explicitly associate subnets with the main route table, which is useful if you ever want to swap out the main route table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AvhWRALQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qexofsnrzk73svctf6lq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AvhWRALQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qexofsnrzk73svctf6lq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are a number of ways to manage routes and route tables. You could create one route table per subnet and associate each subnet with its own table, or create one private and one public route table and associate your private and public subnets with each respectively. The strategy you select will largely depend on how you design your VPC infrastructure.&lt;/p&gt;

&lt;p&gt;In our example, I’ll create a single public route table for all three public subnets, and one route table per private subnet. The three public subnets will be associated with the single public route table, and each private subnet will be associated with its own private route table. The reason for one public route table and three private route tables will become clearer when we add NAT gateways for the private subnets.&lt;/p&gt;

&lt;p&gt;Let’s see this in action by creating the gateways, routes and subnet associations for both our private and public subnets. To send traffic out to the internet from the public subnets, an internet gateway needs to be created and attached to the VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kw2QVvUg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7zec49vtx5jj2l4g1igu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kw2QVvUg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7zec49vtx5jj2l4g1igu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the internet gateway has been created, with a name tag, attach it to the VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1C1uzAoy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0n9uptbjdkz14cfc0jhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1C1uzAoy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0n9uptbjdkz14cfc0jhx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The internet gateway will now be in an attached state and ready to pass traffic to the internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I7b8PQag--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w1pgwt6cot89es6lar6e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I7b8PQag--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w1pgwt6cot89es6lar6e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we will create our public route table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KgxINe_L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v25qo9qw0kr7qswzw9qy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KgxINe_L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v25qo9qw0kr7qswzw9qy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point you will need to add a new route to the public route table with a destination of 0.0.0.0/0. This route sends all traffic that is not destined for the VPC out to the internet. Configure the route to point to the internet gateway as the target, so that traffic from the public subnets can be routed out to the internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2rfAyViH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5l5sh8jr3qo6m9p1vz48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2rfAyViH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5l5sh8jr3qo6m9p1vz48.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two things to be aware of here. First, the order of the routes does not matter: the most specific (longest-prefix) route always wins. Second, the internet gateway is not a single point of failure; it is an abstraction backed by a highly available service. Let’s now associate the public subnets with the public route table.&lt;/p&gt;
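To make the longest-prefix behaviour concrete, here is a minimal sketch in Python of how a route table like ours resolves a destination. The CIDR block and gateway names are illustrative assumptions, not real resource IDs.

```python
import ipaddress

def select_route(destination_ip, routes):
    """Pick the route whose CIDR matches the destination most
    specifically (longest prefix), mirroring how a VPC route
    table is evaluated regardless of rule order."""
    dest = ipaddress.ip_address(destination_ip)
    best = None
    for cidr, target in routes.items():
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

# A public route table: the "local" route for in-VPC traffic plus a
# catch-all route to the internet gateway (hypothetical VPC CIDR).
public_routes = {
    "10.0.0.0/16": "local",
    "0.0.0.0/0": "igw-example",
}

select_route("10.0.1.25", public_routes)      # in-VPC -> "local"
select_route("93.184.216.34", public_routes)  # external -> "igw-example"
```

Even though 0.0.0.0/0 matches every address, the /16 local route is more specific, so in-VPC traffic never leaves through the gateway.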

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mLweY3Zc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ugd40x1z3kuhq7w5dye5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mLweY3Zc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ugd40x1z3kuhq7w5dye5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The public subnets are now explicitly associated with the public route table. This means that compute instances deployed to a public subnet with a public IP address will be able to reach the internet via the internet gateway.&lt;/p&gt;

&lt;p&gt;I’ve also created three private route tables and associated each private subnet with its own table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0NgjgNym--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pbvrc5d2jrkvnotg8zw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0NgjgNym--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pbvrc5d2jrkvnotg8zw5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The web servers in the public subnets gain internet access via routes to the internet gateway. What about the backend application servers in the private subnets? They may need to run software updates or perform other actions that require internet access. This is where the NAT gateway comes in.&lt;/p&gt;

&lt;p&gt;A NAT gateway allows resources in a private subnet to reach out to the internet. When you create a NAT gateway, you choose a public subnet and allocate an Elastic IP to it. In our example I will create one NAT gateway in each of the three public subnets and add a route in each private route table pointing to the corresponding NAT gateway. The NAT gateways must live in public subnets because they are allocated public IP addresses and need access to the internet via the internet gateway.&lt;/p&gt;
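The per-availability-zone fan-out described above can be sketched as data. All IDs and AZ names below are hypothetical placeholders; the point is the shape of the plan: one NAT gateway per public subnet, and each private route table sending 0.0.0.0/0 to the NAT gateway in its own AZ.

```python
# Hypothetical per-AZ plan: each AZ gets a public subnet hosting a
# NAT gateway, and a private route table that will point at it.
nat_plan = {
    "az-a": {"public_subnet": "subnet-pub-a", "nat_gw": "nat-a", "private_rtb": "rtb-priv-a"},
    "az-b": {"public_subnet": "subnet-pub-b", "nat_gw": "nat-b", "private_rtb": "rtb-priv-b"},
    "az-c": {"public_subnet": "subnet-pub-c", "nat_gw": "nat-c", "private_rtb": "rtb-priv-c"},
}

# Derive the default route each private route table needs: all
# non-VPC traffic targets the NAT gateway in the same AZ.
private_routes = {
    az["private_rtb"]: {"0.0.0.0/0": az["nat_gw"]}
    for az in nat_plan.values()
}
```

Keeping each private route table pinned to its own AZ's NAT gateway means a failure in one AZ does not affect egress in the others.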

&lt;p&gt;Create a NAT gateway in each public subnet and allocate an Elastic IP to each.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ckzIVeDO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n853jjhtowt3qk40wzzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ckzIVeDO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n853jjhtowt3qk40wzzn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new route in each private route table and point it to the corresponding NAT gateway target.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2792ltfM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/skdpoxxt446asgfvpl1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2792ltfM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/skdpoxxt446asgfvpl1q.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once each NAT gateway is created in each public subnet, the private route tables will all have a route out to the internet via their corresponding NAT gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yt8SqLQC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kqm1qcbkvxbe8p1pwcvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yt8SqLQC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kqm1qcbkvxbe8p1pwcvm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This has achieved the following: any compute instances deployed to the public subnets will have access out to the internet via the internet gateway and their public IP address. Any compute instances deployed to the private subnets will have access out to the internet via the NAT gateway, which has a public IP address associated with it. The graphic below illustrates the configuration. &lt;strong&gt;(A third availability zone is assumed)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jAHlnwF7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xphntqdqb7ypwokqa7rz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jAHlnwF7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xphntqdqb7ypwokqa7rz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason for creating three NAT gateways, one in each public subnet, and three private route tables is high availability. If services degrade in one availability zone, the private compute instances in the remaining healthy availability zones will still have access to a NAT gateway. For the public subnets, one route table is sufficient because, as previously stated, the internet gateway is highly available by default.&lt;/p&gt;

&lt;p&gt;In an upcoming blog post, I will be discussing compute instances in more detail and how they are deployed to the VPC.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Understanding the AWS VPC and its components is vital to becoming more familiar with AWS’s extended services. This has been a VPC fundamentals crash course, and the reader is encouraged to continue exploring the AWS VPC on their own to become comfortable with the fundamental networking and configuration concepts. Once again, critical feedback is encouraged.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AWS Authentication and Authorization with Policies</title>
      <dc:creator>Michael Hyland</dc:creator>
      <pubDate>Wed, 26 Aug 2020 20:10:42 +0000</pubDate>
      <link>https://dev.to/theworldoftheweb/aws-authentication-and-authorization-with-policies-37g7</link>
      <guid>https://dev.to/theworldoftheweb/aws-authentication-and-authorization-with-policies-37g7</guid>
      <description>&lt;p&gt;In a previous post on in the AWS 101 series, I covered some details on &lt;strong&gt;Identity and Access Management&lt;/strong&gt; and touched on IAM policies. In this blog, I will be expanding on the concept of policies and how they are used to authorize identities, whether human users or applications, to perform actions in an AWS account.&lt;/p&gt;

&lt;h1&gt;
  
  
  Authentication
&lt;/h1&gt;

&lt;p&gt;Understanding how authentication happens in AWS is not critical, but it is interesting to know and offers some insight when attempting to write good policies. Whether you are using the console, the CLI or an SDK, authentication is handled by AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9DHnvh7d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ao2eyz36ukjhvv4mbe6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9DHnvh7d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ao2eyz36ukjhvv4mbe6v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above diagram, we have a principal, or identity, attempting to make a call to the S3 service. The attempted action is listing the objects in a bucket using the ListObjects API call. Like everything else in AWS, this call will be authenticated and authorized. The principal is a role, so it will have short-term credentials. If we look at the output generated by the debug flag on the CLI command, we can see the role that is providing the credentials for the call.&lt;/p&gt;

&lt;p&gt;This role happens to be called &lt;strong&gt;S3ReadOnlyForEC2&lt;/strong&gt;. Next, you can see the bucket endpoint; in this case, the bucket is in af-south-1. Further down, you will notice the &lt;strong&gt;Credential&lt;/strong&gt; header. The first part of this header is the access key ID, which is passed in the clear because there is nothing secret about it. The secret part is the secret access key. The access key ID identifies the caller of this API call and the principal they claim to be; they have not proven it yet, but it is who they claim to be.&lt;/p&gt;

&lt;p&gt;Lastly, there is the &lt;strong&gt;Signature&lt;/strong&gt;. The signature is an HMAC signature produced client side with the secret access key belonging to the session, since the identity is a role. The crucial parts of the API call are signed, and if this signature is validated by AWS, it means the call was definitely made by the principal and the contents of the call have not been tampered with, essentially proving the principal’s identity.&lt;/p&gt;
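The heart of this scheme is AWS Signature Version 4, whose signing key is derived by chaining HMAC-SHA256 over the date, region and service. Here is a minimal sketch of that derivation using only the standard library; the secret key shown is a made-up placeholder.

```python
import hashlib
import hmac

def derive_signing_key(secret_key, date_stamp, region, service):
    """Signature Version 4 key derivation: chain HMAC-SHA256 over
    the date, region, service and a fixed terminator string."""
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Hypothetical secret access key; in reality this comes from the
# role's session credentials. The request is then signed with the
# derived key, and AWS repeats the same derivation server side.
signing_key = derive_signing_key(
    "wJalrXUtnFEMI/EXAMPLEKEY", "20200826", "af-south-1", "s3")
```

Because the secret key never leaves the client, AWS can verify both the caller's identity and the integrity of the request by recomputing the signature.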

&lt;p&gt;Proving the identity’s validity is only the first part of the process. The next step is determining whether the identity actually has permission to make this call. This is where &lt;strong&gt;Policies&lt;/strong&gt; come into play.&lt;/p&gt;

&lt;h1&gt;
  
  
  Authorization with Policies
&lt;/h1&gt;

&lt;p&gt;Unlike authentication, which is handled mostly by AWS, it is important to understand how policy authorization works. Authorization is evaluated extremely literally: only what a policy explicitly allows is permitted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UVGQkEFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zsgswbls595if57tq5kv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UVGQkEFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zsgswbls595if57tq5kv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The basic graphic above indicates how policies decide the authorization status of a principal. In this example, we can assume the principal has successfully authenticated. Policies will now decide whether the desired action on the S3 service can be fulfilled. There will be a number of policies associated with the request: some associated with the principal making the call, some possibly with the resource or service, and some with a number of other things.&lt;/p&gt;

&lt;p&gt;These policies will contain several statements. Statements that relate to services other than the one in question can be considered irrelevant. The policies will be evaluated, and determinations made as to which statements are relevant and which are not. Once the relevant statements have been determined, each one will either allow or deny.&lt;/p&gt;

&lt;p&gt;The overall rule is simple: &lt;strong&gt;a policy must allow, and no policy must deny. If no policy claims the request, it’s denied&lt;/strong&gt;. In AWS, you need explicit permission to execute any action. In the above example, the policy on the left is relevant and allows the request; the policy on the right is also relevant but denies it. Hence, the overall authorization judgement will be deny.&lt;/p&gt;
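That evaluation rule is simple enough to express in a few lines. The sketch below is a deliberately simplified model, assuming flat statements with exact-match actions; real IAM evaluation also handles resources, conditions, wildcards and policy types, which are omitted here.

```python
def is_authorized(statements, action):
    """Simplified IAM evaluation: an explicit Deny always wins;
    otherwise at least one Allow is required; the default is deny."""
    relevant = [s for s in statements if action in s["Action"]]
    if any(s["Effect"] == "Deny" for s in relevant):
        return False  # a single Deny overrides any number of Allows
    return any(s["Effect"] == "Allow" for s in relevant)

allow = {"Effect": "Allow", "Action": ["s3:GetObject"]}
deny = {"Effect": "Deny", "Action": ["s3:GetObject"]}

is_authorized([allow, deny], "s3:GetObject")  # False: Deny wins
is_authorized([allow], "s3:GetObject")        # True: allowed, no deny
is_authorized([], "s3:GetObject")             # False: implicit deny
```

The third case is the one that trips people up: no statement mentioning the action means the request is denied by default.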

&lt;h1&gt;
  
  
  Reading and Writing IAM Policy
&lt;/h1&gt;

&lt;p&gt;The literal nature of IAM policies is best shown by example. The example we will look at represents an application running on EC2 and connecting to DynamoDB. The EC2 instance has a role associated with it, and we want to give the role permission to read and write to DynamoDB. Let’s see the different options available to us and which option is the most effective with regard to the principle of least privilege.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XpeiYuHE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/alrpdng00id8dijdvbqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XpeiYuHE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/alrpdng00id8dijdvbqe.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We could write a policy like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6zB-j4MJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pmtmm4jt6yt7dqlnvr3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6zB-j4MJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pmtmm4jt6yt7dqlnvr3d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This policy will definitely work, since it allows all actions. But it is a bad policy for the requirements of the application: it is akin to full administrative permissions, grants far more actions than this application will ever need, and introduces unnecessary risk. We could improve the policy by doing this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LZ76c858--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v7ii86b01ushnwzsu453.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LZ76c858--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v7ii86b01ushnwzsu453.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This policy is a slight improvement, since it limits the actions to the DynamoDB service. But it still allows all actions on the service, and on all resources related to the service. The policy remains unacceptable because it grants too much permission for the purpose of the application. Let’s keep improving it. What if we tried this?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y2czHPvd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/upogwohmmsak6e3t0v6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y2czHPvd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/upogwohmmsak6e3t0v6q.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a vast improvement because it states specific actions. Like every AWS service, DynamoDB has many APIs associated with it: APIs that delete tables, provision tables and so on. However, we know this application only needs to read and write to the tables, so those are the only API actions that should be allowed. What about the resource field of the policy? It’s not ideal. How can we do better?&lt;/p&gt;

&lt;p&gt;To write an optimal policy, one should become familiar with looking up the service details on the IAM policy reference page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hdglfHT3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mtqyzsr16lxdslakwhpj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hdglfHT3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mtqyzsr16lxdslakwhpj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the page you will see a link called "Actions, Resources, and Condition Keys for AWS Services". This page lists every AWS service. You can then select the service you are writing a policy for; in our case, Amazon DynamoDB. You will be presented with a structured table showing you how to write the policy.&lt;/p&gt;

&lt;p&gt;One of our actions was GetItem, so below is the row that describes this action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tWhpzleu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1xx2vac9nvudyzmpessu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tWhpzleu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1xx2vac9nvudyzmpessu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will notice that the resource type that applies is a table. This implies that with this action, one can authorize specific tables. How do we apply this to the resource field in the policy?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pIbvtin2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7egqw2exkgd28g27f8ya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pIbvtin2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7egqw2exkgd28g27f8ya.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the string that now appears in the resource field. This is called an &lt;strong&gt;Amazon Resource Name (ARN)&lt;/strong&gt;. Every resource on AWS has an ARN, and they are all formatted in the same fashion. After the "arn:aws" prefix, the ARN contains the service name, the region, the AWS account number and then a service-specific part. This uniquely identifies the resource across all of AWS. How does one get the ARN for this specific resource? Once again, let’s look at the documentation. If you click on "table*" in the Resource Types column of the above table, you will be presented with a page that lists the "&lt;strong&gt;Resource Types Defined by Amazon DynamoDB&lt;/strong&gt;".&lt;/p&gt;
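The fixed, colon-separated layout means ARNs are easy to pull apart programmatically. Here is a small sketch of that; the table ARN used below is a hypothetical example, with a placeholder account number.

```python
def parse_arn(arn):
    """Split an ARN into its fixed components: after 'arn' come the
    partition (e.g. 'aws'), service, region, account, and finally the
    service-specific resource part, which may itself contain colons."""
    prefix, partition, service, region, account, resource = arn.split(":", 5)
    assert prefix == "arn"
    return {"partition": partition, "service": service, "region": region,
            "account": account, "resource": resource}

# Hypothetical DynamoDB table ARN for illustration.
arn = "arn:aws:dynamodb:af-south-1:123456789012:table/Books"
parsed = parse_arn(arn)
parsed["service"]   # "dynamodb"
parsed["resource"]  # "table/Books"
```

Splitting with a maximum of five separators keeps resource parts that contain their own colons (some services use them) intact.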

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f6HXpbHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7z11nv53kuqrc4mgpxni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f6HXpbHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7z11nv53kuqrc4mgpxni.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will show you the format that the ARN needs to take to comply with the resource that the policy requires. The full policy will now look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BjPv6VZ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iu1vazuf9op90zmxv6uy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BjPv6VZ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/iu1vazuf9op90zmxv6uy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The policy now has specific actions, reading and writing, and a specific resource, a DynamoDB table. It follows the principle of least privilege, matches the exact requirements of the application, and can be considered the optimal policy for this workload. It should be noted that policies can be more complex than this, with additional attributes; for this example we’ve chosen a simple policy for the purpose of illustration. The reader is encouraged to explore policies further.&lt;/p&gt;
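For reference, a least-privilege policy along the lines described above might look like the following JSON document. The account number and table name are placeholders, and the exact action list should match whatever read/write calls your application actually makes.

```python
import json

# A hedged reconstruction of the final least-privilege policy:
# specific actions, scoped to a single (hypothetical) table ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:af-south-1:123456789012:table/Books",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attached to the EC2 instance's role, this grants exactly the read and write access the application needs and nothing else.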

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Authorization practices in AWS should always follow the principle of least privilege, and IAM policies are designed to achieve just this. If configured correctly, one can rest assured that the manner in which principals, whether human or machine, access services is always as secure as possible. As always, any feedback is encouraged.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AWS Identity and Access Management</title>
      <dc:creator>Michael Hyland</dc:creator>
      <pubDate>Tue, 25 Aug 2020 22:12:44 +0000</pubDate>
      <link>https://dev.to/theworldoftheweb/aws-identity-and-access-management-2ffl</link>
      <guid>https://dev.to/theworldoftheweb/aws-identity-and-access-management-2ffl</guid>
      <description>&lt;p&gt;AWS identity and access management (IAM) enables access to AWS accounts and their related cloud resources in a secure fashion. It facilitates the administration of users and groups with associated permissions to allow and deny access to services and resources. In this blog, I will discuss the basic principles of IAM identities: security in the cloud, user access, roles and policies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Security before and after the Cloud
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--75gNEg2J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9k4ysoxilhe9lnxgqnrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--75gNEg2J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9k4ysoxilhe9lnxgqnrc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before the days of the cloud, compute resources and related services would typically reside in a corporate datacenter. These resources would be secured by a firewall device that blocked unwanted traffic from reaching the network and allowed only accepted traffic. There would be a security officer or security administrator overseeing the implementation of security protocols for the datacenter, whose job relied on the active measures taken to ensure the datacenter was not breached. This was usually an unenviable post, burdened with anxiety. In this scenario, security was essentially implemented at the perimeter of the datacenter. The possibility of finer-grained control was usually limited to domain-specific areas, where admins would need to be experts in those domains, for example database access controls.&lt;/p&gt;

&lt;p&gt;In the modern cloud era, the elasticity of the cloud follows from concepts like paying only for what you use and provisioning resources with APIs. These approaches allow administrators to work much faster than their predecessors. Since everything is controlled by APIs, authentication and authorization exist at every level. In AWS, access to all resources and services is mediated by IAM. This level of security control is pervasive in the positive sense, reaching all aspects of the AWS cloud and uniform across the entire surface. It should be noted that IAM is not the only method of securing services in the AWS cloud; in fact, there are numerous services dedicated to aspects of security. But the one service that every AWS user will and must use is IAM, the fundamental security control of AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OT9VZhhC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j83us9l27z9lvmgfew92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OT9VZhhC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j83us9l27z9lvmgfew92.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  IAM Identities
&lt;/h1&gt;

&lt;p&gt;Once you have registered with AWS via a credit card, you will be assigned an AWS account. An account, for lack of a better word, is a container for the cloud resources and services available for you to deploy, facilitating your experience in the cloud. This also includes identities. Depending on your needs as an individual or organization, you may have to register multiple accounts, which can be collected together in something called an AWS organization.&lt;/p&gt;

&lt;p&gt;Multiple accounts facilitate the division of environments, which in turn can promote an even more secure and robust overall ecosystem. For example, one account could be dedicated to a development environment, another to staging, another to production, and so on. Larger organizations could have hundreds or even thousands of accounts, all allocated to different aspects and divisions of the business managing distinct workloads, using AWS Organizations to divide them into organizational units. An organization managing multiple accounts will typically have a master account used exclusively for billing and the management of security controls throughout the organization.&lt;/p&gt;

&lt;p&gt;No matter how you choose to manage your accounts, whether single or multiple, IAM identities reside within AWS accounts and are assigned permissions to access resources. Identities can also be referred to as principals: a principal is simply someone or something in IAM that can be authenticated. Depending on the organization’s existing infrastructure, there are choices as to how it wants to manage identities.&lt;/p&gt;

&lt;p&gt;If an existing user directory is available and the organization wishes to continue using it, there are several options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Single Sign-On (SSO) with directory integrations&lt;/strong&gt;. This allows SSO capabilities with integration to products like Azure Active Directory. The Active Directory identities can be synced into the AWS SSO service; users sign in with their Active Directory credentials, and entitlements are mapped from users and groups to the required roles in the account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your own Security Assertion Markup Language (SAML) federation&lt;/strong&gt;. This option works best for corporate users in an Active Directory, either on premises or managed by AWS, running Active Directory Federation Services. The Active Directory administrator can then map the users and groups to the required roles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom federation for more advanced use cases&lt;/strong&gt;. This option can be used with custom identity tools or third-party SaaS offerings. It also allows more granular control over user permissions, as well as rules to map groups of users to AWS environments.&lt;/p&gt;

&lt;p&gt;If the organization does not have an existing user directory, the options are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS SSO user pool&lt;/strong&gt;. AWS offers an SSO service for access to accounts. Users are created directly in the SSO service; this is called an SSO user pool. Users authenticate via the SSO service and are then mapped to their required roles. Even if there are multiple accounts, each user will have only a single set of credentials, managed via the SSO pool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM users&lt;/strong&gt;. This is the most basic use case and common when getting started with AWS or for relatively small-scale accounts. Each IAM user has long-term credentials within AWS and is not mapped to roles. IAM users sign in directly to the console with a password, or via the CLI using the access key ID and secret access key associated with their long-term credentials.&lt;/p&gt;
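&lt;p&gt;For reference, the AWS CLI stores an IAM user’s long-term credentials in a shared credentials file, typically ~/.aws/credentials. A minimal sketch of that file looks like the following (the key values are placeholders, not real credentials):&lt;/p&gt;

```
[default]
aws_access_key_id = AKIA-PLACEHOLDER
aws_secret_access_key = placeholder-secret-key
```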

&lt;p&gt;The takeaway here is that access to AWS is either via a user or a role, but no matter which method is used for access, it can always be considered an IAM identity in an account.&lt;/p&gt;

&lt;h1&gt;
  
  
  IAM Roles
&lt;/h1&gt;

&lt;p&gt;As discussed in the previous section, a role is similar to a user in the sense that it is an identity with permissions determining what the identity may do. There are, however, some key differences. A role is not associated with one user but is intended to be assumed by any user who needs it. Roles also do not have any persistent, long-term credentials: when using a role, a user is in a temporary session for that role, with temporary security credentials that expire once the role session terminates. For the security of the overall account, roles are therefore generally preferable, since there are no long-term credentials to leak.&lt;/p&gt;

&lt;p&gt;A significant aspect of roles is that they can be assigned to applications for non-human access. Any application written for the cloud will most likely need to access other resources in the cloud. For example, an application running on an EC2 instance or a Lambda function may have to access an S3 bucket. There is a multitude of compute environments on AWS, and wherever one is supplying one’s own logic, it is almost always the case that some access request is being made to other resources via API calls. These API calls require authentication and authorization, and consequently an identity needs to exist for this to take place. This identity should be an IAM role.&lt;/p&gt;
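&lt;p&gt;As a brief sketch of what such a role looks like under the hood, every IAM role carries a trust policy stating who may assume it. For an application on EC2, the trusted principal is the EC2 service itself. The document below is a generic illustration, not taken from any particular account:&lt;/p&gt;

```python
import json

# Trust policy for a role intended to be assumed by EC2 instances.
# The permissions the application actually needs (e.g. S3 access)
# would live in a separate permissions policy attached to the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```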

&lt;p&gt;AWS compute environments are optimized to integrate with IAM roles. Engineers or developers working with applications in the cloud do not have to handle credentials when accessing resources. Compute resources are associated with roles, and the compute environment handles the temporary credentials, making them seamlessly available via the SDK to the application requesting access to the particular resource.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nIy6KLll--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xf51a5g3u31ck126j5k7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nIy6KLll--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xf51a5g3u31ck126j5k7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  IAM Policies
&lt;/h1&gt;

&lt;p&gt;In the previous sections we looked at identities. Identities on their own, however, are not sufficient for performing actions in the account. For identities to perform actions, they need associated permissions, and this is controlled via IAM policies.&lt;/p&gt;

&lt;p&gt;Policies, when associated with an identity or resource, define its permissions, and they are critical in the sense that AWS uses the principle of least privilege. It’s a simple yet effective idea: give the user or process the minimal set of permissions required to get the job done. This means that any IAM identity, before it is associated with a policy, cannot perform any action at all.&lt;/p&gt;
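&lt;p&gt;To make least privilege concrete, here is a minimal identity policy sketch (the bucket name is hypothetical) that allows reading objects from a single S3 bucket and nothing else:&lt;/p&gt;

```python
import json

# Least-privilege sketch: the identity may only read objects from
# one (hypothetical) bucket; every other action is implicitly denied
# because no policy grants it.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-app-bucket/*",
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```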

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hsS1e7fl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8paoy6zhil6j08ltuz6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hsS1e7fl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8paoy6zhil6j08ltuz6r.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the policies section of the IAM console one is presented with a list of managed policies. Each AWS service has a number of managed policies: curated sets of permissions created by AWS for the purpose of associating them with services or resources. Some managed policies correspond to job functions, for example DatabaseAdministrator, NetworkAdministrator and AdministratorAccess. These are very useful for assigning access to human users, since they include access to adjacent services. For example, the DatabaseAdministrator policy gives the user access to the relational database services as well as the associated limited access they would require in EC2 and other related services. An example of the AdministratorAccess policy is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w_8oALaZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kqsclhuqbjuygut5lik0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w_8oALaZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kqsclhuqbjuygut5lik0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This policy is simple, yet the most powerful: it allows all actions on all resources.&lt;/p&gt;

&lt;p&gt;An example of the DatabaseAdministrator policy is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZsHyeXKF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u7y09w3ldqdfjq4ap6z7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZsHyeXKF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u7y09w3ldqdfjq4ap6z7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This shows some of the resources and the related actions allowed by this policy.&lt;/p&gt;

&lt;p&gt;Managed policies with a combined set of permissions can be customised for particular use cases, or new policies can be created with customised combinations. As previously stated, these are great for human access requiring a more complex set of actions, but they should not be used for applications. Application code tends to do only very specific things, and policies should be written to correspond to these deterministic actions.&lt;/p&gt;

&lt;p&gt;Writing policies for applications should always follow the principle of least privilege, and this gets us into the realm of authorization, which is a subject all of its own related to policy best practices and will be covered in an upcoming blog.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Security in the AWS cloud is multifaceted. Unlike the days of hardware management and corporate datacenters, it takes a more contemporary approach: security is baked into all the offered services, taking a comprehensive and holistic view of cloud-native security and reliability. At the core of AWS security, though, is Identity and Access Management. I hope this blog has been somewhat useful as an introduction to IAM and, as always, I encourage feedback in the comment section.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AWS Global Infrastructure</title>
      <dc:creator>Michael Hyland</dc:creator>
      <pubDate>Tue, 25 Aug 2020 09:01:55 +0000</pubDate>
      <link>https://dev.to/theworldoftheweb/aws-global-infrastructure-k56</link>
      <guid>https://dev.to/theworldoftheweb/aws-global-infrastructure-k56</guid>
      <description>&lt;p&gt;Amazon Web Services (AWS) has a footprint on six of the seven continents of the world. This allows customers of AWS to deploy resources in locations that suite their specific needs and allowing users the ability to diversify resources in multiple regions for high availability and disaster recovery purposes. This short blog will focus on the global infrastructure of AWS, including regional frameworks.&lt;/p&gt;

&lt;h1&gt;
  
  
  Global Infrastructure
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--drN-6wVb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9d8ovcpnwfxd2mgao38m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--drN-6wVb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9d8ovcpnwfxd2mgao38m.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS’s global infrastructure comprises what are known as regions and availability zones. On the above map, the regions are represented by the yellow dots. Currently, there are 24 launched regions, three announced regions and 77 availability zones. The most recent region to come online, as of August 2020, was Cape Town, South Africa. The regions still to come are Spain; Osaka, Japan; and Jakarta, Indonesia. Most regions comprise three or more availability zones, with Beijing being the only region with two. AWS also offers a Content Delivery Network (CDN) called CloudFront, a service amounting to 216 points of presence: 205 edge locations and 11 regional edge caches.&lt;/p&gt;

&lt;h1&gt;
  
  
  Regions and Availability Zones
&lt;/h1&gt;

&lt;p&gt;Cloud providers use the terms "region" and "availability zone" in different ways, and it’s important to understand how AWS uses these terms to describe its infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6aYZe_Zn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qiedsl6o2glqutdxejiy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6aYZe_Zn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qiedsl6o2glqutdxejiy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A region is a geographical location around the world where clusters of datacenters are found. These clusters of datacenters are called availability zones. Almost every region has three or more availability zones, and each availability zone comprises one or more datacenters.&lt;/p&gt;

&lt;p&gt;AWS’s largest datacenter has over 300,000 individual servers, and the largest availability zone has 14 datacenters. Within a region, availability zones are positioned close enough together to allow synchronous operations, with a latency of one to two milliseconds between availability zones.&lt;/p&gt;

&lt;p&gt;This allows synchronous replicational operations between services, for example database servers, as if they were in the same datacenter. This is important, as it is recommended that resources are deployed to multiple availability zones to ensure high availability of services.&lt;/p&gt;

&lt;p&gt;The reason three or more availability zones are made accessible, and why it is recommended to deploy services across at least three of them, is to mitigate downtime if one or more availability zones become unreachable for any reason. This is illustrated in the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mvXg649P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ts4dqpcoxqxktsujlu2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mvXg649P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ts4dqpcoxqxktsujlu2c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Downtime can be described as a deployed service being out of action or unavailable for use. An analogy is a service elevator in a building. If the building has only a single elevator and that elevator is out of order, rapid access to all floors in the building is unavailable, and the tedious task of using the stairs is the only option.&lt;/p&gt;

&lt;p&gt;If this single elevator has an availability of 99%, over a period of one year it will be unavailable for 3 days and 15 hours. An availability of 99% may sound acceptable, but it is unsatisfactory when this analogy is shifted to the realm of modern enterprise internet applications. Now, say a second elevator is added to the building, in parallel to the first, with the same percentage of availability: the downtime drops to 52 minutes over the same period of a year. Adding a third in parallel, downtime is reduced to 31 seconds. This is because, statistically, the chance of all three elevators being out of order at the same time becomes highly improbable (though not impossible), ensuring an availability of 99.9999%, commonly known as six nines of uptime.&lt;/p&gt;
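&lt;p&gt;The arithmetic behind these figures can be checked with a few lines of Python: if each elevator is independently down 1% of the time, the chance that all n are down at once is 0.01 raised to the power n:&lt;/p&gt;

```python
# Expected yearly downtime for n independent units, each 99% available.
SECONDS_PER_YEAR = 365 * 24 * 3600
unavailability = 0.01  # each unit is down 1% of the time

for n in (1, 2, 3):
    all_down = unavailability ** n          # probability all n are down at once
    downtime = all_down * SECONDS_PER_YEAR  # expected seconds of downtime/year
    print(f"{n} unit(s): {downtime:,.1f} seconds of downtime per year")

# 315,360 s is roughly 3 days 15.6 hours; 3,153.6 s is roughly 52.6 minutes;
# and 31.5 s matches the three-elevator figure above.
```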

&lt;p&gt;This illustrates that the more instances of a service you run in parallel, the better. Beyond three in parallel, however, the number of seconds of downtime one can still shave off begins to taper. This concept is the main reason why AWS aims to have at least three availability zones in each region.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;I hope the reader has benefitted from this brief presentation on the concept of AWS regions and availability zones, and on why availability zones are designed the way they are. Readers are encouraged to comment with their own contributions to the topic, general feedback, and thoughts on anything they feel I have missed or stated incorrectly.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
