<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harris Geo 👨🏻‍💻</title>
    <description>The latest articles on DEV Community by Harris Geo 👨🏻‍💻 (@harrisgeo88).</description>
    <link>https://dev.to/harrisgeo88</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F429972%2Fa33a8e3a-2cc0-4478-a0a4-a71c861947ac.jpg</url>
      <title>DEV Community: Harris Geo 👨🏻‍💻</title>
      <link>https://dev.to/harrisgeo88</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/harrisgeo88"/>
    <language>en</language>
    <item>
      <title>Hexagonal Architecture: A High Level Overview</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 27 Feb 2022 20:33:45 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/hexagonal-architecture-a-high-level-overview-174g</link>
      <guid>https://dev.to/harrisgeo88/hexagonal-architecture-a-high-level-overview-174g</guid>
<description>&lt;p&gt;Hexagonal architecture is a great way to bring structure to your system by splitting it into different layers, each of which serves a specific purpose.&lt;/p&gt;

&lt;p&gt;Do not let the name trick you into thinking that it contains 6 pieces of logic. The hexagon is more of a visual metaphor: its many sides represent the multiple connections an app can have with external systems, which makes the pattern ideal for apps with many integrations. The hexagon is also a common component in UML diagrams.&lt;/p&gt;

&lt;p&gt;Now let’s talk about the 3 layers that make up Hexagonal architecture.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adapters&lt;/li&gt;
&lt;li&gt;Ports&lt;/li&gt;
&lt;li&gt;Domain&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;
  &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F630705%2F155397373-918ec18b-e8a4-4f2d-ac97-8e1d8dd12cdb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F630705%2F155397373-918ec18b-e8a4-4f2d-ac97-8e1d8dd12cdb.jpeg"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Adapters
&lt;/h2&gt;

&lt;p&gt;The way I like to think of adapters is as the I/O of our app: how does data reach our app, and where does that data go afterwards?&lt;/p&gt;

&lt;p&gt;That might be an HTTP endpoint that invokes our app, or an EventBridge event that our app is listening to. Then on the opposite end, once the app has executed its business logic, it has to do something with that data.&lt;/p&gt;

&lt;p&gt;A very common scenario is to store that data in a database like DynamoDB or MongoDB, or to send a notification to the customer. Adapters can be anything that allows our app to have inbound or outbound communication with the outside world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Domain
&lt;/h2&gt;

&lt;p&gt;The domain layer is where the data received by the app gets processed and the business logic is executed: calculations, data reshaping and other processes internal to the app.&lt;/p&gt;

&lt;p&gt;Isolating the domain logic is a great practice for building resilient systems that not only scale but are also easy to work with and modify. More on that later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ports
&lt;/h2&gt;

&lt;p&gt;The ports layer is, in my opinion, the part that causes the most confusion in this whole architectural pattern. Let’s see if we can make some sense of it.&lt;/p&gt;

&lt;p&gt;As we've already said, one of the selling points of Hexagonal architecture is that it can make our app domain agnostic. What that means is that our business logic should be decoupled from the specific tools and infrastructure that we use. In other words, our domain should not be dependent on the specifics of the database we use.&lt;/p&gt;

&lt;p&gt;Similarly, the domain should not know that we’re sending it data via an SQS queue. The port is the bridge that connects the domain with the adapter and holds the logic that decides what information should be passed from one to the other. In typed languages a port is usually an interface that specifies the shape of the data that adapters have to pass to the domain and vice versa.&lt;/p&gt;
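&lt;p&gt;As a rough sketch (the names here are made up for illustration), a port in TypeScript could simply be an interface that the domain depends on:&lt;/p&gt;

```typescript
// Hypothetical port: the domain only knows about this interface,
// never about MongoDB, SQS or any other concrete tool.
interface OrderRepository {
  storeOrder(order: { id: string; total: number }): void;
}

// Domain logic depends on the port, not on a concrete adapter.
function checkout(repository: OrderRepository, total: number): string {
  const order = { id: "order-1", total };
  repository.storeOrder(order);
  return order.id;
}
```

&lt;p&gt;Any adapter (a MongoDB client, an in-memory fake for tests) can then be plugged in, as long as it satisfies the interface.&lt;/p&gt;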

&lt;h2&gt;
  
  
  A use case
&lt;/h2&gt;

&lt;p&gt;Let’s take a classic example where our application is a RESTful API which receives data via an HTTP POST endpoint and stores it in MongoDB. The journey would look like this:&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F630705%2F155397207-c9248cc5-8e58-4550-ac9c-c44a8e96a6ad.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F630705%2F155397207-c9248cc5-8e58-4550-ac9c-c44a8e96a6ad.jpeg"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Our HTTP adapter will process the HTTP POST request and send the data to the port which will communicate that to the domain. This is where our internal logic will be executed. Stuff like internal calculations, reshaping data etc.&lt;/p&gt;

&lt;p&gt;Then we need to follow the same logic in reverse. The domain has some data it wants to store in the DB, so it sends it to the repository port, which then passes it to the database adapter.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Adapter (input)&lt;/th&gt;
&lt;th&gt;HTTP handler&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Port (to domain)&lt;/td&gt;
&lt;td&gt;HTTPHandler.retrieveData&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain&lt;/td&gt;
&lt;td&gt;Process data and send data to repository&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Port (to adapter)&lt;/td&gt;
&lt;td&gt;Repository.storeData&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Adapter (output)&lt;/td&gt;
&lt;td&gt;Connection with MongoDB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
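&lt;p&gt;A minimal TypeScript sketch of that journey (all names are hypothetical and the adapters are stubbed out):&lt;/p&gt;

```typescript
// Port towards the domain: the shape the input adapter must deliver.
interface CreateItemInput {
  name: string;
  price: number;
}

// Port towards the output adapter (the repository).
interface Repository {
  storeData(item: CreateItemInput): void;
}

// Domain: pure business logic, unaware of HTTP or MongoDB.
function processItem(input: CreateItemInput, repository: Repository): void {
  const item = { name: input.name.trim(), price: input.price };
  repository.storeData(item);
}

// Input adapter: an HTTP handler translating the request body into the port's shape.
function httpHandler(body: string, repository: Repository): number {
  const input: CreateItemInput = JSON.parse(body);
  processItem(input, repository);
  return 201; // created
}

// Output adapter: in a real app this would wrap a MongoDB client.
const mongoAdapter: Repository = {
  storeData(item: CreateItemInput): void {
    // e.g. collection.insertOne(item) in a real implementation
  },
};
```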

&lt;h2&gt;
  
  
  Benefits of Hexagonal architecture
&lt;/h2&gt;

&lt;p&gt;You’re probably wondering “yeah that’s cool but why would we go through all that trouble?”&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Flexibility meets structure
&lt;/h3&gt;

&lt;p&gt;To put Hexagonal architecture into a business perspective, introducing new features becomes painless thanks to the loosely coupled way our code is structured. We can change parts of our app without causing major disruption.&lt;/p&gt;

&lt;p&gt;In addition to that, our future selves will really thank us when it comes to debugging an error, as we will immediately know where to look.&lt;/p&gt;

&lt;p&gt;Did the app return the wrong data? That sounds like an issue in the domain layer. Was there a network issue during that request? Sounds like an adapter issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Isolated testing
&lt;/h3&gt;

&lt;p&gt;One of my favourite parts of Hexagonal architecture is that testing our code becomes much simpler. We have all experienced codebases that are really difficult to test due to their lack of boundaries, where all of the implementation is just thrown into a function / method / class / whatever you want to call it, that is 100+ lines long.&lt;/p&gt;

&lt;p&gt;With Hexagonal architecture each layer is a separate module we can test in isolation. This can be done by mocking its communication with the other layers, which gives us the flexibility to have smaller tests that are easier to write and faster to execute. As a bonus, that can result in higher test coverage.&lt;/p&gt;
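&lt;p&gt;For example (hypothetical names again), a domain function that talks to a port can be tested with a hand-rolled mock, no real adapter required:&lt;/p&gt;

```typescript
interface NotificationPort {
  notify(message: string): void;
}

// Domain logic under test: applies a discount and notifies the customer.
function applyDiscount(price: number, percent: number, notifier: NotificationPort): number {
  const discounted = price - (price * percent) / 100;
  notifier.notify("New price: " + discounted);
  return discounted;
}

// In a test, the adapter is replaced by a mock that just records calls.
const messages: string[] = [];
const mockNotifier: NotificationPort = {
  notify(message: string): void {
    messages.push(message);
  },
};
const result = applyDiscount(200, 10, mockNotifier);
```

&lt;p&gt;The assertions then only check the domain's output and the calls the mock received.&lt;/p&gt;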

&lt;h3&gt;
  
  
  3. Domain agnostic app
&lt;/h3&gt;

&lt;p&gt;The whole concept of “plug and play” adapters is great because it ensures our business logic does not rely on the tools we use.&lt;/p&gt;

&lt;p&gt;The more specific our business logic is to a certain infrastructure, the more difficult it will be for us in the future to move away from this infrastructure.&lt;/p&gt;

&lt;p&gt;How many times have we had to spend days if not weeks figuring out how to switch from Database A to Database B because our code was too tightly coupled to Database A? This leakage of tooling logic into our domain is something we need to be careful about.&lt;/p&gt;

&lt;p&gt;Hexagonal architecture guides us on how to have clear boundaries between the tools and our business logic. Then once we decide to move away from a tool, it should be as simple as adding a new adapter.&lt;/p&gt;

&lt;p&gt;Obviously I am not saying that migrating away from tools is going to be a piece of cake, but the transition within our app will probably be the smallest of our concerns.&lt;/p&gt;
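&lt;p&gt;As a toy illustration (made-up names, in-memory stand-ins instead of real database drivers), swapping databases is just adding one more adapter behind the same port:&lt;/p&gt;

```typescript
interface UserStore {
  saveUser(name: string): string;
}

// Adapter for "Database A" (an in-memory stand-in here).
class DatabaseAAdapter implements UserStore {
  saveUser(name: string): string {
    return "A:" + name;
  }
}

// Migrating to "Database B" means writing one new adapter...
class DatabaseBAdapter implements UserStore {
  saveUser(name: string): string {
    return "B:" + name;
  }
}

// ...while the domain code stays untouched.
function registerUser(store: UserStore, name: string): string {
  return store.saveUser(name.toLowerCase());
}
```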

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;For those of us working with Serverless apps, it is a known problem that we often have our entire logic inside the handler. Then slowly, once our application starts getting bigger and bigger, we either end up with gigantic handlers or some weird structure where infrastructure logic is mixed in with the business logic. This is where we need to introduce some boundaries, and Hexagonal architecture can help us with that.&lt;/p&gt;

&lt;p&gt;I have to admit that the first time I tried to write some code using this pattern, it felt really weird. I think the biggest issue was not really understanding what kind of problem Hexagonal architecture is trying to solve. With time though it started making a lot more sense, and since then it has been my number one choice for structuring the projects I've been working on.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>hexagonalarchitecture</category>
      <category>serverless</category>
    </item>
    <item>
      <title>AWS Learn In Public Week 9, CloudFormation And CloudFront</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Mon, 26 Apr 2021 08:05:44 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-9-cloudformation-and-cloudfront-1010</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-9-cloudformation-and-cloudfront-1010</guid>
<description>&lt;p&gt;This week we're going to talk about CloudFormation, the service we mentioned in last week's Elastic Beanstalk blog as the one used under the hood. After that, I also have a small introduction to CloudFront. Before I spoil anything else, let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1384131934142431235"&gt;CloudFormation&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This week we're going to talk about CloudFormation in AWS.&lt;/p&gt;

&lt;p&gt;What is CloudFormation? It is a way of declaring, in a template, what AWS infrastructure you want to provision. We can create, configure and delete AWS components and also reference them from each other.&lt;/p&gt;

&lt;p&gt;Resource types follow the format &lt;code&gt;AWS::Lambda::Function&lt;/code&gt; or &lt;code&gt;AWS::EC2::Instance&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;CloudFormation supports most AWS services and the full list can be found here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://t.co/Fm29w2f2On?amp=1"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1384866273646022657"&gt;CloudFormation Parameters and more&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS CloudFormation parameters&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can specify inputs inside our templates, which are great for reusing templates and for referencing values of services after they are created&lt;/li&gt;
&lt;li&gt;That way we won't have to re-upload templates all the time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Referencing
&lt;/h3&gt;

&lt;p&gt;We can use referencing to use params anywhere within the template.&lt;/p&gt;

&lt;p&gt;The intrinsic function is &lt;code&gt;Ref&lt;/code&gt; and in the yaml config it is shortened to &lt;code&gt;!Ref&lt;/code&gt;&lt;/p&gt;
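&lt;p&gt;A minimal (hypothetical) template fragment showing a parameter being referenced with &lt;code&gt;!Ref&lt;/code&gt;:&lt;/p&gt;

```yaml
Parameters:
  InstanceTypeParam:
    Type: String
    Default: t2.micro

Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceTypeParam
```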

&lt;h3&gt;
  
  
  Pseudo params
&lt;/h3&gt;

&lt;p&gt;We can also use pseudo params for AWS related values that we do not want to store in our code and again use them at any time. Examples are the AWS account id with &lt;code&gt;AWS::AccountId&lt;/code&gt;, the region with &lt;code&gt;AWS::Region&lt;/code&gt; and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mappings
&lt;/h3&gt;

&lt;p&gt;We also have mappings, which are fixed variables useful for adding sets of hardcoded data to our templates. They are queried with &lt;code&gt;Fn::FindInMap&lt;/code&gt;, which allows us to search within maps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DTWJjMvU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EzgG6yKXEAQm-ca%3Fformat%3Djpg%26name%3Dlarge" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DTWJjMvU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EzgG6yKXEAQm-ca%3Fformat%3Djpg%26name%3Dlarge" alt="https://pbs.twimg.com/media/EzgG6yKXEAQm-ca?format=jpg&amp;amp;name=large"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Outputs
&lt;/h3&gt;

&lt;p&gt;Then we have outputs, which are optional but work really well when we want to take the value of a service that was just created and reference it from another stack or resource.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HQ9z5BvV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EzgIB7QWYAEC5Qn%3Fformat%3Djpg%26name%3Dmedium" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HQ9z5BvV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EzgIB7QWYAEC5Qn%3Fformat%3Djpg%26name%3Dmedium" alt="https://pbs.twimg.com/media/EzgIB7QWYAEC5Qn?format=jpg&amp;amp;name=medium"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross Reference
&lt;/h3&gt;

&lt;p&gt;Then we have cross stack references, where another template uses a value (e.g. a security group) exported by the first stack. We can reference it with &lt;code&gt;Fn::ImportValue&lt;/code&gt;. Once a stack is referenced in another template, all the references need to be deleted before the first stack can be deleted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--igBelTlk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EzurkNBWUAU6iRu%3Fformat%3Djpg%26name%3Dmedium" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--igBelTlk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EzurkNBWUAU6iRu%3Fformat%3Djpg%26name%3Dmedium" alt="https://pbs.twimg.com/media/EzurkNBWUAU6iRu?format=jpg&amp;amp;name=medium"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conditions
&lt;/h3&gt;

&lt;p&gt;We can control the creation of resources based on conditions, such as the environment stage, the AWS region etc.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conditions can reference other conditions, parameter values or mappings&lt;/li&gt;
&lt;li&gt;We have intrinsic functions like &lt;code&gt;Fn::And&lt;/code&gt;, &lt;code&gt;Fn::Equals&lt;/code&gt;, &lt;code&gt;Fn::If&lt;/code&gt; etc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CqZ_772y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EzusprrXIAAHznE%3Fformat%3Djpg%26name%3Dmedium" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CqZ_772y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EzusprrXIAAHznE%3Fformat%3Djpg%26name%3Dmedium" alt="https://pbs.twimg.com/media/EzusprrXIAAHznE?format=jpg&amp;amp;name=medium"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1385927266144718849"&gt;CloudFormation Rollbacks&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS CloudFormation rollbacks&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If a stack creation fails, by default all underlying resources get deleted&lt;/li&gt;
&lt;li&gt;We also have an option to disable that and troubleshoot the error&lt;/li&gt;
&lt;li&gt;If a stack update fails it automatically rolls back to the previous state that was working&lt;/li&gt;
&lt;li&gt;Same as when creating, we can look at the logs and debug what exactly went wrong&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1386397244530716674"&gt;CloudFront&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is AWS CloudFront? It is a CDN (Content Delivery Network) formed of distributions and is mainly used to improve a site’s performance, as content is cached at multiple edge locations around the world. It also provides DDoS protection and integrates with AWS Shield and the AWS Web Application Firewall.&lt;/p&gt;

&lt;p&gt;CloudFront can serve content from origins such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 buckets for distributing and caching files at the edge&lt;/li&gt;
&lt;li&gt;Other custom origins like ALB, EC2, S3 websites and any HTTP backend you want&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, clients send requests to the nearest of the many edge locations around the world, which forwards them to the origin along with any query params and headers. The origin then responds with the requested assets, which get cached at the edge location for future requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Both CloudFormation and CloudFront are services that I have not used much in the past but I understand their importance in our AWS stack. CloudFormation is definitely a service that requires a deeper dive with some coding examples (maybe a future blog post) to really understand its value.&lt;/p&gt;

&lt;p&gt;Next week we're finally going to talk about the most interesting service in AWS for being a developer. Shall I ruin the surprise? Whatever. Next week will be about AWS Lambda!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learninpublic</category>
      <category>cloudformation</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>AWS Learn In Public Week 8, Elastic Beanstalk</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 18 Apr 2021 19:53:31 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-8-elastic-beanstalk-1hcb</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-8-elastic-beanstalk-1hcb</guid>
<description>&lt;p&gt;This has been an exciting week as London is starting to open up little by little. We agreed with the rest of my team at work that whoever wants to can start going to the office twice per week. Finally, after a long winter of staying at home the whole day every day, we can start seeing some people again. The only downside is that it broke my routine of tweeting about AWS services.&lt;/p&gt;

&lt;p&gt;In this blog post we're going to talk about Beanstalk, which is a really interesting service as it can glue together the majority of the stuff you need when creating an environment. I don't understand why not everyone uses it.&lt;/p&gt;

&lt;p&gt;Definitely, once I finish this project and get my certification, this will be one of the first services that I'm going to dive deep into. Without further ado, let's check it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1381582874453942272"&gt;AWS Beanstalk&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is AWS Elastic Beanstalk?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A platform as a service for deploying applications to AWS&lt;/li&gt;
&lt;li&gt;It is a layer for configuring how to use other services like EC2, Auto Scaling Groups, Load Balancers, RDS etc.&lt;/li&gt;
&lt;li&gt;Using Elastic Beanstalk itself is free; you only pay for the underlying resources&lt;/li&gt;
&lt;li&gt;Elastic Beanstalk is a managed service and can also be used for deployment strategies&lt;/li&gt;
&lt;li&gt;The idea behind it is that the developer is responsible for the code and Beanstalk for the infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  There are 3 architecture models
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Single Instance Deployment which is great for dev environments&lt;/li&gt;
&lt;li&gt;Load Balancer with Auto Scaling Groups which is the standard model for production web apps&lt;/li&gt;
&lt;li&gt;Auto Scaling Groups only which is mainly for analytics and workers services&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can version our applications and promote them from one environment to the next until we reach production. We can customise these stages to whatever we want, e.g. dev - staging - prod. A rollback feature is also available.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1382320050892328967"&gt;Deployment options&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  All at once
&lt;/h3&gt;

&lt;p&gt;We have the "&lt;strong&gt;all at once&lt;/strong&gt;" option where you can deploy all instances in one go.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This option has downtime but it is the fastest way to deploy&lt;/li&gt;
&lt;li&gt;It is great for dev environments that require quick iterations and also there are no additional costs to it&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Rolling
&lt;/h3&gt;

&lt;p&gt;We have the &lt;strong&gt;rolling&lt;/strong&gt; option where we slowly update the current instances with new ones until our application only contains the new code. Let's say our app has 4 instances: 2 of them are going to be updated with the new version (running below capacity in the meantime) and then the next 2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rolling with additional batches
&lt;/h3&gt;

&lt;p&gt;We have the &lt;strong&gt;rolling with additional batches&lt;/strong&gt; option. Here we use the same logic as before but instead of updating the current instances, we add a few extra ones. The deployment takes longer and has a small extra cost, but it is good for production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Immutable
&lt;/h3&gt;

&lt;p&gt;We have the &lt;strong&gt;immutable&lt;/strong&gt; option where we spin up a completely new set of instances (double the amount in total) and, once the new version is up and running, we terminate the old ones. This option has zero downtime and is great for prod, but it is quite costly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Blue / Green deployments
&lt;/h3&gt;

&lt;p&gt;Finally we have the &lt;strong&gt;blue / green deployment&lt;/strong&gt; option&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We create a new env with the new app version (green) and direct 10% of the traffic to it&lt;/li&gt;
&lt;li&gt;The old env (blue) will handle 90% of the traffic&lt;/li&gt;
&lt;li&gt;We set up weighted policies in Route53&lt;/li&gt;
&lt;li&gt;Once we are happy, Beanstalk can swap urls&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1383032620066439172"&gt;Beanstalk under the hood&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk a bit more about how AWS Beanstalk works under the hood. It basically relies on AWS CloudFormation to provision any other AWS services (Infrastructure as Code). To do that we can define an &lt;code&gt;.ebextensions&lt;/code&gt; folder inside which we provision any service we want.&lt;/p&gt;
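&lt;p&gt;For instance (a hypothetical fragment), a file such as &lt;code&gt;.ebextensions/custom.config&lt;/code&gt; can set environment options and declare extra CloudFormation resources:&lt;/p&gt;

```yaml
# .ebextensions/custom.config (illustrative example)
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ENV: production

Resources:
  AppBucket:
    Type: AWS::S3::Bucket
```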

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1383800508247277576"&gt;Running Docker with Beanstalk&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Single Docker Container
&lt;/h3&gt;

&lt;p&gt;Single Docker for simple setups where we run our app as a single Docker container.&lt;/p&gt;

&lt;p&gt;We provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;Dockerfile&lt;/code&gt; which will be used to build and run our container&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;Dockerrun.aws.json&lt;/code&gt; v1 file for existing images which can be in ECR or Dockerhub&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mode uses EC2 under the hood.&lt;/p&gt;
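&lt;p&gt;A minimal &lt;code&gt;Dockerrun.aws.json&lt;/code&gt; v1 file could look like this (the repository URI and port are placeholders):&lt;/p&gt;

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8080
    }
  ]
}
```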

&lt;h3&gt;
  
  
  Multi Docker Container
&lt;/h3&gt;

&lt;p&gt;Multi Docker which runs multiple containers per EC2.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It will create an ECS cluster, EC2 instances for it, a Load Balancer in High Availability mode, task definitions and executions.&lt;/li&gt;
&lt;li&gt;It requires a &lt;code&gt;Dockerrun.aws.json&lt;/code&gt; v2 config file at the root of the project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Multi Docker also uses the &lt;code&gt;Dockerrun.aws.json&lt;/code&gt; v2 config file to generate the ECS task definition. We need to have our Docker images prebuilt and stored in ECR.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I really don't understand why more people don't use Beanstalk. It really simplifies the whole deployment process and puts all the services together.&lt;/p&gt;

&lt;p&gt;On the surface this looks really simple, but Beanstalk is just the tip of the iceberg. Next week we are going to talk about CloudFormation and a little bit of CloudFront.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learninpublic</category>
      <category>beanstalk</category>
    </item>
    <item>
<title>AWS Learn In Public Week 7, ECS, ECR and Fargate</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 11 Apr 2021 21:19:25 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-7-ecr-ecr-and-fargate-216k</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-7-ecr-ecr-and-fargate-216k</guid>
<description>&lt;p&gt;How's everyone doing? I have to admit this is the section where I found it most difficult to understand all of the concepts within ECS, so I hope that my notes will help explain it in a simple way. Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1379041394141003781"&gt;ECS&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is AWS ECS (Elastic Container Service)?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS is Amazon's platform for managing Docker containers.&lt;/li&gt;
&lt;li&gt;ECS clusters are groups of EC2 instances that run the ECS agent for Docker containers, which registers each instance with the cluster.&lt;/li&gt;
&lt;li&gt;EC2 instances run an AMI designed for ECS&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1379406049552310272"&gt;ECS Task Definitions, Services and Clusters&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Task Definitions
&lt;/h3&gt;

&lt;p&gt;Task Definitions are metadata in JSON format that give instructions to ECS on how to run a Docker container.&lt;/p&gt;

&lt;p&gt;They contain crucial information such as image names, port bindings for container and host, the memory and CPU required, environment variables and networking information.&lt;/p&gt;
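&lt;p&gt;A stripped-down (illustrative) task definition could look like this; all names and values are placeholders:&lt;/p&gt;

```json
{
  "family": "my-web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "memory": 512,
      "cpu": 256,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ],
      "environment": [
        { "name": "APP_ENV", "value": "production" }
      ]
    }
  ]
}
```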

&lt;h3&gt;
  
  
  ECS Services
&lt;/h3&gt;

&lt;p&gt;ECS services configure the way tasks run, how many of them should run and how they are spread amongst our EC2 instances. They can also be linked to load balancers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clusters
&lt;/h3&gt;

&lt;p&gt;We need to create an ECS Cluster to which we add services and task definitions.&lt;/p&gt;

&lt;p&gt;Clusters can be of type EC2 or Fargate. For EC2 clusters the corresponding EC2 instances are automatically created along with an auto scaling group.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A task definition depends on an ECS service in order to run&lt;/li&gt;
&lt;li&gt;Creating an ECS service also pulls the corresponding Docker container image onto the EC2 instances&lt;/li&gt;
&lt;li&gt;We can update the number of tasks in the ECS service and the corresponding ASG if we want to scale&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1379770448989155347"&gt;ECR&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS ECR (Elastic Container Registry) which is a private repo for storing our Docker images&lt;/p&gt;

&lt;p&gt;It is used for building custom images locally and then pushing them to ECR so that they are available to use in ECS.&lt;/p&gt;

&lt;p&gt;To push an image to a repo, in our CLI we need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Authenticate our Docker client to the ECR registry&lt;/li&gt;
&lt;li&gt;Build our Docker images&lt;/li&gt;
&lt;li&gt;Tag the image&lt;/li&gt;
&lt;li&gt;Push the image into ECR&lt;/li&gt;
&lt;/ol&gt;
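&lt;p&gt;Those four steps map to CLI commands roughly like this (account id, region and repo name are placeholders):&lt;/p&gt;

```shell
# 1. Authenticate the Docker client to the ECR registry
aws ecr get-login-password --region eu-west-2 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-2.amazonaws.com

# 2. Build the image
docker build -t my-app .

# 3. Tag it with the repo URI
docker tag my-app:latest 123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:latest

# 4. Push it to ECR
docker push 123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:latest
```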

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1380130571800612872"&gt;Fargate&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS ECS with Fargate&lt;/p&gt;

&lt;p&gt;Fargate is a Serverless way of launching ECS Clusters&lt;/p&gt;

&lt;p&gt;We only create the task definitions and AWS will run our containers. To scale we only increase the task number&lt;/p&gt;

&lt;p&gt;We don't have to worry about managing EC2 instances anymore&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1380494469494046720"&gt;ECS Task placement&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about ECS tasks placement&lt;/p&gt;

&lt;p&gt;Task placement strategies determine where to place newly launched EC2-type tasks. Task placement constraints are based on CPU, memory and available ports. The same logic also applies when scaling in and terminating tasks.&lt;/p&gt;

&lt;p&gt;For task placement strategies we have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Binpack: Based on the least available amount of memory or CPU which is great for cost savings&lt;/li&gt;
&lt;li&gt;Random: without any logical order&lt;/li&gt;
&lt;li&gt;Spread: Based on specified value like instanceId&lt;/li&gt;
&lt;li&gt;A mix of the above&lt;/li&gt;
&lt;/ol&gt;
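&lt;p&gt;As a sketch (the cluster, service and task definition names are hypothetical), strategies can be combined when creating a service:&lt;/p&gt;

```shell
# Spread tasks across availability zones first,
# then binpack on memory within each zone
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-web-app:1 \
  --desired-count 4 \
  --placement-strategy type=spread,field=attribute:ecs.availability-zone type=binpack,field=memory
```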

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Most of the concepts mentioned above may not make perfect sense until you see them being applied in the AWS console, but they are still good theory to know. Maybe that can be a future version of this set of tutorials :)&lt;/p&gt;

&lt;p&gt;Next week we're talking about Elastic Beanstalk, where things start getting more interesting as we see a combination of several services.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ecs</category>
      <category>ecr</category>
    </item>
    <item>
      <title>AWS Learn In Public Week 6, Advanced S3, Glacier And Athena</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Mon, 05 Apr 2021 15:13:04 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-6-advanced-s3-glacier-and-athena-2cbb</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-6-advanced-s3-glacier-and-athena-2cbb</guid>
<description>&lt;p&gt;Week 6 of my AWS learning journey. This week we're diving deeper into what S3 can provide us. We're also going to do a quick overview of a couple of new services, Glacier and Athena. Let's go into the details.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1376504676527468546"&gt;S3 Replication&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS S3 Replication&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To replicate a bucket we must enable versioning in both the source and destination buckets&lt;/li&gt;
&lt;li&gt;We have Cross Region Replication (CRR) which is ideal for compliance, lower latency access and also when you want to replicate across accounts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then we have same region replication (SRR) which can be for log aggregation or live replication between production and test accounts&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In either case, buckets can be in different accounts&lt;/li&gt;
&lt;li&gt;Copy is asynchronous and we must give proper IAM permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After enabling S3 Replication, only new objects are replicated, not the objects that already existed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When deleting without a version ID, a delete marker is added, which is not replicated.&lt;/li&gt;
&lt;li&gt;When deleting with a version ID, that version is deleted in the source bucket and the delete is not replicated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We cannot do "chaining" of replication.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;That means that if Bucket A has a replication into Bucket B which then has a replication into Bucket C and we add an object into bucket A, it won't make it all the way to Bucket C&lt;/li&gt;
&lt;/ul&gt;
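&lt;p&gt;As a rough sketch of how replication is wired up (bucket names and the IAM role ARN below are placeholders), a replication rule is attached to the source bucket as JSON:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Versioning must already be enabled on both buckets
aws s3api put-bucket-replication --bucket source-bucket \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [{
      "Status": "Enabled",
      "Priority": 1,
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Filter": {},
      "Destination": { "Bucket": "arn:aws:s3:::destination-bucket" }
    }]
  }'
&lt;/code&gt;&lt;/pre&gt;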

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1376867066246881282"&gt;S3 pre-signed URLs&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS S3 pre-signed URLs&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can generate links that carry the same permissions as the user who generated them, just like opening the file via the AWS console.&lt;/li&gt;
&lt;li&gt;Such links can be generated using the CLI (downloads) or the SDK (uploads)&lt;/li&gt;
&lt;li&gt;By default they are valid for 3600 seconds, but we can change the timeout with the &lt;code&gt;--expires-in&lt;/code&gt; argument (in seconds)&lt;/li&gt;
&lt;li&gt;We can use them for cases like

&lt;ul&gt;
&lt;li&gt;Sharing links to content only with logged-in users&lt;/li&gt;
&lt;li&gt;Allowing temporary actions for users&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the CLI command for it&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pbs.twimg.com/media/Exuf1ucXAAM8HgC?format=jpg&amp;amp;name=medium"&gt;https://pbs.twimg.com/media/Exuf1ucXAAM8HgC?format=jpg&amp;amp;name=medium&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1377251967987286024"&gt;S3 Storage classes and Glacier&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We're going to do a quick overview of AWS S3 Storage classes and Glacier!&lt;/p&gt;

&lt;h3&gt;
  
  
  1. S3 Standard: general purpose storage with high durability across multiple AZs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;99.99% availability throughout the year.&lt;/li&gt;
&lt;li&gt;S3 standard is great for Data analytics, mobile &amp;amp; gaming applications and more&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. S3 Standard-Infrequent Access (IA)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;For data that are less frequently accessed but require almost instant access.&lt;/li&gt;
&lt;li&gt;It also offers high durability across multiple AZs&lt;/li&gt;
&lt;li&gt;It has 99.9% availability&lt;/li&gt;
&lt;li&gt;IA is good for Disaster recovery, backups etc&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. S3 One Zone-Infrequent Access
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The same principle, but in a single AZ and around 20% cheaper&lt;/li&gt;
&lt;li&gt;It has 99.5% availability&lt;/li&gt;
&lt;li&gt;Supports SSL for data in transit and encryption at rest&lt;/li&gt;
&lt;li&gt;Good for secondary backups and other data you can recreate&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. S3 Intelligent tiering
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Similar to S3 standard but automatically moves objects between tiers based on access patterns&lt;/li&gt;
&lt;li&gt;It has a small monthly monitoring and auto-tiering fee&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Glacier
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Low cost and meant for archiving or backups&lt;/li&gt;
&lt;li&gt;Good for long term retention like 10s of years&lt;/li&gt;
&lt;li&gt;Archives are stored in vaults&lt;/li&gt;
&lt;li&gt;There is a cost to retrieve which gets more expensive for faster retrievals (1 minute to 12 hours)&lt;/li&gt;
&lt;li&gt;The minimum storage duration is 90 days.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. Glacier Deep Archive
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;For very long term storage, and much cheaper than all the other options&lt;/li&gt;
&lt;li&gt;However, the fastest retrieval takes 12 hours&lt;/li&gt;
&lt;li&gt;The minimum storage duration is 180 days (half a year)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To give you an idea of the costs: S3 Standard is $0.023 per GB, while Glacier Deep Archive is $0.00099 per GB&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/s3/pricing/"&gt;https://aws.amazon.com/s3/pricing/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1377600772687466496"&gt;S3 lifecycle rules&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS S3 lifecycle rules&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can move between storage classes based on how often we access our objects.&lt;/li&gt;
&lt;li&gt;For infrequently accessed objects, move them to Standard IA&lt;/li&gt;
&lt;li&gt;For archives and objects we don't need instantly, we can move them to Glacier or Deep Archive&lt;/li&gt;
&lt;li&gt;We can automate that with transition actions, which define when objects are transitioned to another storage class

&lt;ul&gt;
&lt;li&gt;Example: Move objects to Standard IA class 60 days after creation. Then archive them to Glacier after 6 months&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;We can also set expiration actions, where we configure objects to be deleted after a set amount of time

&lt;ul&gt;
&lt;li&gt;Example: access log files can be set to be deleted after 365 days. Expiration can also clean up old object versions or incomplete multi-part uploads&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;We can create these rules based on prefixes like &lt;code&gt;s3://somebucket/archives/*&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;We can also add rules based on object tags like Department: Sales&lt;/li&gt;
&lt;/ul&gt;
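&lt;p&gt;The transition and expiration examples above can be expressed as a single lifecycle configuration. A sketch with placeholder names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Move to Standard IA after 60 days, Glacier after 6 months, delete after a year
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "archives/" },
      "Transitions": [
        { "Days": 60, "StorageClass": "STANDARD_IA" },
        { "Days": 180, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }]
  }'
&lt;/code&gt;&lt;/pre&gt;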

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1378316616996286464"&gt;AWS Athena&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Today let's talk about AWS Athena&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is a serverless service to perform analytics directly against S3 files, using SQL to query them&lt;/li&gt;
&lt;li&gt;It supports CSV, JSON and more.&lt;/li&gt;
&lt;li&gt;Athena is quite common for cases like BI, analytics, reporting, and querying ELB and CloudTrail logs&lt;/li&gt;
&lt;/ul&gt;
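&lt;p&gt;As a sketch, a query can be fired via the CLI like this (the table and output bucket are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Query ELB logs that have been catalogued as the elb_logs table
aws athena start-query-execution \
  --query-string "SELECT status_code, COUNT(*) FROM elb_logs GROUP BY status_code" \
  --result-configuration OutputLocation=s3://my-query-results/
&lt;/code&gt;&lt;/pre&gt;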

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I have to admit, most of these features are not really that common out there. In my last 2-3 jobs I have only seen a very basic way of using S3 compared to what we discussed here. However, based on the examples, these features must definitely be in use by companies, especially when there are so many different storage tiers.&lt;/p&gt;

&lt;p&gt;Next week we are going to talk about the service that made me want to learn more about how AWS works. That is ECS, where we are going to talk about managing Docker containers, ECR and Fargate.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learninpublic</category>
      <category>s3</category>
      <category>glacier</category>
    </item>
    <item>
      <title>AWS Learn In Public Week 5, S3</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 28 Mar 2021 19:56:08 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-5-s3-51bd</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-5-s3-51bd</guid>
      <description>&lt;p&gt;This week we're going to talk about S3. S3 is the service that the majority of developers are the most familiar with. Yet it is a service that can go really deep and has some crazy features that are not that known to the public. For that reason, I have split it in 2 parts.&lt;/p&gt;

&lt;p&gt;In the first one we're going to talk about the basics, along with terminology and some concepts like CORS and more. Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1373620676485992448"&gt;S3 Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Time for AWS S3 (Simple Storage Service)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A service to store objects (files) in buckets (directories)&lt;/li&gt;
&lt;li&gt;Each bucket is created at the region level and its name needs to be globally unique&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1373709764727689219"&gt;S3 Objects&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS S3 Objects (files)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Everything after the bucket name is called a key&lt;/li&gt;
&lt;li&gt;It is composed of a prefix and the object name&lt;/li&gt;
&lt;li&gt;The UI will trick you into thinking S3 has a concept of directories within buckets, but it doesn't&lt;/li&gt;
&lt;li&gt;A single upload is capped at 5GB; bigger objects require multi-part upload&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WKBF76tK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/ExBlwG9U8AERp1S%3Fformat%3Dpng%26name%3Dsmall" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WKBF76tK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/ExBlwG9U8AERp1S%3Fformat%3Dpng%26name%3Dsmall" alt="https://pbs.twimg.com/media/ExBlwG9U8AERp1S?format=png&amp;amp;name=small"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1373985082399019010"&gt;S3 Versioning&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about versioning in AWS S3&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It can be enabled at the bucket level, and uploading to the same key creates a new version of the object&lt;/li&gt;
&lt;li&gt;It is a good practice that protects against accidental deletes and lets us easily roll back&lt;/li&gt;
&lt;li&gt;Files uploaded before versioning was enabled have their version set to null&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m6H7grcf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/ExFgJqTVgAIiQR1%3Fformat%3Djpg%26name%3Dlarge" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m6H7grcf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/ExFgJqTVgAIiQR1%3Fformat%3Djpg%26name%3Dlarge" alt="https://pbs.twimg.com/media/ExFgJqTVgAIiQR1?format=jpg&amp;amp;name=large"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1374347463570591745"&gt;S3 Object Encryption&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  SSE-S3. It encrypts S3 objects using keys handled and managed by AWS.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It is a server side encryption with the AES-256 algorithm and requires the header &lt;code&gt;"x-amz-server-side-encryption": "AES256"&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  SSE-KMS, which is managed by KMS (Key Management Service).
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Here we can manage access and also create an audit trail with access history.&lt;/li&gt;
&lt;li&gt;It is also a server side encryption and requires the header &lt;code&gt;"x-amz-server-side-encryption": "aws:kms"&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  SSE-C which is for managing your own keys. S3 doesn't store the key you provide.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It is HTTPS only and the encryption key is required in the header in every request&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Finally we have Client Side Encryption, which requires a library to be configured manually on the client side.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;One such library is the Amazon S3 Encryption Client.&lt;/li&gt;
&lt;li&gt;With this encryption type, the client needs to encrypt the data before upload and decrypt it after download&lt;/li&gt;
&lt;/ul&gt;
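&lt;p&gt;For the two server side options, the headers above are set for us when uploading via the CLI. A sketch with placeholder names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# SSE-S3: AES-256 with keys handled by AWS
aws s3 cp my-file.txt s3://my-bucket/ --sse AES256

# SSE-KMS: keys managed through KMS (the key alias is a placeholder)
aws s3 cp my-file.txt s3://my-bucket/ --sse aws:kms --sse-kms-key-id alias/my-key
&lt;/code&gt;&lt;/pre&gt;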

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1374708346742140928"&gt;S3 Security&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Today let's talk about AWS S3 Security.&lt;/p&gt;

&lt;p&gt;We can have&lt;/p&gt;

&lt;h3&gt;
  
  
  👤 User based.
&lt;/h3&gt;

&lt;p&gt;IAM policies for API calls allowed per user from the IAM console&lt;/p&gt;

&lt;h3&gt;
  
  
  🪣 Resource based.
&lt;/h3&gt;

&lt;p&gt;Bucket policies that allow cross account access&lt;br&gt;
Object ACLs (Access Control Lists) for finer-grained control&lt;br&gt;
Bucket ACLs, which are less common&lt;/p&gt;

&lt;h3&gt;
  
  
  An IAM principal can access an S3 object if
&lt;/h3&gt;

&lt;p&gt;The user IAM permissions allow it&lt;/p&gt;

&lt;p&gt;or the resource policy allows it&lt;/p&gt;

&lt;p&gt;and there is no explicit deny&lt;/p&gt;

&lt;h3&gt;
  
  
  We can enable MFA delete.
&lt;/h3&gt;

&lt;p&gt;This needs the bucket to be versioned and will ask the user to enter an MFA code in order to delete an object.&lt;/p&gt;

&lt;p&gt;This can only be enabled from the AWS CLI&lt;/p&gt;

&lt;h3&gt;
  
  
  Finally we have pre-signed URLs.
&lt;/h3&gt;

&lt;p&gt;These are URLs that give the same permissions as the user generating them and are valid for a limited time.&lt;/p&gt;

&lt;p&gt;These can be generated from the AWS CLI or the SDK&lt;/p&gt;

&lt;p&gt;A common use case is websites that offer a premium video service for logged-in users&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.notion.so/AWS-learn-in-public-week-5-S3-bf3a5f7a58214419bc129cac30251ad1"&gt;S3 bucket policies&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk a bit more about AWS S3 bucket policies&lt;/p&gt;

&lt;h3&gt;
  
  
  They are JSON based policies for
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Resources: buckets and objects&lt;/li&gt;
&lt;li&gt;Actions: the set of API calls to allow or deny&lt;/li&gt;
&lt;li&gt;Principal: the account or user to apply the policy to&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The AWS recommended way for creating policies is via the policy generator
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://t.co/H1yBJTHP2X?amp=1"&gt;https://awspolicygen.s3.amazonaws.com/policygen.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Examples of policies.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Grant public access to the bucket&lt;/li&gt;
&lt;li&gt;Force upload encryption for objects&lt;/li&gt;
&lt;li&gt;Grant access to another account (cross account)&lt;/li&gt;
&lt;/ul&gt;
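&lt;p&gt;As a sketch of the "force upload encryption" example (the bucket name is a placeholder), a policy that denies unencrypted uploads looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;aws s3api put-bucket-policy --bucket my-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyUnencryptedUploads",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my-bucket/*",
    "Condition": { "Null": { "s3:x-amz-server-side-encryption": "true" } }
  }]
}'
&lt;/code&gt;&lt;/pre&gt;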

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1375070983908777995"&gt;CORS&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about the CORS concept and how is it applied in AWS S3&lt;/p&gt;

&lt;p&gt;CORS stands for Cross-Origin Resource Sharing and is a web browser mechanism that allows requests to other origins while visiting the main origin&lt;/p&gt;

&lt;p&gt;CORS has 3 components. &lt;/p&gt;

&lt;p&gt;For &lt;a href="https://t.co/yGseWkbAye?amp=1"&gt;https://example.com&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Protocol which in this case is HTTPS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Host which is &lt;a href="https://t.co/AhoRncOJoT?amp=1"&gt;example.com&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Port which is 443 due to HTTPS (80 for HTTP)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Websites on the same domain and subdomain are same origin&lt;/p&gt;

&lt;p&gt;Like &lt;a href="https://t.co/6ysGHnS6Nf?amp=1"&gt;https://example.com/hello&lt;/a&gt; or &lt;a href="https://t.co/5EJFnlQtYz?amp=1"&gt;https://example.com/about&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Different origins are all the rest like &lt;a href="https://t.co/I5BX7ylFLA?amp=1"&gt;https://client.example.com&lt;/a&gt; and &lt;a href="https://t.co/rFzGsWEfv0?amp=1"&gt;https://api.example.com&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  For S3 CORS
&lt;/h3&gt;

&lt;p&gt;If a client makes a cross-origin request to our S3 buckets, we need to enable the correct CORS headers&lt;br&gt;
We can add * to allow all origins, or specify the one we want&lt;/p&gt;
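&lt;p&gt;A sketch of such a CORS configuration (the bucket and origin are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Allow GETs from a single origin; use "*" in AllowedOrigins to allow all
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration '{
  "CORSRules": [{
    "AllowedOrigins": ["https://client.example.com"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }]
}'
&lt;/code&gt;&lt;/pre&gt;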

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1375434377019019269"&gt;S3 Consistency model&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about the AWS S3 consistency model.&lt;/p&gt;

&lt;p&gt;The problem comes from data replication after we add new objects.&lt;/p&gt;

&lt;p&gt;Sometimes if we query that object straight away instead of a 200, we may get a 404.&lt;/p&gt;

&lt;p&gt;This is called "eventually consistent"&lt;/p&gt;

&lt;p&gt;Eventual consistency can also happen when updating or deleting objects.&lt;/p&gt;

&lt;p&gt;For updates we might get the old version of the object and for deletes we might still get a 200.&lt;/p&gt;

&lt;p&gt;Here's more on S3's "strong consistency" &lt;a href="https://aws.amazon.com/s3/consistency/"&gt;https://aws.amazon.com/s3/consistency/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1375796008945258501"&gt;MFA Delete&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We can now go a bit deeper and talk about some more advanced AWS S3 concepts and more specifically MFA Delete.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is a security setting which requires the user to enter their MFA code in order to delete an object&lt;/li&gt;
&lt;li&gt;It can only be enabled or disabled via the CLI by the AWS root account and requires bucket versioning to be enabled.&lt;/li&gt;
&lt;li&gt;MFA code will be required only for permanently deleting an object version or suspending versioning on the bucket.&lt;/li&gt;
&lt;/ul&gt;
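&lt;p&gt;A sketch of enabling it (the account ID, MFA device ARN and code are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Run with root account credentials; the --mfa argument is "device-arn code"
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
&lt;/code&gt;&lt;/pre&gt;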

&lt;h2&gt;
  
  
  &lt;a href="https://www.notion.so/AWS-learn-in-public-week-5-S3-bf3a5f7a58214419bc129cac30251ad1"&gt;S3 Access Logs&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS S3 access logs&lt;/p&gt;

&lt;p&gt;For audit purposes, we can log all access to S3 buckets&lt;/p&gt;

&lt;p&gt;Any request to S3 from any account, authorised or denied, will be logged into another S3 bucket&lt;/p&gt;

&lt;p&gt;We can then use Amazon Athena (or other data analysis tools) to analyse this data&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A huge warning here: do not use the same bucket as both the monitored bucket and the logging target&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is because we can end up in an infinite loop and wake up to a gigantic bill due to the exponential growth of the bucket size&lt;/p&gt;
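&lt;p&gt;A sketch of enabling access logs safely, with a separate target bucket (names are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# The target bucket must NOT be the bucket being monitored
aws s3api put-bucket-logging --bucket my-bucket --bucket-logging-status '{
  "LoggingEnabled": {
    "TargetBucket": "my-log-bucket",
    "TargetPrefix": "access-logs/"
  }
}'
&lt;/code&gt;&lt;/pre&gt;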

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I really enjoyed learning more about S3. I now feel comfortable doing some more advanced configuration with S3 and feel confident using it. Were there any facts out of the ones I posted today you would like to add to? Next week goes deeper into S3 and we will also talk about Glacier and Athena.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learninpublic</category>
      <category>s3</category>
    </item>
    <item>
      <title>AWS Learn In Public Week 4, Route53 and VPC</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 21 Mar 2021 19:34:05 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-4-route53-and-vpc-4k4o</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-4-route53-and-vpc-4k4o</guid>
      <description>&lt;p&gt;The AWS learning challenge continues. This week was about Route53 which is literally the first time I touch this service and then some deeper dive into VPCs. Route53 gives a lot of flexibility and customisation but it is quite expensive for buying new domains and also you have to pay to transfer your domain which was a bummer. Nevertheless I gave it a try and it now makes a lot of sense.&lt;/p&gt;

&lt;p&gt;Afterwards, the VPC section had some more advanced topics which, I have to admit, are difficult to make sense of in a tweet. One of the biggest advantages I have seen so far is that it is a lot easier to follow devops related conversations and understand why certain decisions were made. Enough with my blabbing, let's get to the content.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1371115230886047747"&gt;Route53&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s talk about AWS Route53&lt;/p&gt;

&lt;p&gt;It is a managed DNS (Domain Name System) service which contains a collection of rules and records for reaching a server through its domain name&lt;/p&gt;

&lt;p&gt;The most common records include&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A, which maps a hostname to an IPv4 address&lt;/li&gt;
&lt;li&gt;AAAA, which maps a hostname to an IPv6 address&lt;/li&gt;
&lt;li&gt;CNAME, which maps a hostname to another hostname&lt;/li&gt;
&lt;li&gt;Alias, which maps a hostname to an AWS resource like an ELB&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1371461445804728321"&gt;CNAME vs Alias&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's do a comparison of when to use CNAME vs Alias in AWS Route53&lt;/p&gt;

&lt;p&gt;AWS resources expose a hostname that looks like&lt;/p&gt;

&lt;p&gt;&lt;a href="https://t.co/zfBMViUuAG?amp=1"&gt;http://abc-123.eu-west-2.elb.amazonaws.com&lt;/a&gt;. You want that to point to &lt;a href="https://t.co/Lo12v7KFyd?amp=1"&gt;http://yoursite.example.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A CNAME points a hostname to another hostname and only works for non-root domains&lt;/p&gt;

&lt;p&gt;e.g. &lt;a href="https://t.co/Lo12v7KFyd?amp=1"&gt;http://yoursite.example.com&lt;/a&gt; -&amp;gt; &lt;a href="https://t.co/QzZtcZ3TD1?amp=1"&gt;http://another.site.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An Alias points a hostname to an AWS resource, is free and provides native health check&lt;/p&gt;

&lt;p&gt;e.g. &lt;a href="https://t.co/QzZtcZ3TD1?amp=1"&gt;http://another.site.com&lt;/a&gt; -&amp;gt; &lt;a href="https://t.co/zfBMViUuAG?amp=1"&gt;http://abc-123.eu-west-2.elb.amazonaws.com&lt;/a&gt;&lt;/p&gt;
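&lt;p&gt;An Alias record like the one above can be sketched via the CLI (the zone IDs and hostnames are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;aws route53 change-resource-record-sets --hosted-zone-id ZMYZONEID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "yoursite.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZELBZONEID",
          "DNSName": "abc-123.eu-west-2.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }'
&lt;/code&gt;&lt;/pre&gt;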

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1371549780887801859"&gt;Route 53 Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's do an overview of AWS Route53&lt;/p&gt;

&lt;p&gt;It can use&lt;/p&gt;

&lt;p&gt;🔓 Public domain names you own&lt;/p&gt;

&lt;p&gt;🔒 Private domains within your VPC&lt;/p&gt;

&lt;p&gt;Advanced features&lt;/p&gt;

&lt;p&gt;⚖ Client load balancing (through DNS)&lt;/p&gt;

&lt;p&gt;🩺 Health checks&lt;/p&gt;

&lt;p&gt;📃 Multiple routing policies&lt;/p&gt;

&lt;p&gt;Each site lives in a hosted zone which is $0.50 per month&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1371807978114838531"&gt;Route53 Routing policies&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS Route53 routing policies&lt;/p&gt;

&lt;h3&gt;
  
  
  Weighted routing policy
&lt;/h3&gt;

&lt;p&gt;Control the % of the requests that go to specific endpoints&lt;/p&gt;

&lt;p&gt;Helpful for AB testing&lt;/p&gt;

&lt;p&gt;Helpful for splitting traffic between regions&lt;/p&gt;

&lt;p&gt;Can be associated with health checks&lt;/p&gt;

&lt;h3&gt;
  
  
  Latency routing policy
&lt;/h3&gt;

&lt;p&gt;Redirects to the server that has the lowest latency for the user&lt;/p&gt;

&lt;p&gt;Useful when user latency is a priority&lt;/p&gt;

&lt;p&gt;Latency is evaluated in terms of the user's designated AWS region&lt;/p&gt;

&lt;p&gt;Someone from Greece can be redirected to the UK if the latency is lower&lt;/p&gt;

&lt;h3&gt;
  
  
  Failover Routing Policy
&lt;/h3&gt;

&lt;p&gt;Route53 does a health check to the primary instance&lt;/p&gt;

&lt;p&gt;If it is unhealthy then Route53 will failover to the secondary instance (Disaster Recovery)&lt;/p&gt;

&lt;h3&gt;
  
  
  Geo Location routing policy
&lt;/h3&gt;

&lt;p&gt;Routing based on location&lt;/p&gt;

&lt;p&gt;Different from latency based policy&lt;/p&gt;

&lt;p&gt;Traffic from the UK should go to a specific IP&lt;/p&gt;

&lt;p&gt;We should create a default policy in case there's no match&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi Value routing policy
&lt;/h3&gt;

&lt;p&gt;Use when routing traffic to multiple resources&lt;/p&gt;

&lt;p&gt;Useful when we want to associate Route53 health checks with records&lt;/p&gt;

&lt;p&gt;Up to 8 healthy records are returned for each Multi Value query&lt;/p&gt;

&lt;p&gt;Multi Value is not a substitute for having an ELB&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1371913926414245890"&gt;Route53 health checks&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Finally let's talk about health checks in AWS Route53&lt;/p&gt;

&lt;p&gt;We ping a server and expect it to respond&lt;/p&gt;

&lt;p&gt;A server becomes unhealthy when X health checks fail (default is 3)&lt;/p&gt;

&lt;p&gt;Then when Y health checks pass it is marked as healthy (default is 3)&lt;/p&gt;

&lt;p&gt;The default interval between health checks is 30s; it can be lowered to 10s but that's more expensive&lt;/p&gt;

&lt;p&gt;There are around 15 health checkers that will check the endpoint health&lt;/p&gt;

&lt;p&gt;That means that there's 1 request every 2 seconds for the default interval&lt;/p&gt;

&lt;p&gt;Health checks support HTTP, TCP and HTTPS, but don't use SSL certificate verification&lt;/p&gt;

&lt;p&gt;We can integrate them with CloudWatch&lt;/p&gt;

&lt;p&gt;As we saw in the routing policies, health checks can be linked to Route53 DNS queries&lt;/p&gt;
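&lt;p&gt;A sketch of creating such a health check with the defaults mentioned above (the domain and path are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# 30s interval, unhealthy after 3 failed checks
aws route53 create-health-check --caller-reference my-check-001 \
  --health-check-config Type=HTTPS,FullyQualifiedDomainName=example.com,Port=443,ResourcePath=/health,RequestInterval=30,FailureThreshold=3
&lt;/code&gt;&lt;/pre&gt;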

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1372171376719667200"&gt;VPC Summary&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's do a summary of VPC components in AWS&lt;/p&gt;

&lt;p&gt;☁️ VPC: A private network (Virtual Private Cloud) to deploy your resources&lt;/p&gt;

&lt;p&gt;🆎 AZ: Availability Zones, different data centres in an AWS region&lt;/p&gt;

&lt;p&gt;📡 Internet Gateway: At the VPC level to provide Internet Access&lt;/p&gt;

&lt;p&gt;🚦 NAT Gateway / Instances: Giving internet access to private subnets&lt;/p&gt;

&lt;p&gt;🚥 NACL: Stateless subnet rules for inbound and outbound traffic&lt;/p&gt;

&lt;p&gt;🛡 Security Groups: Stateful rules that operate at the EC2 instance level or ENI (Elastic Network Interfaces)&lt;/p&gt;

&lt;p&gt;🤝 VPC Peering: Connect 2 VPCs with non overlapping IP ranges&lt;/p&gt;

&lt;p&gt;📃 VPC Flow logs: Logs for network traffic&lt;/p&gt;

&lt;p&gt;🔐 Site to Site VPN: VPN over public internet between on-premises data centre and AWS&lt;/p&gt;

&lt;p&gt;🔌 Direct Connect: Direct private connection to AWS&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1372273298210316290"&gt;3 tier solution architecture&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about a typical 3 tier solution architecture in AWS&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Public subnet
&lt;/h3&gt;

&lt;p&gt;Route53 talking to our ELB&lt;br&gt;
Our public ELB&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Private subnet
&lt;/h3&gt;

&lt;p&gt;Our EC2 Instances&lt;br&gt;
ASG amongst AZs&lt;br&gt;
ELB connects to EC2 instances using route tables&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Data subnet
&lt;/h3&gt;

&lt;p&gt;RDS &amp;amp; ElastiCache&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1372555909772144647"&gt;Internet Gateways and NAT Gateways / instances&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS IGWs (Internet Gateway) and NAT gateways / instances&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IGWs help instances in our VPC's public subnets connect to the internet&lt;/li&gt;
&lt;li&gt;NAT Gateways and NAT instances allow instances in our private subnets to access the internet while remaining private&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In simple words&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If we want to give internet access to instances in a public subnet, we need an IGW&lt;/li&gt;
&lt;li&gt;If the instances are in a private subnet, we need a NAT gateway (AWS managed) or a NAT instance (self managed)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1372622600149426176"&gt;Security Groups and NACL&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS Network security with NACLs (Network ACL) and SGs (Security Group)&lt;/p&gt;

&lt;p&gt;🔥 Network ACL is a firewall which controls traffic to and from subnets&lt;/p&gt;

&lt;p&gt;🚦 Supports Allow and Deny rules&lt;/p&gt;

&lt;p&gt;📗 Attached at the subnet level, and rules can only reference IP addresses&lt;/p&gt;

&lt;p&gt;Security groups are firewalls which control traffic to and from an ENI (Elastic Network Interface) or EC2 instance&lt;/p&gt;

&lt;p&gt;✅ Can only have ALLOW rules&lt;/p&gt;

&lt;p&gt;📒 Can include IP addresses and other security groups&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1372898163858632713"&gt;Security Groups VS NACL&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's do a quick comparison of AWS SG (Security Groups) vs NACL (Network Access Control Lists)&lt;/p&gt;

&lt;h3&gt;
  
  
  For SGs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Operate at the instance level&lt;/li&gt;
&lt;li&gt;Support allow rules only&lt;/li&gt;
&lt;li&gt;Stateful: return traffic is automatically allowed, regardless of any rules&lt;/li&gt;
&lt;li&gt;All rules are evaluated before deciding whether to allow traffic&lt;/li&gt;
&lt;li&gt;Can be applied when launching the instance or can be edited later on&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For NACLs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Operate at the subnet level&lt;/li&gt;
&lt;li&gt;Support allow and deny rules&lt;/li&gt;
&lt;li&gt;Stateless: return traffic must be explicitly allowed by the rules&lt;/li&gt;
&lt;li&gt;Rules are processed in order when deciding whether to allow traffic&lt;/li&gt;
&lt;li&gt;Automatically applies to all instances in the subnets and doesn't rely on users creating SGs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1372994550252195845"&gt;VPC Flow logs&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS VPC Flow logs&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Capture information about IP traffic into your interfaces (VPC, Subnet Flow Logs, ENI Flow Logs)&lt;/li&gt;
&lt;li&gt;Helps to monitor and troubleshoot connectivity issues like Subnets to the internet, Subnets to other Subnets and Internet to Subnets&lt;/li&gt;
&lt;li&gt;Captures network information from AWS managed interfaces too. (ELB, ElastiCache, RDS etc)&lt;/li&gt;
&lt;li&gt;VPC Flow logs data can go to S3 or CloudWatch logs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1372998323389407234"&gt;VPC Peering&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS VPC Peering&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is a connection that allows 2 VPCs to be part of the same network&lt;/li&gt;
&lt;li&gt;They are privately connected and behave as if they are on the same network.&lt;/li&gt;
&lt;li&gt;They must not have overlapping IP address ranges (CIDR)&lt;/li&gt;
&lt;li&gt;VPC peering connections are not transitive

&lt;ul&gt;
&lt;li&gt;VPC1 is connected to VPC2&lt;/li&gt;
&lt;li&gt;VPC2 is connected to VPC3&lt;/li&gt;
&lt;li&gt;VPC1 is NOT connected to VPC3&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1373258287773216770"&gt;VPC Endpoints&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS VPC Endpoints&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They allow you to connect to AWS services using a private network instead of a public one&lt;/li&gt;
&lt;li&gt;That results in enhanced security and lower latency to access AWS services&lt;/li&gt;
&lt;li&gt;VPC Gateway Endpoints are popular for S3 and DynamoDB&lt;/li&gt;
&lt;li&gt;For the rest of the AWS services we have VPC Interface Endpoints&lt;/li&gt;
&lt;li&gt;VPC Endpoints are only used within your VPC&lt;/li&gt;
&lt;/ul&gt;
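&lt;p&gt;A sketch of creating a Gateway Endpoint for S3 (the VPC, route table and region are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.eu-west-2.s3 \
  --route-table-ids rtb-0abc1234
&lt;/code&gt;&lt;/pre&gt;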

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1373338312061030400"&gt;Site to Site VPN and Direct Connect&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To close the VPC section, let's talk about AWS Site to Site VPN and Direct Connect&lt;/p&gt;

&lt;p&gt;Site to Site VPN is for connecting an on-premises VPN to AWS with an automatically encrypted connection over the public internet&lt;/p&gt;

&lt;p&gt;Direct Connect (DX) is a physical connection between on premises and AWS. That connection is private, secure and fast. However, it takes at least a month to establish.&lt;/p&gt;

&lt;p&gt;Note. Neither of them can access VPC endpoints&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Route53 was a lot more interesting than I originally anticipated. It had some great examples of ways we can split traffic, which I thought would normally be handled by the load balancer. Then diving deeper into VPCs was quite interesting because they are great for sharpening my networking skills, which are not my strongest suit.&lt;/p&gt;

&lt;p&gt;I would love to hear from you about the format and any suggestions on how to make this challenge more beneficial for the people out there that follow my updates. Next week is all about S3!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learninpublic</category>
      <category>route53</category>
      <category>vpc</category>
    </item>
    <item>
      <title>AWS Learn In Public Week 3, EBS, EFS, RDS and ElastiCache</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 14 Mar 2021 21:34:48 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-3-ebs-efs-rds-and-elasticache-75b</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-3-ebs-efs-rds-and-elasticache-75b</guid>
      <description>&lt;p&gt;The learning AWS journey continues. This week was all about storage. Because there were too many things to share, my timeline was quite busy with multiple tweets per day.&lt;/p&gt;

&lt;p&gt;I started with EBS (Elastic Block Store) and EFS (Elastic File System), which provide disk storage for EC2. That was interesting because I learned about a few hardware terms I was not aware of.&lt;/p&gt;

&lt;p&gt;The week continued with RDS, which I initially thought of skipping as "it was the AWS service I'm the most familiar with", but thankfully I ended up doing it and learned a lot about how to scale databases in AWS. The part about Aurora DB was especially exciting.&lt;/p&gt;

&lt;p&gt;The week ended with ElastiCache which also revealed some great techniques for managing cache. Let's go straight to the Tweets.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1368547494318067716"&gt;EBS&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is an AWS EBS (Elastic Block Store)?&lt;/p&gt;

&lt;p&gt;A volume in which you store your EC2 data so that it doesn't get lost when the instance is terminated&lt;/p&gt;

&lt;p&gt;An EBS volume is a network drive which works like a USB stick in the cloud.&lt;/p&gt;

&lt;p&gt;⏳ It has a bit of latency&lt;/p&gt;

&lt;p&gt;🔌 It can quickly be attached to other instances&lt;/p&gt;

&lt;p&gt;🔒 It is locked to an AZ. To move it across you need to snapshot it&lt;/p&gt;

&lt;p&gt;💰 You are billed for the provisioned capacity, not the one you use&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1368596065763528707"&gt;EBS Types&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are 4 types of AWS EBS (Elastic Block Store)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GP2: A general purpose SSD that balances performance and price&lt;/li&gt;
&lt;li&gt;IO1: A high performance SSD for mission critical low latency / high throughput work (good for large DBs). Note: only GP2 and IO1 can be boot volumes.&lt;/li&gt;
&lt;li&gt;ST1: Low cost HDD for frequently accessed and throughput intensive workloads&lt;/li&gt;
&lt;li&gt;SC1: Lowest cost HDD for less frequently accessed workloads.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All 4 are characterised by size, throughput and IOPS (I/O Operations Per Second)&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1368658226510655488"&gt;Instance Store&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Finally for AWS EBS we have the instance store.&lt;/p&gt;

&lt;p&gt;It is a physical HDD attached to the instance.&lt;/p&gt;

&lt;p&gt;➕ High I/O performance, good for cache / temp content, survives reboots&lt;/p&gt;

&lt;p&gt;➖ Lost on stop / termination, can't be resized and requires manual backups&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1368909634174513154"&gt;EFS&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is an AWS EFS (Elastic File System)?&lt;/p&gt;

&lt;p&gt;A managed NFS (Network File System) that can be mounted on multiple EC2 instances&lt;/p&gt;

&lt;p&gt;It is multi AZ, highly available and scalable&lt;br&gt;
It is good for web serving, data sharing and WordPress.&lt;/p&gt;

&lt;p&gt;Unlike EBS, it needs no capacity planning and uses a pay-per-use pricing model which scales automatically.&lt;/p&gt;

&lt;p&gt;📈 Can have thousands of concurrent NFS clients and can grow to Petabytes scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1369008788016001028"&gt;EBS vs EFS&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's compare EBS and EFS&lt;/p&gt;

&lt;p&gt;For EBS&lt;/p&gt;

&lt;p&gt;Attached to 1 instance at a time and locked to 1 AZ.&lt;/p&gt;

&lt;p&gt;To migrate we need to take snapshots and restore them to another AZ.&lt;/p&gt;

&lt;p&gt;Backups use a lot of IO and shouldn't be run while the app is handling a lot of traffic.&lt;/p&gt;

&lt;p&gt;Root volumes are lost upon termination&lt;/p&gt;

&lt;p&gt;For EFS&lt;/p&gt;

&lt;p&gt;We can mount it on hundreds of instances across multiple AZs&lt;/p&gt;

&lt;p&gt;We can use it to share website files, but it is Linux-only.&lt;/p&gt;

&lt;p&gt;It is roughly 3 times more expensive than EBS. For cost savings we can use EFS-IA (you pay to retrieve files, but storage costs less)&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1369274033720619009"&gt;RDS&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is AWS RDS (Relational Database Service)?&lt;/p&gt;

&lt;p&gt;An AWS managed SQL-only database which can be one of: Postgres, MySQL, MariaDB, Oracle, MSSQL and Aurora&lt;/p&gt;

&lt;p&gt;As it is a managed service, we cannot SSH into our machine but only connect to the remote DB&lt;/p&gt;

&lt;p&gt;Why RDS and not deploying a DB on EC2?&lt;/p&gt;

&lt;p&gt;Automated provisioning, OS patching, and maintenance windows for upgrades&lt;/p&gt;

&lt;p&gt;Continuous backups and can restore to specific timestamps&lt;/p&gt;

&lt;p&gt;Option for multi AZ setup for Disaster recovery&lt;/p&gt;

&lt;p&gt;Horizontal / vertical scaling&lt;/p&gt;

&lt;p&gt;EBS storage (GP2 or IO1)&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1369368406940614668"&gt;RDS Backups&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about backups in AWS RDS&lt;/p&gt;

&lt;p&gt;They are automatically enabled&lt;/p&gt;

&lt;p&gt;Daily during the maintenance window&lt;/p&gt;

&lt;p&gt;Transaction logs every 5 minutes with ability to &lt;br&gt;
restore to any point in time up to 5 minutes ago&lt;/p&gt;

&lt;p&gt;7 days retention of backups (up to 35)&lt;/p&gt;

&lt;p&gt;Now let's talk about snapshots.&lt;/p&gt;

&lt;p&gt;The difference between a backup and a snapshot is that a snapshot is manually triggered by the user unlike the backups which are automatic.&lt;/p&gt;

&lt;p&gt;Because they are manual, the retention period is as long as the user wants.&lt;/p&gt;

&lt;p&gt;For IAM auth, we don't need a password; we just authenticate with IAM authentication tokens.&lt;/p&gt;

&lt;p&gt;That IAM authentication token is short lived and has a lifetime of 15 minutes&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1369633904995438602"&gt;RDS Read Replicas&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What are AWS RDS Read Replicas?&lt;/p&gt;

&lt;p&gt;They are additional servers which have copies of our DB and are used for read scalability.&lt;/p&gt;

&lt;p&gt;RDS supports up to 5 read replicas that can be within AZ, across AZ and even cross region.&lt;/p&gt;

&lt;p&gt;Replication is ASYNC so reads are eventually consistent.&lt;/p&gt;

&lt;p&gt;Replicas can be promoted to their own DB.&lt;/p&gt;

&lt;p&gt;Each replica has its own connection string which the app needs to use to connect to it.&lt;/p&gt;

&lt;p&gt;A use case of using read replicas.&lt;/p&gt;

&lt;p&gt;Your prod DB is taking on normal load and you want to run a reporting app to run some analytics.&lt;/p&gt;

&lt;p&gt;You can create a read replica to run the new workload there and the prod DB is not affected.&lt;/p&gt;
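&lt;p&gt;The use case above can be sketched as a tiny read/write router: writes go to the primary's connection string, reads rotate across the replicas. All the endpoint names below are hypothetical placeholders, not real AWS hostnames:&lt;/p&gt;

```python
# Sketch: route writes to the primary, spread reads across replicas.
# Endpoint names are made-up placeholders for illustration only.
from itertools import cycle

PRIMARY = "prod-db.example.rds.amazonaws.com"
REPLICAS = cycle([
    "replica-1.example.rds.amazonaws.com",
    "replica-2.example.rds.amazonaws.com",
])

def endpoint_for(query):
    """Pick the connection endpoint based on the kind of query."""
    verb = query.strip().split()[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE"):
        return PRIMARY          # writes must hit the primary
    return next(REPLICAS)       # reads rotate across the replicas

print(endpoint_for("SELECT count(id) FROM orders"))  # a replica
print(endpoint_for("INSERT INTO orders VALUES (1)")) # the primary
```

Because replication is ASYNC, a report served from a replica may be slightly behind the primary, which is fine for analytics workloads like this one.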

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1369740105817022464"&gt;RDS Disaster Recovery&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Now let's talk about RDS multi AZ DR (Disaster Recovery)&lt;/p&gt;

&lt;p&gt;It uses SYNC replication: when a change happens on the main instance, it also needs to happen on the secondary replica.&lt;/p&gt;

&lt;p&gt;There's 1 DNS name, and it supports automatic app failover to the secondary replica to increase availability.&lt;/p&gt;

&lt;p&gt;The failover might happen in case of loss of AZ, loss of network, or instance / storage failure.&lt;/p&gt;

&lt;p&gt;No manual intervention is needed in apps. Note that multi AZ is for availability, not for scaling.&lt;/p&gt;

&lt;p&gt;Can set read replicas as multi AZ to ensure even higher availability.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1369997299259113476"&gt;RDS Encryption and Security&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS RDS encryption and security.&lt;/p&gt;

&lt;p&gt;There are 2 types of encryption.&lt;/p&gt;

&lt;p&gt;🛏 At rest encryption (data not in movement)&lt;/p&gt;

&lt;p&gt;🛩 In flight encryption &lt;/p&gt;

&lt;p&gt;For at-rest encryption, we can encrypt the master &amp;amp; read replicas; it is defined at launch time. Master not encrypted = replicas not encrypted&lt;/p&gt;

&lt;p&gt;For in-flight encryption, we use SSL certificates to encrypt data to RDS in flight. SSL options will trust certificates when connecting to DB&lt;/p&gt;

&lt;p&gt;To encrypt an un-encrypted RDS DB&lt;/p&gt;

&lt;p&gt;✅ We create a snapshot&lt;/p&gt;

&lt;p&gt;✅ We copy it and enable encryption&lt;/p&gt;

&lt;p&gt;✅ We restore the DB from the encrypted snapshot&lt;/p&gt;

&lt;p&gt;✅ Then migrate to new DB and delete old one&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1370092676779941889"&gt;More about RDS security&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;More about AWS RDS security 🔐 &lt;/p&gt;

&lt;p&gt;RDS clusters are deployed within private subnets and not public ones.&lt;/p&gt;

&lt;p&gt;We can add Security Group rules, which use the same inbound / outbound logic as EC2.&lt;/p&gt;

&lt;p&gt;We can also setup IAM policies about who can manage RDS.&lt;/p&gt;

&lt;p&gt;To login to the Database we use the traditional username and password way.&lt;/p&gt;

&lt;p&gt;For MySQL and Postgres in RDS we can also use IAM based authentication&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1370357675616202761"&gt;Aurora DB&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Today we're going to talk about AWS Aurora DB.&lt;/p&gt;

&lt;p&gt;🥷 Not open sourced&lt;/p&gt;

&lt;p&gt;✅ Supports MySQL and Postgres&lt;/p&gt;

&lt;p&gt;💪 Cloud optimised with 5x performance over classic RDS&lt;/p&gt;

&lt;p&gt;🏔 Automatically grows up to 64TB (starts with 10GB)&lt;/p&gt;

&lt;p&gt;📈 Can have up to 15 replicas with replication lag below 10ms&lt;/p&gt;

&lt;p&gt;🌋 Instantaneous failover being high availability native&lt;/p&gt;

&lt;p&gt;💸 20% more expensive than classic RDS&lt;/p&gt;

&lt;p&gt;Aurora provides a Serverless option in certain regions which is great for automated DB instantiation and auto-scaling&lt;/p&gt;

&lt;p&gt;It's great for infrequent or unpredictable workloads&lt;/p&gt;

&lt;p&gt;It requires no capacity planning and you pay per second, which makes it super cost effective&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1370426880818434053"&gt;Aurora Global&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS Aurora Global&lt;/p&gt;

&lt;p&gt;Cross Region read replicas that are easy to setup and useful for disaster recovery&lt;/p&gt;

&lt;p&gt;The Aurora Global database gives 1 primary region for reads and writes and can have up to 5 secondary read regions, each with up to 16 replicas (80 in total)&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1370767624322248707"&gt;ElastiCache&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Today let's talk about AWS ElastiCache&lt;/p&gt;

&lt;p&gt;It is a managed Redis or Memcached service&lt;/p&gt;

&lt;p&gt;Caching is a great way to reduce the load on our DB and make our app stateless&lt;/p&gt;

&lt;p&gt;ElastiCache has write scaling using sharding and read scaling using Read Replicas&lt;/p&gt;

&lt;p&gt;ElastiCache Redis&lt;/p&gt;

&lt;p&gt;It is multi AZ with auto failover&lt;/p&gt;

&lt;p&gt;You can enhance the reads with read replicas for high availability&lt;/p&gt;

&lt;p&gt;Even if your cache restarts, you still have access to your data using AOF (Append Only File) persistence&lt;/p&gt;

&lt;p&gt;It also offers backup and restore features&lt;/p&gt;

&lt;p&gt;ElastiCache Memcached&lt;/p&gt;

&lt;p&gt;Uses multiple nodes for partitioning data (sharding)&lt;/p&gt;

&lt;p&gt;Unlike Redis, if the cache restarts, all of the data is lost&lt;/p&gt;

&lt;p&gt;Also there is no option to backup and restore your data&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1370811922631643137"&gt;ElastiCache implementation methods&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Lazy loading 🥱 (or cache-aside or lazy population)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When reading from the cache&lt;/p&gt;

&lt;p&gt;If data found in the cache, return them (cache hit)&lt;/p&gt;

&lt;p&gt;If they aren't there (cache miss), read from the DB, write them into the cache and then return them&lt;/p&gt;

&lt;p&gt;➕&lt;br&gt;
we only cache data that is used&lt;br&gt;
if there's a failure, it's not fatal as we still have the DB&lt;/p&gt;

&lt;p&gt;➖&lt;br&gt;
In case of a cache miss, we make 3 round trips to the DB and cache&lt;br&gt;
We might cache data that will not be used again and need to also implement an invalidation strategy&lt;/p&gt;
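&lt;p&gt;The lazy loading flow above can be sketched with a plain dict standing in for ElastiCache and another for the database:&lt;/p&gt;

```python
# Sketch of the lazy loading (cache-aside) pattern. A dict stands in
# for ElastiCache and another dict for the database.
db = {"user:1": "Harris"}
cache = {}

def get(key):
    if key in cache:          # cache hit: return straight away
        return cache[key]
    value = db[key]           # cache miss: round trip to the DB...
    cache[key] = value        # ...then populate the cache...
    return value              # ...and return the value

get("user:1")   # miss: reads the DB and fills the cache
get("user:1")   # hit: served from the cache this time
```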

&lt;ol start="2"&gt;
&lt;li&gt;Write through ✍🏼&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's when we write to the DB and also to the cache&lt;/p&gt;

&lt;p&gt;➕&lt;br&gt;
Data in cache is always up to date&lt;/p&gt;

&lt;p&gt;Reads are always fast&lt;/p&gt;

&lt;p&gt;No one expects writes to be ultra fast&lt;/p&gt;

&lt;p&gt;Makes sense from a UX point of view&lt;/p&gt;

&lt;p&gt;➖&lt;br&gt;
If a page needs to be read before any write takes place, there will be no data in the cache&lt;/p&gt;

&lt;p&gt;In many cases we rely on lazy loading as well&lt;/p&gt;

&lt;p&gt;We may add too much to the cache, and it is very likely that a lot of that data will never be read&lt;/p&gt;
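&lt;p&gt;Write through is even simpler to sketch: every write lands in the DB and the cache in the same operation, so cached data is always fresh:&lt;/p&gt;

```python
# Sketch of the write-through pattern: every write goes to the DB
# and to the cache at the same time, so the cache is always up to date.
db = {}
cache = {}

def put(key, value):
    db[key] = value       # write to the database
    cache[key] = value    # and to the cache in the same operation

def get(key):
    return cache.get(key, db.get(key))  # reads are usually cache hits

put("user:2", "Geo")
get("user:2")   # served from the cache without touching the DB
```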

&lt;ol start="3"&gt;
&lt;li&gt;Cache evictions and TTL (Time to Live) ⌛&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is when we explicitly define that a cached item will be available for x seconds, after which it is automatically deleted&lt;/p&gt;

&lt;p&gt;The items that were Least Recently Used (LRU) can be evicted&lt;/p&gt;

&lt;p&gt;TTLs are good for leaderboards, comments on social media and activity streams&lt;/p&gt;

&lt;p&gt;They can last from seconds to hours or even days&lt;/p&gt;

&lt;p&gt;If there are too many evictions due to memory, then we need to scale either vertically or horizontally&lt;/p&gt;
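&lt;p&gt;TTL eviction is easy to simulate with an injectable clock, so no real waiting is needed. A minimal sketch (the min/max equality trick is used in place of an ordering comparison):&lt;/p&gt;

```python
# Sketch of TTL-based eviction with an injectable clock.
cache = {}

def put(key, value, ttl, now):
    cache[key] = (value, now + ttl)   # remember when the item expires

def get(key, now):
    if key not in cache:
        return None
    value, expires_at = cache[key]
    # min(now, expires_at) == expires_at means the expiry time has
    # been reached, so the item must be evicted.
    if min(now, expires_at) == expires_at:
        del cache[key]
        return None
    return value

put("score", 42, ttl=30, now=0)
get("score", now=10)   # still fresh, returns 42
get("score", now=31)   # TTL elapsed, evicted, returns None
```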

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Uuu boy, that was a lot this week. I really enjoyed it though, especially the RDS and ElastiCache part. It is really good to see that AWS training not only educates you on their services but on architectural / scaling concepts as well. These concepts are pretty much applicable to any cloud service.&lt;/p&gt;

&lt;p&gt;Next week we're looking at Route53, some more stuff about VPCs and S3. &lt;/p&gt;

</description>
      <category>ebs</category>
      <category>efs</category>
      <category>rds</category>
      <category>elasticache</category>
    </item>
    <item>
      <title>AWS Learn In Public Week 2, Load Balancers and Auto Scaling</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 07 Mar 2021 16:41:35 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-2-load-balancers-and-auto-scaling-1kco</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-2-load-balancers-and-auto-scaling-1kco</guid>
      <description>&lt;p&gt;Week 2 of my AWS challenge and things are going great! Learning how ELBs (Elastic Load Balancer) and ASGs (Auto Scaling Group) work connects many dots on the wider topic of understanding the cloud infrastructure.&lt;/p&gt;

&lt;p&gt;The deeper I dive into all these concepts, the more interesting things get, and many aspects of full stack development start making a lot more sense. Here's a breakdown of what happened this week.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1366012293272670213"&gt;28/02&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is an AWS ELB (Elastic Load Balancer)?&lt;/p&gt;

&lt;p&gt;A server that evenly distributes internet traffic to multiple EC2 instances.&lt;/p&gt;

&lt;p&gt;Load balancing is a good way to ensure high availability for your system in case one of the instances goes down.&lt;/p&gt;
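&lt;p&gt;The even distribution idea boils down to something like round-robin over healthy instances; a toy sketch (the instance names are made up):&lt;/p&gt;

```python
# Sketch: the even distribution an ELB performs, reduced to a
# simple round-robin over a fixed set of EC2 instances.
from itertools import cycle

instances = ["ec2-a", "ec2-b", "ec2-c"]
rotation = cycle(instances)

def route(request):
    """Hand each incoming request to the next instance in turn."""
    return next(rotation)

[route(r) for r in range(6)]  # ec2-a, ec2-b, ec2-c, ec2-a, ec2-b, ec2-c
```

Real ELBs also track instance health and only route to instances that pass their health checks, which this toy version ignores.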

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1366373185386553346"&gt;01/03&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;AWS provides 3 kinds of load balancers&lt;/p&gt;

&lt;p&gt;✅ CLB (Classic Load Balancer)&lt;/p&gt;

&lt;p&gt;✅ ALB (Application Load Balancer)&lt;/p&gt;

&lt;p&gt;✅ NLB (Network Load Balancer)&lt;/p&gt;

&lt;p&gt;CLB is old generation and supports HTTP, HTTPS and TCP&lt;/p&gt;

&lt;p&gt;ALB is the most common amongst modern applications and supports HTTP, HTTPS and Websockets&lt;/p&gt;

&lt;p&gt;NLB is mainly used for high performance jobs and supports TCP, TLS and UDP&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1366737569006817286"&gt;02/03&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is Load Balancer Stickiness?&lt;/p&gt;

&lt;p&gt;A cookie that instructs our load balancer to route each individual user to the same instance.&lt;/p&gt;

&lt;p&gt;➕ works for CLB and ALB&lt;/p&gt;

&lt;p&gt;➕ can control the expiration date&lt;/p&gt;

&lt;p&gt;➖ can bring imbalance to the load balancer&lt;/p&gt;
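&lt;p&gt;A sketch of the idea: the first request pins a user to one instance, and every later request from that user lands on the same one (a dict stands in for the stickiness cookie):&lt;/p&gt;

```python
# Sketch of load balancer stickiness: the first response "sets a
# cookie" pinning the user to one instance for later requests.
import random

instances = ["ec2-a", "ec2-b", "ec2-c"]
sessions = {}   # stands in for the stickiness cookie store

def route(user):
    if user not in sessions:
        sessions[user] = random.choice(instances)  # first visit: pick one
    return sessions[user]   # every later visit: same instance

first = route("alice")
route("alice")   # alice always lands on the same instance as before
```

This also shows where the imbalance comes from: if many heavy users happen to get pinned to the same instance, the load is no longer even.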

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1367097946307633152"&gt;03/03&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is SSL (Secure Socket Layer) and TLS (Transport Layer Security) in AWS?&lt;/p&gt;

&lt;p&gt;Both of them are certificates which allow traffic between your clients and your load balancer to be encrypted in transit (also referred to as in flight encryption)&lt;/p&gt;

&lt;p&gt;Good to knows.&lt;/p&gt;

&lt;p&gt;TLS is the newer version of SSL.&lt;/p&gt;

&lt;p&gt;People quite often use TLS but still refer to it as SSL.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1367460083089641474"&gt;04/03&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is Connection Draining in AWS Load Balancers?&lt;/p&gt;

&lt;p&gt;The time to complete "in-flight requests" while the instance is de-registering or unhealthy.&lt;/p&gt;

&lt;p&gt;Once the instance is de-registering, it stops receiving requests.&lt;/p&gt;

&lt;p&gt;This process can take 1-3600 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1367844869331247106"&gt;05/03&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is an AWS ASG (Auto Scaling Group)?&lt;/p&gt;

&lt;p&gt;A setting for managing changes to traffic with a goal to:&lt;/p&gt;

&lt;p&gt;📈 scale out (add EC2 instances) to match increased load&lt;/p&gt;

&lt;p&gt;📉 scale in (remove EC2 instances) to match decreased load&lt;/p&gt;

&lt;p&gt;What does an ASG look like in AWS? We have&lt;/p&gt;

&lt;p&gt;✅ Minimum size: the lowest number of instances required for our system to be functional.&lt;/p&gt;

&lt;p&gt;✅ Actual size / desired capacity: the number of instances we normally run&lt;/p&gt;

&lt;p&gt;✅ Maximum size: the number of instances we can allocate to handle the extra load&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1368184857793163266"&gt;06/03&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's talk about AWS ASG policies. We have 3 kinds 💈&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Target tracking scaling&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It is quite simple and the easiest one to set up.&lt;/p&gt;

&lt;p&gt;You set a rule such as keeping the CPU usage at around 40%&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Simple or step scaling&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;➕ When a CloudWatch alarm is triggered (e.g. CPU &amp;gt; 70%) then add 2 units&lt;/p&gt;

&lt;p&gt;➖ When a CloudWatch alarm is triggered (e.g. CPU &amp;lt; 30%) then remove 1 unit&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Scheduled Actions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is based on known usage patterns like a busy weekend or Black Friday&lt;/p&gt;

&lt;p&gt;Finally let's talk about AWS ASG scaling cooldowns&lt;/p&gt;

&lt;p&gt;A cooldown is a period that ensures your ASG doesn't add or remove instances before the previous scaling activity takes effect.&lt;/p&gt;

&lt;p&gt;The default cooldown period is 300 seconds. It can be reduced to 180.&lt;/p&gt;

&lt;p&gt;Example: with a policy that terminates instances based on a criterion or metric, EC2 Auto Scaling needs less time to determine whether to terminate additional instances&lt;/p&gt;

&lt;p&gt;If your application is scaling up or down multiple times each hour, then that's a good indicator that you need to modify your ASG cooldown timers.&lt;/p&gt;
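&lt;p&gt;The step scaling rules plus the cooldown can be simulated in a few lines. This is only a sketch of the logic, not the real AWS behaviour; the thresholds and sizes are made up, and the min/max equality trick stands in for ordering comparisons:&lt;/p&gt;

```python
# Sketch of simple/step scaling with a cooldown. The trick
# max(cpu, 70) == cpu means "cpu is at least 70".
desired, MIN_SIZE, MAX_SIZE = 2, 1, 6
last_scaled, COOLDOWN = -300, 300

def on_alarm(cpu, now):
    """Add 2 instances at 70% CPU or above, remove 1 at 30% or below."""
    global desired, last_scaled
    if min(now - last_scaled, COOLDOWN) != COOLDOWN:
        return desired                          # still cooling down
    if max(cpu, 70) == cpu:
        desired = min(desired + 2, MAX_SIZE)    # scale out, capped
        last_scaled = now
    elif min(cpu, 30) == cpu:
        desired = max(desired - 1, MIN_SIZE)    # scale in, floored
        last_scaled = now
    return desired

on_alarm(cpu=85, now=0)     # scale out: 2 becomes 4
on_alarm(cpu=90, now=100)   # within cooldown: stays 4
on_alarm(cpu=20, now=400)   # cooldown over, scale in: 3
```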

&lt;h2&gt;
  
  
  Week 2 Summary
&lt;/h2&gt;

&lt;p&gt;Learning AWS is quite interesting because it feels like even the small bits help a lot. I find it quite easy to wake up in the morning because early AM before work now means AWS time.&lt;/p&gt;

&lt;p&gt;Understanding load balancing answers many questions about how big sites handle their traffic. One thing I have noticed is that grasping how things work in the cloud has changed a lot about the way I think about development. More on that in the blog posts to come.&lt;/p&gt;

&lt;p&gt;If you want to follow my journey, feel free to &lt;a href="https://twitter.com/harrisgeo88"&gt;follow me on Twitter&lt;/a&gt; and please reach out :). Next week I will be looking at AWS EBS (Elastic Block Store), EFS (Elastic File System) and RDS (Relational Database Service).&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learninpublic</category>
      <category>100daysofaws</category>
    </item>
    <item>
      <title>Setup Your AWS Free Tier Alerts and avoid any surprise charges</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Thu, 04 Mar 2021 07:47:03 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/setup-your-aws-free-tier-alerts-and-avoid-any-surprise-charges-4e2a</link>
      <guid>https://dev.to/harrisgeo88/setup-your-aws-free-tier-alerts-and-avoid-any-surprise-charges-4e2a</guid>
      <description>&lt;p&gt;This is the second AWS blog as part of my &lt;em&gt;100DaysOfAWS&lt;/em&gt; challenge. Let's see how far I can reach.&lt;/p&gt;

&lt;p&gt;If you're like me and you want to explore the world of AWS but are worried about ending up with a gigantic bill, then this blog post is for you.&lt;/p&gt;

&lt;p&gt;Let's login with our root account, click on your username and then &lt;code&gt;my billing dashboard&lt;/code&gt;. Warning: this only works with the root account. If you have another account that is given the same permissions as the root one, it will not work.&lt;/p&gt;

&lt;p&gt;Under preferences, select &lt;code&gt;billing preferences&lt;/code&gt;. If you live in a country other than the US, you might want to be charged in a currency other than USD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Billing preferences
&lt;/h2&gt;

&lt;p&gt;On that page you will see a few options for how to receive alerts. For now let's check &lt;code&gt;receive free tier usage alerts&lt;/code&gt; as well as &lt;code&gt;receive billing alerts&lt;/code&gt;. Under the latter option, there is a link to &lt;code&gt;manage billing alerts&lt;/code&gt; which will lead us to CloudWatch for a bit of further setup, so let's click on that. This is where we can also change our billing currency. You can also search for the &lt;code&gt;CloudWatch&lt;/code&gt; service if you cannot see this option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FG1ceA3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/630705/109870141-a2353500-7c61-11eb-9d3d-b024de8a8263.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FG1ceA3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/630705/109870141-a2353500-7c61-11eb-9d3d-b024de8a8263.png" alt="https://user-images.githubusercontent.com/630705/109870141-a2353500-7c61-11eb-9d3d-b024de8a8263.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the main CloudWatch page, on the left we can see a section for alarms. Let's click on the &lt;code&gt;billing&lt;/code&gt; option of it. Here we can create billing alarms to ensure that we're not going above our free tier limit. This tier gives us &lt;strong&gt;10 free alarms&lt;/strong&gt; and &lt;strong&gt;1000 free email notifications&lt;/strong&gt; each month. Now let's click on the &lt;code&gt;create alarm&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;Here there are some steps we need to go through. In the first one for metric and conditions, we can see that there is an option for currency. Let's change that to &lt;code&gt;GBP&lt;/code&gt; because we live in the UK. If we scroll down there is an option to define the &lt;code&gt;threshold value&lt;/code&gt;. Let's put &lt;em&gt;5 GBP&lt;/em&gt; there. That means that if our bill goes above £5 per month, we will get notified. Modify that amount to something you are comfortable with. Let's click next.&lt;/p&gt;

&lt;p&gt;Now let's create an SNS topic so that we are subscribed to that alert. In the SNS topic section, click on &lt;code&gt;create new topic&lt;/code&gt;. Now let's enter a name for that topic. That name has to be unique so the more descriptive the better. I named mine &lt;em&gt;Free_tier_exceeded&lt;/em&gt;. Then we can add the email we want this subscription to notify. Click on create topic.&lt;/p&gt;

&lt;p&gt;There is a link to view that notification in the SNS (Simple Notification Service) console, so let's click there. It opens the SNS console in a new tab. Under subscriptions we see the one we just added, which has the status &lt;em&gt;pending confirmation&lt;/em&gt;. If we open the email account we entered earlier, there is a new email to confirm the subscription. Before we confirm anything, let's first finish creating the alarm.&lt;/p&gt;

&lt;p&gt;Back in the CloudWatch tab, let's ignore the rest of the options for now and go straight to the next step. Now let's give our alarm a name and description. I put &lt;em&gt;Free_tier_alert&lt;/em&gt; and &lt;em&gt;Email me when my bill goes above £5&lt;/em&gt;. Let's click next.&lt;/p&gt;

&lt;p&gt;Now we can see a summary of everything we just entered along with a graph of estimated charges. Finally let's click on create alarm. The alarms page in CloudWatch should look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lA4kdgzB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/630705/109870194-b24d1480-7c61-11eb-9f03-95ddc2515c2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lA4kdgzB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/630705/109870194-b24d1480-7c61-11eb-9f03-95ddc2515c2b.png" alt="https://user-images.githubusercontent.com/630705/109870194-b24d1480-7c61-11eb-9f03-95ddc2515c2b.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it's time to approve the subscription email we received earlier. Clicking the refresh button on the billing alarms page now shows the actions set to &lt;em&gt;1 action(s)&lt;/em&gt;. The state however still shows &lt;em&gt;insufficient data&lt;/em&gt;, which is ok as we do not have any charges yet.&lt;/p&gt;

&lt;p&gt;Now we can go and start playing around with AWS services and in case we forget to remove something that charges us over time, we will be notified. Happy hacking.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learninpublic</category>
      <category>freetier</category>
      <category>100daysofaws</category>
    </item>
    <item>
      <title>AWS Learn In Public Week 1, the EC2 basics</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 28 Feb 2021 16:36:58 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/aws-learn-in-public-week-1-the-ec2-basics-a9</link>
      <guid>https://dev.to/harrisgeo88/aws-learn-in-public-week-1-the-ec2-basics-a9</guid>
      <description>&lt;p&gt;Hello there,&lt;/p&gt;

&lt;p&gt;Last weekend due to a gaffe at work, I decided to invest and improve my AWS skills. It has been a desire of mine for a few months now to go for the &lt;a href="https://aws.amazon.com/certification/certified-developer-associate/"&gt;AWS developer associate certification&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I really like the bite-sized blogging experience on Twitter and felt that this method could motivate me to do it consistently, while others can learn along with me. I "signed up" to the 100 days of AWS challenge and also to the learn in public hashtags.&lt;/p&gt;

&lt;p&gt;I got some really positive feedback when I made &lt;a href="https://twitter.com/harrisgeo88/status/1363201771229892610"&gt;this announcement&lt;/a&gt; on Twitter. My goal was to post at least one fact about what I was studying that day every single day.&lt;/p&gt;

&lt;p&gt;My learning usually happens early in the morning before work, and my tweet goes out at around the time I take my lunch break. Let's see how the first week went. I started with the AWS EC2 basics.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1363473813636329475"&gt;21/02&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is an AWS Virtual Private Cloud (VPC)?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Amazon VPC enables you to launch AWS resources into a virtual network that you've defined.&lt;/li&gt;
&lt;li&gt;Think of it as a traditional network within your data centre that uses AWS infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1363836453822763012"&gt;22/02&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is an AWS Availability Zone (AZ)?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AZs are AWS data centres that can be found within AWS regions. Each region has multiple AZs.&lt;/li&gt;
&lt;li&gt;As a best practice to ensure high availability for your system, a VPC can span multiple AZs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1364200098704494599"&gt;23/02&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is an AWS subnet?&lt;/p&gt;

&lt;p&gt;A range of IP addresses in your VPC.&lt;/p&gt;

&lt;p&gt;✅ you can launch AWS resources into a subnet that you select&lt;/p&gt;

&lt;p&gt;❌ you cannot launch any instances without subnets&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A subnet is always mapped to a single AZ&lt;/li&gt;
&lt;li&gt;As a best practice subnets should be spread amongst AZs for redundancy and failover purposes.&lt;/li&gt;
&lt;/ul&gt;
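&lt;p&gt;The "range of IP addresses" idea is easy to play with using Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module: carve a VPC CIDR block into per-AZ subnets. The CIDR block and AZ names below are just example values:&lt;/p&gt;

```python
# Sketch: carving a VPC CIDR block into one /24 subnet per AZ
# using the standard ipaddress module.
from ipaddress import ip_network

vpc = ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:3]   # one /24 per AZ

for az, subnet in zip(["eu-west-1a", "eu-west-1b", "eu-west-1c"], subnets):
    print(az, subnet)
# eu-west-1a 10.0.0.0/24
# eu-west-1b 10.0.1.0/24
# eu-west-1c 10.0.2.0/24
```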

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1364561732924973056"&gt;24/02&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Types of AWS subnets&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public subnets for things that are connected to the internet. e.g. web servers&lt;/li&gt;
&lt;li&gt;Private subnets for things that are not connected to the internet e.g. Databases&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1364923867341328384"&gt;25/02&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Time for Networking in AWS.&lt;br&gt;
What is an AWS internet gateway?&lt;br&gt;
A horizontally scaled, redundant and highly available VPC component that allows communication between instances in our VPC and the internet.&lt;/p&gt;

&lt;p&gt;❗ Each VPC can only have 1 internet gateway&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1365287768142667776"&gt;26/02&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;What is an AWS AMI (Amazon Machine Image)?&lt;br&gt;
A template containing the operating system and software that will be used when launching EC2 instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://twitter.com/harrisgeo88/status/1365647636213035010"&gt;27/02&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;AWS Tip 💡&lt;/p&gt;

&lt;p&gt;Whenever you get timeout errors, it's very likely to be related to your security group settings.&lt;br&gt;
Start by investigating your inbound / outbound rules.&lt;/p&gt;
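&lt;p&gt;One way to start that investigation is to dump the group's rules with the AWS CLI (the group id below is a placeholder):&lt;/p&gt;

```shell
# Print the inbound (IpPermissions) and outbound (IpPermissionsEgress)
# rules of a security group, so you can spot a missing port or CIDR.
aws ec2 describe-security-groups --group-ids sg-0abc123
```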

&lt;h2&gt;
  
  
  Week 1 Summary
&lt;/h2&gt;

&lt;p&gt;The experience has been totally positive! I'm still trying to figure out the best way to share this information, but the purpose of such a challenge, apart from learning a new skill, is to see how much you can improve over time.&lt;/p&gt;

&lt;p&gt;If you want to follow my journey, feel free to &lt;a href="https://twitter.com/harrisgeo88"&gt;follow me on Twitter&lt;/a&gt; and please reach out :). Next week I will be looking at AWS ELB (Elastic Load Balancer).&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learninpublic</category>
      <category>100daysofaws</category>
    </item>
    <item>
      <title>Docker in plain English part 1. Building and running Docker containers</title>
      <dc:creator>Harris Geo 👨🏻‍💻</dc:creator>
      <pubDate>Sun, 24 Jan 2021 13:55:43 +0000</pubDate>
      <link>https://dev.to/harrisgeo88/docker-in-plain-english-part-1-building-and-running-docker-containers-560k</link>
      <guid>https://dev.to/harrisgeo88/docker-in-plain-english-part-1-building-and-running-docker-containers-560k</guid>
      <description>&lt;p&gt;Docker is a great tool and it is really useful for automating our workflows. I have been using Docker for many years, yet sometimes I find myself forgetting the basic commands.&lt;/p&gt;

&lt;p&gt;The scope of this series is to give some really simple examples of how to use Docker. In this article we will dockerise a static website. For those not familiar with the Docker vocabulary, to dockerise an app means to package it into a Docker container. Before we start, let's download and install the Docker CLI from the &lt;a href="https://www.docker.com/products/docker-desktop"&gt;official website&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The static website
&lt;/h2&gt;

&lt;p&gt;Here's a dead simple website that we are going to dockerise.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;Simple website&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Hello there&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;
      I am a simple website and I live inside a Docker container.
    &lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's copy that code, open our terminal / code editor and paste it into an &lt;code&gt;index.html&lt;/code&gt; file. Now let's create a Dockerfile.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dockerfile
&lt;/h2&gt;

&lt;p&gt;We can now create a new file with the name &lt;code&gt;Dockerfile&lt;/code&gt; and add the following code into it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; php:7.0-apache&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /var/www/html/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What we're doing here is telling Docker to start from the &lt;code&gt;php:7.0-apache&lt;/code&gt; base image, which comes with Apache preinstalled. The next step copies everything from the current directory into &lt;code&gt;/var/www/html/&lt;/code&gt;, which is the directory Apache serves HTML (and other) files from. Now we have everything we need to build our Docker image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Docker image
&lt;/h2&gt;

&lt;p&gt;We have configured what the Docker image is going to contain so let's build it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; static-website &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-t&lt;/code&gt; flag stands for tag. Tagging images is a good practice that makes our lives much easier when looking through long lists of Docker images.&lt;/p&gt;

&lt;p&gt;The first time we run that command, it is going to take a few seconds or minutes depending on our internet connection. That is because Docker has to download the base image we specified in the Dockerfile.&lt;/p&gt;

&lt;p&gt;We can see all of the available images with the &lt;code&gt;docker images&lt;/code&gt; command. Our command line should look like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  docker images
REPOSITORY           TAG          IMAGE ID       CREATED              SIZE
static-website       latest       3152d04a164f   About a minute ago   368MB
php                  7.0-apache   aa67a9c9814f   2 years ago          368MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a Docker image for our website, we can spin up some Docker containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running our Docker container
&lt;/h2&gt;

&lt;p&gt;To spin up our Docker container we simply run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 4000:80 &lt;span class="nt"&gt;--name&lt;/span&gt; my-website static-website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wow, there are a lot of things happening here. Let me quickly explain what's going on.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-d&lt;/code&gt; is a flag that tells Docker to detach the container from the current process and run it in the background. If we don't include that flag, then killing the root process (in other words, closing the terminal) will also stop the container&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-p&lt;/code&gt; is a flag that stands for port. This is where we tell Docker which port to expose the container on to the outside world, mapped to the port used internally. In this case Apache uses port &lt;strong&gt;80&lt;/strong&gt; and we want to make it available on port &lt;strong&gt;4000&lt;/strong&gt; of our localhost&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--name&lt;/code&gt; is quite straightforward. It is the name we want to give to our Docker container&lt;/li&gt;
&lt;li&gt;the final argument &lt;code&gt;static-website&lt;/code&gt; refers to the Docker image we want to use to run the Docker container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may notice that once we run that command a long string is returned. That string is the id of the container and we can use it to stop the container later. However, since that id is hard to keep track of, we can also view all of the containers that are currently running with &lt;code&gt;docker ps&lt;/code&gt;. That should look like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜  docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                            NAMES
9254225687cd   static-website    &lt;span class="s2"&gt;"docker-php-entrypoi…"&lt;/span&gt;   11 minutes ago   Up 11 minutes   0.0.0.0:4000-&amp;gt;80/tcp           my-website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if we open &lt;a href="http://localhost:4000"&gt;http://localhost:4000&lt;/a&gt; we should be able to see our static website which runs inside a Docker container we just created! 🎉&lt;/p&gt;

&lt;h2&gt;
  
  
  Stopping containers
&lt;/h2&gt;

&lt;p&gt;While experimenting with Docker we may have noticed that we have created multiple images and containers. Given the example from above, to stop the container we can do the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop 9254225687cd
&lt;span class="c"&gt;# or&lt;/span&gt;
docker stop my-website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where I find that naming containers comes in quite handy. Stopping a container does not mean that we have removed it. Running &lt;code&gt;docker start my-website&lt;/code&gt; should start it again.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip: Remember that &lt;code&gt;docker ps&lt;/code&gt; will only show us the containers that are currently running. To see everything, including the ones that are stopped we can run &lt;code&gt;docker ps -a&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Cleaning up Docker containers and images
&lt;/h2&gt;

&lt;p&gt;All of the images and containers we have created can take up a lot of space, so let's talk about how to clean things up. By running &lt;code&gt;docker ps -a&lt;/code&gt;, we can view and copy the ids of the containers we want to remove.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm &lt;/span&gt;ac6f4b61de14
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Tip: If you have more than one container you want to remove, you don't have to remove them one by one. You can pass all of them to the &lt;code&gt;docker rm&lt;/code&gt; command like &lt;code&gt;docker rm id1 id2 ...&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
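&lt;p&gt;Taking that tip one step further, a small sketch: instead of copying ids by hand, &lt;code&gt;docker ps&lt;/code&gt; can print just the ids of stopped containers and feed them straight into &lt;code&gt;docker rm&lt;/code&gt;:&lt;/p&gt;

```shell
# -a includes stopped containers, -q prints only their ids,
# -f filters to containers whose status is "exited".
docker rm $(docker ps -aq -f status=exited)
```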

&lt;p&gt;There are also more commands that can help with cleaning up. However, opinions differ on whether you should use them, so let's not talk about them yet. For Docker images the logic is quite similar.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker image &lt;span class="nb"&gt;rm &lt;/span&gt;edec9bdcc8d2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Aaaand that's it 🙌
&lt;/h2&gt;

&lt;p&gt;Congratulations, we now know how to get started on our Docker journey. The next step I would recommend is to try to dockerise your React or Node.js apps. The getting-started page of the &lt;a href="https://docs.docker.com/get-started/02_our_app/"&gt;official docs&lt;/a&gt; has some great examples.&lt;/p&gt;

&lt;p&gt;In the next part we are going to talk about spinning up multiple containers using &lt;code&gt;docker-compose&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Did you enjoy this content? &lt;a href="https://tinyletter.com/harrisgeo88"&gt;Subscribe to my newsletter&lt;/a&gt; and get notified when I post new stuff.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
