<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Filipe Motta</title>
    <description>The latest articles on DEV Community by Filipe Motta (@filipemotta).</description>
    <link>https://dev.to/filipemotta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F634282%2F7e256ead-4434-420a-99ae-13287f715624.png</url>
      <title>DEV Community: Filipe Motta</title>
      <link>https://dev.to/filipemotta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/filipemotta"/>
    <language>en</language>
    <item>
      <title>An introduction and setting up kubernetes cluster on AWS using KOPS</title>
      <dc:creator>Filipe Motta</dc:creator>
      <pubDate>Tue, 20 Jul 2021 12:54:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/an-introduction-and-setting-up-kubernetes-cluster-on-aws-using-kops-50m</link>
      <guid>https://dev.to/aws-builders/an-introduction-and-setting-up-kubernetes-cluster-on-aws-using-kops-50m</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;In this post I’m going to introduce kOps - Kubernetes Operations - its concepts, why you should use it, and its advantages, along with a general overview. Then I will provision AWS infrastructure using kOps, scale the infrastructure up and down, and finally deploy a monitoring solution to test the cluster using an ingress.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s kOps ?
&lt;/h3&gt;

&lt;p&gt;Managing Kubernetes clusters across multiple regions and clouds presents a handful of incredibly complex challenges, and kOps is one of the easiest ways to get a production-grade Kubernetes cluster up and running. Kubernetes Operations helps us create, destroy, upgrade, and maintain production-grade, highly available Kubernetes clusters on cloud infrastructure. As Kubernetes administrators, we know how important it is to keep our clusters upgraded to a version that is patched against known vulnerabilities, and kOps helps us accomplish that. Another challenge is provisioning a Kubernetes cluster on cloud infrastructure, because we have to deal with instance groups, Route 53, autoscaling groups, ELBs (for the API server), security groups, master bootstrapping, node bootstrapping, and rolling updates to the cluster; kOps makes this work easier. In short, managing a Kubernetes cluster on AWS without any tooling is a complicated process, and I do not recommend it.&lt;/p&gt;

&lt;p&gt;kOps is an open-source tool and completely free to use, but you are responsible for paying for and maintaining the underlying infrastructure it creates to run your Kubernetes cluster. According to the official site, AWS (Amazon Web Services) is currently officially supported, with DigitalOcean, GCE, and OpenStack in beta support, and Azure and AliCloud in alpha.&lt;/p&gt;

&lt;h3&gt;
  
  
  kOps Advantages
&lt;/h3&gt;

&lt;p&gt;Specifically on AWS, why use kOps instead of a managed cloud solution like EKS?&lt;/p&gt;

&lt;p&gt;One of the major advantages is that kOps creates the cluster as plain EC2 instances: you can access the nodes directly and make custom modifications. As a result, you can choose which networking layer to use, choose the size of the master instances, directly monitor the master and worker nodes, and scale your infrastructure up and down just by editing a file. You also have the option of provisioning either a highly available Kubernetes cluster or a single-master cluster, which might be desirable for dev and test environments where high availability is not a requirement.&lt;/p&gt;

&lt;p&gt;kOps is also built on a state-sync model for dry-runs and automatic idempotency, which provides a powerful way to version-control your cluster setup and makes it possible to adopt GitOps as a pull model instead of a push model, following best practices. If you prefer, kOps can also generate Terraform configuration for your resources instead of creating them directly, which is a nice feature if you already use Terraform.&lt;/p&gt;

&lt;p&gt;According to the official site, kOps has the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Automates the provisioning of Highly Available Kubernetes clusters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Built on a state-sync model for dry-runs and automatic idempotency&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ability to generate Terraform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports zero-config managed kubernetes add-ons&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Command line autocompletion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;YAML Manifest Based API Configuration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Templating and dry-run modes for creating Manifests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose from most popular CNI Networking providers out-of-the-box&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-architecture ready with ARM64 support&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Capability to add containers, as hooks, and files to nodes via a cluster manifest&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  kOps Overview
&lt;/h3&gt;

&lt;p&gt;In this section, I will cover only the main commands that I consider important for provisioning and maintaining a Kubernetes cluster with kOps. If you want to go deeper, you can look them up on the official website.&lt;/p&gt;

&lt;p&gt;kops create&lt;br&gt;
kops create creates a resource, such as a cluster, an instance group, or a secret, using command-line parameters, YAML configuration specification files, or stdin.&lt;/p&gt;

&lt;p&gt;For example, there are two ways of registering a cluster: using a cluster spec file or using cli arguments.&lt;/p&gt;

&lt;p&gt;If you would like to create a cluster in AWS with High Availability masters you can use these parameters:&lt;/p&gt;
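&lt;p&gt;For example, a highly available cluster with three masters spread across three zones (the cluster name and zones below are placeholders) can be registered with:&lt;/p&gt;

```shell
# register an HA cluster spec: three masters and three nodes across three AZs
kops create cluster \
  --node-count 3 \
  --master-count 3 \
  --zones us-west-2a,us-west-2b,us-west-2c \
  --master-zones us-west-2a,us-west-2b,us-west-2c \
  k8s-cluster.example.com
```

&lt;p&gt;Without --yes, this only registers the spec in the state store; nothing is created in AWS yet.&lt;/p&gt;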

&lt;p&gt;Alternatively, you can save the configuration to a file and apply it later, so it is a good idea to keep it under version control. As with kubectl, you can use --dry-run -o yaml in place of the --yes parameter.&lt;/p&gt;
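&lt;p&gt;As a sketch, assuming the same placeholder cluster name, dumping the spec to a file and registering it later looks like this:&lt;/p&gt;

```shell
# write the cluster spec to a file instead of the state store
kops create cluster --node-count 3 --master-count 3 \
  --zones us-west-2a,us-west-2b,us-west-2c \
  --dry-run -o yaml k8s-cluster.example.com > cluster.yaml

# later (for example after review or a commit), register and apply it
kops create -f cluster.yaml
kops update cluster k8s-cluster.example.com --yes
```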

&lt;p&gt;Once in production, you can add nodes when needed. For instance, to add a single instance group with the node role to a cluster, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops create ig --name=k8s-cluster.example.com node-example \
  --role node --subnet my-subnet-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As mentioned, you can first register the infrastructure and then run kops update cluster --yes to actually create the resources.&lt;/p&gt;

&lt;p&gt;kops edit&lt;br&gt;
As we saw, kops create cluster creates a cluster specification in the registry from CLI arguments. In most cases, you will want to edit the cluster spec with kops edit before actually creating the cloud resources. Once you are satisfied, you can add the --yes flag to create the cluster, including its cloud resources. Even while the resources are running, you can use kops edit at any time and apply the changes afterwards.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops edit cluster k8s.cluster.site --state=s3://my-state-store
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops edit instancegroup --name k8s-cluster.example.com nodes --state=s3://my-state-store
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;S3 state&lt;br&gt;
We will look at the S3 state store in more detail later, but for now keep in mind that it stores the top-level config file. This file holds the main configuration of your cluster: instance types, zones, and so on.&lt;br&gt;
kops update cluster&lt;br&gt;
kops update cluster creates or updates the cloud resources to match the cluster spec. If the cluster or cloud resources already exist, this command may modify them. As a precaution, it is safer to run in preview mode first using kops update cluster --name, and once the output matches your expectations, apply the changes by adding --yes to the command: kops update cluster --name --yes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops update cluster k8s-cluster.example.com --state=s3://my-state-store --yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;kops get&lt;br&gt;
kops get clusters lists all clusters in the registry (state store). You can also get one or many resources, such as clusters, instance groups, and secrets, with or without YAML output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops get clusters
kops get k8s-cluster.example.com -o yaml
kops get secrets admin -oplaintext
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kops delete cluster&lt;br&gt;
kops delete cluster deletes the cloud resources (instances, DNS entries, volumes, ELBs, VPCs etc) for a particular cluster. It also removes the cluster from the registry.&lt;/p&gt;

&lt;p&gt;As a precaution, it is safer to run in preview mode first using kops delete cluster --name, and once the output matches your expectations, perform the actual deletion by adding --yes to the command: kops delete cluster --name --yes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops delete cluster --name=k8s.cluster.site --yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops delete instance ip-xx.xx.xx.xx.ec2.internal --yes   (delete an instance (node) from active cluster)
kops rolling-update cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kops rolling-update cluster&lt;br&gt;
Some changes require a rolling update: adding, deleting, or updating nodes, or other major changes to the cluster configuration. To perform a rolling update, you first need to update the cloud resources with kops update cluster --yes. Nodes may additionally be marked for update by placing a kops.k8s.io/needs-update annotation on them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops rolling-update cluster (Preview a rolling update)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops rolling-update cluster --yes 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Nodes will be drained and the cluster will be validated between node replacements.)&lt;br&gt;
These are the commands I consider important for getting started with kOps. There are many other commands, resources, operations, and add-ons worth reviewing, but for now let’s focus on practice.&lt;/p&gt;
&lt;h3&gt;
  
  
  Provisioning an AWS Infrastructure with kOps
&lt;/h3&gt;

&lt;p&gt;In this section I will show how to install kOps, install and configure the AWS CLI, configure a Route 53 subdomain, and configure, edit, and delete a cluster and instances.&lt;/p&gt;
&lt;h4&gt;
  
  
  Install kOps
&lt;/h4&gt;

&lt;p&gt;kOps can be installed alongside kubectl to manage and operate your Kubernetes cluster. On Linux you can install it as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or install it from source. If you would like to install it on another OS, you can find the instructions on the official site.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuring the AWS CLI
&lt;/h4&gt;

&lt;p&gt;To interact with AWS resources you need the AWS CLI installed, which you can do via pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install awscli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing, configure it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws configure
AWS Access Key ID [None]: XXXXXXX
AWS Secret Access Key [None]:  XXXXXXX
Default region name [None]: us-west-2
Default output format [None]: json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;kOps requires the following IAM permissions to work properly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AmazonEC2FullAccess&lt;/li&gt;
&lt;li&gt;AmazonRoute53FullAccess&lt;/li&gt;
&lt;li&gt;AmazonS3FullAccess&lt;/li&gt;
&lt;li&gt;IAMFullAccess&lt;/li&gt;
&lt;li&gt;AmazonVPCFullAccess&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create an IAM group and a user named kops with the required permissions.&lt;/p&gt;
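&lt;p&gt;One way to do that with the AWS CLI (this mirrors the standard kops setup flow; run it with credentials that are allowed to manage IAM):&lt;/p&gt;

```shell
aws iam create-group --group-name kops

# attach the five required managed policies
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
  aws iam attach-group-policy --policy-arn "arn:aws:iam::aws:policy/${policy}" --group-name kops
done

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops   # prints the access key pair to use with 'aws configure'
```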



&lt;p&gt;Now you can list your users:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam list-users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have another AWS profile in your environment, you can set or change the default profile before provisioning the infrastructure with kOps.&lt;/p&gt;

&lt;p&gt;First, configure a new profile. In this case I called it kops and set up my keys.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure --profile kops                                                                         
AWS Access Key ID [None]: XXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXX
Default region name [None]: us-west-2
Default output format [None]: json
After that we can confirm the new profile in: cat ~/.aws/config

[default]
region = us-east-2
[profile kops]
region = us-west-2
output = json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two ways to use the new profile: the first is to set the AWS_PROFILE environment variable to the profile name; the other is to pass the --profile option to the aws command. I am going to use the first one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ export AWS_PROFILE=kops
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because aws configure doesn’t export these variables for kops to use, we export them now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Route53
&lt;/h4&gt;

&lt;p&gt;In order to build a Kubernetes cluster with kops we need DNS records. There are a few options here. In my case, I am going to use my own domain, filipemotta.me, which is hosted on AWS with Route 53. For this scenario, we have to create a new subdomain zone, kubernetes.filipemotta.me, and then set up route delegation to the new zone so that kops can create the respective records.&lt;/p&gt;

&lt;p&gt;To create a subdomain in Route 53 you need to follow the steps below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create a hosted zone with the subdomain name (in my case kubernetes.filipemotta.me)&lt;/li&gt;
&lt;li&gt;Route 53 will create the NS records for this subdomain&lt;/li&gt;
&lt;li&gt;create NS records in the parent domain (filipemotta.me) for the subdomain name (kubernetes.filipemotta.me), pointing at the NS records Route 53 created in the first step&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can test the delegation with the dig command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ dig +short NS kubernetes.filipemotta.me
ns-1293.awsdns-33.org.
ns-2009.awsdns-59.co.uk.
ns-325.awsdns-40.com.
ns-944.awsdns-54.net.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
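&lt;p&gt;If you prefer the CLI over the console for the hosted-zone step, here is a sketch (the caller reference is just an arbitrary unique string):&lt;/p&gt;

```shell
# create the subdomain hosted zone; the output lists its four NS servers
aws route53 create-hosted-zone --name kubernetes.filipemotta.me \
  --caller-reference "kops-subdomain-$(date +%s)"

# confirm the zone and its delegation set
aws route53 list-hosted-zones-by-name --dns-name kubernetes.filipemotta.me
```

&lt;p&gt;The NS records in the parent zone still have to point at those four servers, as described above.&lt;/p&gt;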



&lt;h4&gt;
  
  
  Create S3 Bucket
&lt;/h4&gt;

&lt;p&gt;We’ll need to create an S3 bucket for kops to store the state of our cluster. This bucket will become the source of truth for our cluster configuration.&lt;/p&gt;

&lt;p&gt;It’s important to create the S3 bucket with the name of your subdomain (or domain).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mb s3://clusters.kubernetes.filipemotta.me
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;S3 versioning&lt;br&gt;
It’s strongly recommended to enable versioning on your S3 bucket in case you ever need to revert to or recover a previous state store:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api put-bucket-versioning --bucket clusters.kubernetes.filipemotta.me --versioning-configuration Status=Enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before starting to create our cluster, let’s set some environment variables. You can export KOPS_STATE_STORE=s3://clusters.kubernetes.filipemotta.me &lt;/p&gt;

&lt;p&gt;and then kops will use this location by default. It is a good idea to put this in your bash profile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KOPS_STATE_STORE=s3://clusters.kubernetes.filipemotta.me
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  PROVISIONING
&lt;/h4&gt;

&lt;p&gt;Before provisioning our cluster, let’s list which availability zones are available in the chosen region.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-availability-zones --region us-west-2 --output text
AVAILABILITYZONES     us-west-2       available       usw2-az2        us-west-2a 
AVAILABILITYZONES     us-west-2       available       usw2-az1        us-west-2b 
AVAILABILITYZONES     us-west-2       available       usw2-az3        us-west-2c
AVAILABILITYZONES     us-west-2       available       usw2-az4        us-west-2d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this specific case, we can use the us-west-2a, us-west-2b, us-west-2c and us-west-2d.&lt;/p&gt;

&lt;p&gt;Finally, let’s create our first cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops create cluster --networking calico --node-count 3 --master-count 3 --zones us-west-2a,us-west-2b,us-west-2c --master-zones us-west-2a,us-west-2b,us-west-2c kubernetes.filipemotta.me
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A brief explanation is needed at this point.&lt;/p&gt;

&lt;p&gt;Since KOPS_STATE_STORE=s3://clusters.kubernetes.filipemotta.me was set previously, it does not need to be passed to this command. The default networking of kOps is kubenet, but it is not recommended for production because it does not support network policies and other features, so we have to use one of the supported CNI networks. In this case, I used Calico.&lt;/p&gt;

&lt;p&gt;To prevent the masters from becoming unavailable, we provisioned a highly available Kubernetes cluster with 3 master nodes and 3 worker nodes. These nodes, masters and workers, are spread across all the availability zones (we could define a smaller number). If you do not define a highly available cluster, you cannot interact with the API server during an upgrade or a node failure, so you can’t add nodes, scale pods, or replace terminated pods.&lt;/p&gt;

&lt;p&gt;When you define the node counts, kops runs a dedicated ASG (autoscaling group) for each instance group and stores data on EBS volumes. We’ll see soon in the configuration file that we can define the minimum and maximum number of nodes (the minimum is the quantity defined by the parameters --node-count 3 and --master-count 3). Finally, we set the cluster name, which must match the subdomain name created previously.&lt;/p&gt;

&lt;p&gt;There is one more important option: the topology. If it is not set, the topology is public (our case); it can also be set to private (--topology private). So, what’s the difference between them?&lt;/p&gt;

&lt;p&gt;In a public topology, the subnets are routed to an Internet gateway, so they are public subnets. In a private topology, the nodes live in private subnets, and the cluster is reached via the Kubernetes API and an (optional) SSH bastion instance (--bastion="true").&lt;/p&gt;
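&lt;p&gt;As a sketch, a private-topology variant of the create command used in this post would add the topology and bastion flags:&lt;/p&gt;

```shell
# same HA layout as before, but with nodes in private subnets behind a bastion
kops create cluster --networking calico \
  --topology private --bastion="true" \
  --node-count 3 --master-count 3 \
  --zones us-west-2a,us-west-2b,us-west-2c \
  --master-zones us-west-2a,us-west-2b,us-west-2c \
  kubernetes.filipemotta.me
```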

&lt;p&gt;After applying the configuration, note the suggestions kops prints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;list clusters with: kops get cluster&lt;/li&gt;
&lt;li&gt;edit this cluster with: kops edit cluster kubernetes.filipemotta.me&lt;/li&gt;
&lt;li&gt;edit your node instance group: kops edit ig --name=kubernetes.filipemotta.me nodes-us-west-2a&lt;/li&gt;
&lt;li&gt;edit your master instance group: kops edit ig --name=kubernetes.filipemotta.me master-us-west-2a&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Following these suggestions, before applying we can edit the cluster configuration or any node or master instance group. Let’s check the cluster configuration, for example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops edit cluster kubernetes.filipemotta.me
etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
    - encryptedVolume: true
      instanceGroup: master-us-west-2b
      name: b
...
kubernetesApiAccess:
  - 0.0.0.0/0
...
  networkCIDR: 172.20.0.0/16
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: us-west-2a
    type: Public
    zone: us-west-2a
...
  topology:
    dns:
      type: Public
    masters: public
    nodes: public
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we can change the subnets, the network and cluster CIDRs, the resources requested, restrict access to the API, and much more.&lt;/p&gt;

&lt;p&gt;Since kops separates the configuration per availability zone, we should set up each one as needed. Supposing we want to change the machine type or configure the autoscaling group for a specific availability zone, we can do it with this command:&lt;/p&gt;

&lt;p&gt;kops edit ig --name=kubernetes.filipemotta.me nodes-us-west-2b&lt;br&gt;
Note that we are editing nodes-us-west-2b, that is, the node instance group in a specific availability zone. kops created one instance group for each availability zone we defined.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: InstanceGroup
...
  machineType: t3.medium
  maxSize: 1
  minSize: 1
...
  role: Node
  subnets:
  - us-west-2b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After reviewing and editing any configuration you want, it’s time to apply the changes and start provisioning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops update cluster --name kubernetes.filipemotta.me --yes --admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...&lt;/p&gt;

&lt;p&gt;kOps has set your kubectl context to kubernetes.filipemotta.me&lt;/p&gt;

&lt;p&gt;Cluster changes have been applied to the cloud.&lt;br&gt;
...&lt;/p&gt;

&lt;p&gt;Yes!! It’s done! As you can see, kOps has set your kubectl context to kubernetes.filipemotta.me.&lt;/p&gt;

&lt;p&gt;After a few minutes, you should have a highly available Kubernetes cluster running on EC2 instances through kOps.&lt;/p&gt;

&lt;p&gt;You can see all the pods running in the cluster and check that your Calico network setup is deployed.&lt;/p&gt;
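&lt;p&gt;For example, assuming your kubectl context already points at the new cluster, the Calico pods appear in the kube-system namespace:&lt;/p&gt;

```shell
kubectl get nodes -o wide
kubectl get pods -n kube-system | grep calico   # calico-node pods, one per node
```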

&lt;p&gt;Now let’s add one more node to the cluster. It’s simple: edit the instance group for the zone where you want the new node. In this case I am going to add it in us-west-2a, so let’s edit it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops edit ig --name=kubernetes.filipemotta.me nodes-us-west-2a
...
spec:
  machineType: t3.medium
  maxSize: 2
  minSize: 2
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t forget to apply these changes…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ kops update cluster --name kubernetes.filipemotta.me --yes --admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some changes require a rolling update. If required, you should run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ kops rolling-update cluster --yes  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can see that the new node was deployed in the specified availability zone and is now part of the cluster.&lt;/p&gt;

&lt;p&gt;A common bottleneck of the control plane is the API server. As the number of pods and nodes grows, you will want to add more resources to handle the load. So let’s deploy a dedicated API server node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kops create ig --name=kubernetes.filipemotta.me new-apiserver --dry-run -o yaml &amp;gt; api-server.yaml

❯ cat api-server.yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
...
  name: new-apiserver
spec:
  machineType: t3.micro
  maxSize: 1
  minSize: 1
  ...
  role: APIServer
  subnets:
  - us-west-2a
  - us-west-2b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, I changed the role to APIServer, adjusted the min and max sizes, and set the machineType to t3.micro.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ kops create -f api-server.yaml
❯ kops update cluster --name kubernetes.filipemotta.me --yes    
❯ kops rolling-update cluster --yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because these changes involve control-plane nodes, a rolling update is required. Each master node is shut down and upgraded one at a time, but the cluster does not become unavailable because we set up a highly available cluster.&lt;/p&gt;

&lt;p&gt;Finally, at the end of this section, let’s delete one instance group node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ kops get instancegroups #get the instancegroups
❯ kops delete ig --name=kubernetes.filipemotta.me nodes-us-west-2b
Do you really want to delete instance group "nodes-us-west-2b"? This action cannot be undone. (y/N)
y
InstanceGroup "nodes-us-west-2b" found for deletion
I0716 15:15:48.767476   21651 delete.go:54] Deleting "nodes-us-west-2b"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final result is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❯ kubectl get nodes                
NAME                                          STATUS   ROLES                             AGE     VERSION
ip-172-20-101-63.us-west-2.compute.internal   Ready    node                              68m     v1.21.2
ip-172-20-108-27.us-west-2.compute.internal   Ready    api-server,control-plane,master   2m54s   v1.21.2
ip-172-20-56-77.us-west-2.compute.internal    Ready    node                              68m     v1.21.2
ip-172-20-59-46.us-west-2.compute.internal    Ready    node                              46m     v1.21.2
ip-172-20-60-40.us-west-2.compute.internal    Ready    api-server,control-plane,master   17m     v1.21.2
ip-172-20-91-25.us-west-2.compute.internal    Ready    api-server,control-plane,master   9m56s   v1.21.2
ip-172-20-93-4.us-west-2.compute.internal     Ready    api-server                        11m     v1.21.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploying Prometheus and Grafana in the Kubernetes Cluster&lt;br&gt;
For testing purposes, I deployed Prometheus and Grafana in the AWS cluster, along with an nginx ingress controller to expose the app outside the cluster. I won’t cover the app deployment or the ingress configuration and installation here, but you can see the final result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe ingress -n ingress-nginx     
Name:             ingress-host
Namespace:        ingress-nginx
Address:          a205622a4b5f24923fc8516-615762329.us-west-2.elb.amazonaws.com
Default backend:  default-http-backend:80 (&amp;lt;error: endpoints "default-http-backend" not found&amp;gt;)
Rules:
  Host                               Path  Backends
  ----                               ----  --------
  grafana.kubernetes.filipemotta.me  
                                     /   grafana:3000 (100.111.236.136:3000)
Annotations:                         &amp;lt;none&amp;gt;
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    15s (x3 over 10m)  nginx-ingress-controller  Scheduled for sync
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The exposed ingress has the following address: a205622a4b5f24923fc8516-615762329.us-west-2.elb.amazonaws.com. &lt;/p&gt;

&lt;p&gt;So I had to set up a CNAME DNS record pointing grafana.kubernetes.filipemotta.me to this address.&lt;/p&gt;
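&lt;p&gt;This can be done in the Route 53 console or with the CLI; here is a minimal sketch, where the hosted-zone ID Z0000000000000 is a placeholder for the kubernetes.filipemotta.me zone:&lt;/p&gt;

```shell
# upsert a CNAME from the Grafana hostname to the ELB address
# (Z0000000000000 is a placeholder hosted-zone ID)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000000 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "grafana.kubernetes.filipemotta.me",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "a205622a4b5f24923fc8516-615762329.us-west-2.elb.amazonaws.com"}]
      }
    }]
  }'
```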

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qwrqx5dl1gl2q7lz2wd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qwrqx5dl1gl2q7lz2wd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;And that’s it. We have successfully deployed a highly available and resilient Kubernetes cluster using kOps. This post has shown how to manage a Kubernetes cluster on AWS using kops. I think kOps is an awesome tool for running a production-grade Kubernetes cluster on AWS or other cloud providers. Try it!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Backup your file server or any data using Hybrid Cloud - Storage Gateway Solution (AWS)</title>
      <dc:creator>Filipe Motta</dc:creator>
      <pubDate>Tue, 22 Jun 2021 17:23:11 +0000</pubDate>
      <link>https://dev.to/aws-builders/backup-your-file-server-or-any-data-using-hybrid-cloud-storage-gateway-solution-aws-1pn0</link>
      <guid>https://dev.to/aws-builders/backup-your-file-server-or-any-data-using-hybrid-cloud-storage-gateway-solution-aws-1pn0</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;You can use cloud storage for on-premises data backups to reduce infrastructure and administration costs. There are several ways to back up your on-premises infrastructure to the cloud; here I’ll use a hybrid cloud (part of your infrastructure in the cloud and part on premises) to accomplish it. In this post I’ll show how AWS Storage Gateway solutions bridge on-premises data to the cloud. The common use cases are disaster recovery, backup &amp;amp; restore, and tiered storage.&lt;/p&gt;

&lt;p&gt;There are three types of Storage Gateway:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0xrj6oodv66qjoucxwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0xrj6oodv66qjoucxwa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before showing how I implemented it, here is a brief overview of each one.&lt;/p&gt;

&lt;h4&gt;
  
  
  File Gateway
&lt;/h4&gt;

&lt;p&gt;Databases and applications are often backed up directly to a file server on-premises. You can now simply point these backups to the File Gateway, which copies the data to Amazon S3. It supports S3 Standard, S3 Standard-IA, and S3 One Zone-IA, so you can configure your bucket lifecycle policy to move this data to any storage class in Amazon S3, depending on your needs.&lt;/p&gt;

&lt;p&gt;Features:&lt;/p&gt;

&lt;p&gt;NFS and SMB protocol support&lt;br&gt;
Lifecycle policies in Amazon S3&lt;br&gt;
Windows ACL support&lt;br&gt;
Most recently used data is cached in the file gateway&lt;br&gt;
Supports S3 Object Lock&lt;br&gt;
Bandwidth optimized&lt;br&gt;
Can be mounted on many servers&lt;/p&gt;
&lt;h4&gt;
  
  
  Volume Gateway
&lt;/h4&gt;

&lt;p&gt;The Volume Gateway provides either a local cache or full volumes on premises while also storing full copies of your volumes in the AWS Cloud. Volume Gateway also provides Amazon EBS snapshots of your data for backup or disaster recovery.&lt;/p&gt;

&lt;p&gt;Features:&lt;/p&gt;

&lt;p&gt;Block storage using the iSCSI protocol, backed by S3&lt;br&gt;
Backed by EBS snapshots, which can help restore on-premises volumes&lt;br&gt;
On-premises cache of recently accessed data&lt;br&gt;
Two types of Volume Gateway:&lt;br&gt;
Cached volumes: low-latency access to the most recently used data&lt;br&gt;
Stored volumes: the entire dataset is on premises, with scheduled backups to S3&lt;/p&gt;
&lt;h4&gt;
  
  
  Tape Gateway
&lt;/h4&gt;

&lt;p&gt;You can use Tape Gateway to replace physical tapes with virtual tapes in AWS. Tape Gateway acts as a drop-in replacement for tape libraries, tape media, and archiving services, without requiring changes to existing software or archiving workflows. It is most often used for enterprise backup purposes.&lt;/p&gt;

&lt;p&gt;Features:&lt;/p&gt;

&lt;p&gt;Virtual Tape Library (VTL) backed by Amazon S3 and Glacier&lt;br&gt;
Back up data using existing tape-based processes (and an iSCSI interface)&lt;br&gt;
Works with leading backup software vendors&lt;/p&gt;

&lt;p&gt;In summary:&lt;br&gt;
File Gateway =&amp;gt; file access / NFS (backed by S3)&lt;br&gt;
Volume Gateway =&amp;gt; volumes / block storage / iSCSI (backed by S3 with EBS snapshots)&lt;br&gt;
Tape Gateway =&amp;gt; VTL tape solution / backup with iSCSI (backed by S3 and Glacier)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvqvqqyzi1cleoigszst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvqvqqyzi1cleoigszst.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Volume Gateway Implementation
&lt;/h3&gt;

&lt;p&gt;As I said before, there are many ways to choose your strategy; each company has individual needs. Volume Gateway stores and manages on-premises data in Amazon S3 on your behalf and operates in either cached mode or stored mode. In my case, even though the backup partition is a file share, I chose the Volume Gateway type because of the following features:&lt;/p&gt;

&lt;p&gt;I needed an on-premises cache of recently accessed data (it provides low-latency access to cloud-backed storage)&lt;br&gt;
I needed EBS snapshot backing to restore on-premises volumes when needed (I used Volume Gateway in conjunction with Linux file servers on premises to provide scalable storage for on-premises file applications with cloud recovery options, using a stored-volume architecture to store all data locally and asynchronously back up point-in-time snapshots to AWS.)&lt;br&gt;
With stored volumes, you store your primary data locally while asynchronously backing it up to AWS. Stored volumes provide your on-premises applications with low-latency access to their entire datasets while also providing durable, offsite backups. You create storage volumes and mount them as iSCSI devices from your on-premises application servers. Data written to your stored volumes is kept on your on-premises storage hardware and asynchronously backed up to Amazon S3 as Amazon Elastic Block Store (Amazon EBS) snapshots. This solution is ideal if you need low-latency access to all your data on premises while also maintaining backups in AWS.&lt;/p&gt;

&lt;p&gt;See the following diagram about stored volumes architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv4daraefmi9msaqtg9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv4daraefmi9msaqtg9e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can deploy Volume Gateway as a virtual machine or on an EC2 instance. In this case, I deployed it as a virtual machine on on-premises infrastructure, but you can use whichever fits your environment.&lt;/p&gt;
&lt;h3&gt;
  
  
  Create a Gateway Type
&lt;/h3&gt;

&lt;p&gt;Before you create a volume to store data, you need to create a gateway and specify the kind of gateway you’ll use.&lt;/p&gt;

&lt;p&gt;First, Open the AWS Management Console at &lt;a href="https://console.aws.amazon.com/storagegateway/home" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/storagegateway/home&lt;/a&gt;, and choose the AWS Region that you want to create your gateway in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sj4jjxkdogzzobuzokz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sj4jjxkdogzzobuzokz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it is necessary to choose a host platform and download the VM appliance. Choose a hypervisor option and deploy the downloaded image to your hypervisor. During the deployment, add at least one local disk for your cache and one local disk for your upload buffer. See the requirements below:&lt;/p&gt;

&lt;p&gt;Hardware requirements for on-premises VMs&lt;/p&gt;

&lt;p&gt;When deploying your gateway on-premises, you must make sure that the underlying hardware on which you deploy the gateway VM can dedicate the following minimum resources:&lt;/p&gt;

&lt;p&gt;Four virtual processors assigned to the VM.&lt;/p&gt;

&lt;p&gt;16 GiB of reserved RAM for file gateways&lt;/p&gt;

&lt;p&gt;For volume and tape gateways, your hardware should dedicate the following amounts of RAM:&lt;/p&gt;

&lt;p&gt;16 GiB of reserved RAM for gateways with cache size up to 16 TiB&lt;/p&gt;

&lt;p&gt;32 GiB of reserved RAM for gateways with cache size 16 TiB to 32 TiB&lt;/p&gt;

&lt;p&gt;48 GiB of reserved RAM for gateways with cache size 32 TiB to 64 TiB&lt;/p&gt;

&lt;p&gt;80 GiB of disk space for installation of VM image and system data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxi3pr3gndx3x0lexd0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxi3pr3gndx3x0lexd0v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Depending on your hypervisor, and if you are deploying on on-premises infrastructure, you have to check certain options for each hypervisor. In my case I had to set up the following:&lt;/p&gt;

&lt;p&gt;VMware Setup&lt;br&gt;
Store your disk using the Thick provisioned format option. When you use thick provisioning, the disk storage is allocated immediately, resulting in better performance. In contrast, thin provisioning allocates storage on demand. On-demand allocation can affect the normal functioning of AWS Storage Gateway. For Storage Gateway to function properly, the VM disks must be stored in thick-provisioned format.&lt;/p&gt;

&lt;p&gt;Configure your gateway VM to use paravirtualized disk controllers&lt;/p&gt;

&lt;p&gt;Now you have to choose a Service Endpoint. This determines how your gateway will communicate with AWS storage services. In my case I used the public service endpoint, which communicates over the public internet.&lt;/p&gt;

&lt;p&gt;The next step is connecting your gateway. To do this, you need the IP address or activation key of your gateway VM. In my VMware environment, I connected through the console and set the IP address. For activation to succeed, verify that your gateway VM is running and that you can reach the IP address you set up previously.&lt;/p&gt;

&lt;p&gt;Now you have to configure your gateway to use the disks that you created when you deployed the appliance, according to the requirements of the gateway type. Since I used it for stored volumes, I had to configure the upload buffer accordingly. Below is a table with the different size requirements for each gateway type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlxg02gqmm430zh0jyaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlxg02gqmm430zh0jyaq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create a Volume
&lt;/h3&gt;

&lt;p&gt;Once the gateway is created, it’s time to create the storage volume that your applications will read from and write to. Previously, you allocated disks for the upload buffer, as required for a stored-volume Volume Gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1uulrct1gwr71iymwt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1uulrct1gwr71iymwt0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see below, to create a volume in the storage gateway you just created, select the gateway and then select the disk ID that you created to store data; the ID depends on which hypervisor you use.&lt;/p&gt;

&lt;p&gt;Note that in this step, if you would like to restore data from an EBS snapshot, you need to select that option and specify the snapshot ID you want. You can also use an existing disk or a new empty volume, which is our case.&lt;/p&gt;

&lt;p&gt;You also need to specify the iSCSI target name for the volume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxewarfjouztkm685i6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxewarfjouztkm685i6s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will be asked to configure CHAP authentication, which provides protection against playback attacks by requiring authentication to access storage volume targets. If you decline it, the volume will accept connections from any iSCSI initiator.&lt;/p&gt;
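&lt;p&gt;On the Linux initiator side, CHAP credentials can also be configured per node with iscsiadm before logging in. Here is a minimal, hedged sketch: the target name matches this example, while the portal, username, and secret are placeholders, and the script only prints the commands (remove the echo and run as root to apply them):&lt;/p&gt;

```shell
# Placeholders: replace with your target, portal, and CHAP credentials.
TARGET="iqn.1997-05.com.amazon:disk-test-purpose"
PORTAL="[GATEWAY_IP]:3260,1"
CHAP_USER="chapuser"
CHAP_SECRET="secret-of-12-to-16-chars"

# Each iSCSI auth setting is updated on the node record before --login.
for setting in \
    "node.session.auth.authmethod CHAP" \
    "node.session.auth.username $CHAP_USER" \
    "node.session.auth.password $CHAP_SECRET"; do
  name=${setting%% *}    # e.g. node.session.auth.authmethod
  value=${setting#* }    # e.g. CHAP
  echo iscsiadm --mode node --targetname "$TARGET" --portal "$PORTAL" \
       --op update --name "$name" --value "$value"
done
```

&lt;p&gt;The same credentials can also be set globally in /etc/iscsi/iscsid.conf if you prefer a default for all targets.&lt;/p&gt;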

&lt;p&gt;Once the volume is created, you will see the volumes available to be mounted from your initiator.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxib31mtxrr7496rfv16m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxib31mtxrr7496rfv16m.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Using a Volume
&lt;/h3&gt;

&lt;p&gt;In my case, the goal was to use hybrid cloud to store Samba files on premises and have backup protection with snapshots in the cloud. So, I will show you how to create a partition on the disk created previously.&lt;/p&gt;

&lt;p&gt;Depending on which Linux distribution you use, it is necessary to install a package to manage iSCSI (iscsi-initiator-utils on Red Hat Enterprise Linux, for example).&lt;/p&gt;

&lt;p&gt;Once installed, discover the volume or VTL device targets defined for a gateway. Use the following discovery command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iscsiadm --mode discovery --type sendtargets --portal [GATEWAY_IP]:3260
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of the discovery command should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[GATEWAY_IP]:3260,1 iqn.1997-05.com.amazon:part-home
[GATEWAY_IP]:3260,1 iqn.1997-05.com.amazon:disk-test-purpose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it is necessary to connect to a target.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iscsiadm --mode node --targetname iqn.1997-05.com.amazon:[ISCSI_TARGET_NAME] --portal [GATEWAY_IP]:3260,1 --login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to replace [ISCSI_TARGET_NAME] with the value that you set up previously and, obviously, the gateway IP. The command should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iscsiadm --mode node --targetname iqn.1997-05.com.amazon:disk-test-purpose --portal [GATEWAY_IP]:3260,1 --login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A successful output should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logging in to [iface: default, target: iqn.1997-05.com.amazon:disk-test-purpose, portal: [GATEWAY_IP],3260] (multiple)
Login to [iface: default, target: iqn.1997-05.com.amazon:disk-test-purpose, portal: [GATEWAY_IP],3260] successful.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the volume is attached to the client machine (the initiator). To do so, use the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls -l /dev/disk/by-path
total 0
lrwxrwxrwx 1 root root  9 abr  7 09:59 ip-[GATEWAY_IP]:3260-iscsi-iqn.1997-05.com.amazon:disk-test-purpose-lun-0 -&amp;gt; ../../sdd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The volume attached in our case is /dev/sdd, so we will format this device before using it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Formatting Your Volume using Logical Volumes
&lt;/h3&gt;

&lt;p&gt;Now let’s use LVM, which provides more flexibility and allows volumes to be resized dynamically when needed, without any restarts.&lt;/p&gt;

&lt;p&gt;The first thing to do is to create a physical volume on the device that was created previously on the storage gateway and attached through iSCSI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pvcreate /dev/sdd
Physical volume "/dev/sdd" successfully created.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the physical volume was created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pvdisplay /dev/sdd
  "/dev/sdd" is a new physical volume of "20,00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd
  VG Name               
  PV Size               20,00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               Vy51TU-sSDDF-SSDFD-Hnzz-a0uP-u3WP-gI0E7C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you need to create a volume group or add the physical volume to an existing one. For example, I already had a disk with 500 GB allocated; if I need to expand it, I only need to add the new disk to the existing volume group and extend the existing logical volume. For now, though, let’s create a volume group and add the physical volume we just created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vgcreate stg_gtw /dev/sdd
Volume group "stg_gtw" successfully created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vgdisplay stg_gtw
  --- Volume group ---
  VG Name               stg_gtw
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               &amp;lt;20,00 GiB
  PE Size               4,00 MiB
  Total PE              5119
  Alloc PE / Size       0 / 0   
  Free  PE / Size       5119 / &amp;lt;20,00 GiB
  VG UUID               ZYAOrS-zgsdd-To9k-SDDS-gwpn-xPzo-DEdFDF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, let’s create a logical volume using 100% of the volume group’s size (in this case, 20 GB):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lvcreate -l 100%FREE -n lv_disk_teste stg_gtw
Logical volume "lv_disk_teste" created.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lvdisplay /dev/stg_gtw/lv_disk_teste
  --- Logical volume ---
  LV Path                /dev/stg_gtw/lv_disk_teste
  LV Name                lv_disk_teste
  VG Name                stg_gtw
  LV UUID                EqsSDDe-OsGG4-a0Tr-djFFGX-fKSy-Enu6-jCDF1QV
  LV Write Access        read/write
  LV Creation host, time .local, 2021-04-07 10:36:17 -0300
  LV Status              available
  # open                 0
  LV Size                &amp;lt;20,00 GiB
  Current LE             5119
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
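&lt;p&gt;As mentioned earlier, growing this storage later is just a matter of adding a new physical volume to the group and extending the logical volume, with no restart. Here is a hedged sketch of that flow, assuming a hypothetical new iSCSI device /dev/sde; the script only prints the commands (remove the echo and run as root to apply):&lt;/p&gt;

```shell
# Hypothetical new device exposed by the gateway after adding a disk.
NEW_PV="/dev/sde"
VG="stg_gtw"
LV="/dev/stg_gtw/lv_disk_teste"

# Print each step of the online-expand flow; drop the echo to execute.
echo pvcreate "$NEW_PV"             # initialize the new disk for LVM
echo vgextend "$VG" "$NEW_PV"       # add it to the existing volume group
echo lvextend -l +100%FREE "$LV"    # grow the LV over the new free extents
echo resize2fs "$LV"                # grow the ext4 filesystem online
```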



&lt;p&gt;Once created, let’s format the logical volume to store data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; mkfs.ext4 /dev/stg_gtw/lv_disk_teste
mke2fs 1.44.5 (15-Dec-2018)
Creating filesystem with 5241856 4k blocks and 1310720 inodes
Filesystem UUID: s23e234-569c-4fdf-a4b2-5856e790e3fa
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

&amp;gt; mkdir /mnt/disk-teste-purpose
&amp;gt; mount /dev/stg_gtw/lv_disk_teste /mnt/disk-teste-purpose
&amp;gt; df -h
Sist. Arq.                          Tam. Usado Disp. Uso% Montado em
udev                                2,0G     0  2,0G   0% /dev
tmpfs                               393M   26M  368M   7% /run
/dev/mapper/vg--root-lv--root        14G  2,9G   11G  22% /
tmpfs                               2,0G     0  2,0G   0% /dev/shm
tmpfs                               5,0M     0  5,0M   0% /run/lock
tmpfs                               2,0G     0  2,0G   0% /sys/fs/cgroup
/dev/mapper/storageGW-lv_volHomeGW  493G  398G   70G  86% /mnt/storageGW
tmpfs                               393M     0  393M   0% /run/user/0
/dev/mapper/stg_gtw-lv_disk_teste    20G   45M   19G   1% /mnt/disk-teste-purpose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, it’s done! You can now store your files or any data (a database and so on) on the partition you created, and it is automatically backed up to AWS with EBS snapshots.&lt;/p&gt;

&lt;p&gt;One last tip: if you want the initiator to start on boot, you need to make the node start automatically by editing the following file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim /etc/iscsi/nodes/iqn.1997-05.com.amazon:disk-test-purpose/[GATEWAY_IP],3260,1/default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set the option node.startup to automatic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
node.startup = automatic
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Obviously, you also have to set up fstab to mount the file system on boot and enable the iSCSI service to start on boot.&lt;/p&gt;
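&lt;p&gt;A hedged example of what the fstab entry might look like for this volume; the _netdev option is important because it defers mounting until the network (and thus the iSCSI session) is available:&lt;/p&gt;

```shell
# Candidate /etc/fstab line for the iSCSI-backed logical volume. Without
# _netdev, the mount would be attempted before the network is up.
FSTAB_LINE="/dev/stg_gtw/lv_disk_teste /mnt/disk-teste-purpose ext4 _netdev 0 0"
echo "$FSTAB_LINE"   # review it, then append to /etc/fstab as root
```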

&lt;p&gt;Now you can create a snapshot schedule, so your EBS snapshots are created automatically with the period and retention you specify.&lt;/p&gt;
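&lt;p&gt;As a hedged sketch, the schedule can also be set per volume through the Storage Gateway API’s update-snapshot-schedule call; the volume ARN below is a placeholder, and the script only prints the command:&lt;/p&gt;

```shell
# Placeholder ARN; find yours with: aws storagegateway list-volumes
VOLUME_ARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-XXXX/volume/vol-XXXX"

# Take a snapshot every 24 hours, starting at hour 2 (02:00, gateway time).
echo aws storagegateway update-snapshot-schedule \
     --volume-arn "$VOLUME_ARN" \
     --start-at 2 \
     --recurrence-in-hours 24
```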

&lt;p&gt;I hope this post was useful for you.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>linux</category>
    </item>
    <item>
      <title>Migrating a static web site from Amplify to S3 and Cloud Front</title>
      <dc:creator>Filipe Motta</dc:creator>
      <pubDate>Wed, 02 Jun 2021 18:50:34 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrating-a-static-web-site-from-amplify-to-s3-and-cloud-front-942</link>
      <guid>https://dev.to/aws-builders/migrating-a-static-web-site-from-amplify-to-s3-and-cloud-front-942</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;In this post I am going to show you how I migrated my static website, built with the Hugo framework and hosted on AWS Amplify, to S3 for file storage and CloudFront for content distribution with a valid certificate.&lt;/p&gt;

&lt;p&gt;Since AWS Amplify is fully integrated with the Hugo framework, the build and deploy process is almost automatic: you only need to specify the GitHub branch, grant the right permissions, and set the DNS properly, and the deployment is done.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Bucket
&lt;/h3&gt;

&lt;p&gt;First of all, you’ll need to create an S3 bucket. To host a static website in S3 with a custom domain, the bucket name needs to match your custom domain, so I created two buckets as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a0n0slmdhqnu11mbh4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a0n0slmdhqnu11mbh4o.png" alt="buckets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you need to tell AWS that this bucket will be used to host a static web page, so it is necessary to set it up. In the “Properties” tab of the bucket, enable the option called “Static website hosting”, as shown in the image below. You also need to attach a bucket policy that grants public read access to your objects:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u3rz02eruq1shbokj2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u3rz02eruq1shbokj2o.png" alt="Static Web Site Enabled"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Id": "XXXXXXXXXXXX",
    "Statement": [
        {
            "Sid": "Stmt1XXXXXXX026159",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::filipemotta.me/*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Id and Sid are hidden in my example. You can use the “Generate Policy” option to generate it. The important fields are Effect, Principal, Action, and Resource. The Action needs to be set to "s3:GetObject", and in the Resource, do not forget to add "/*" at the end.&lt;/p&gt;
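&lt;p&gt;If you prefer the CLI to the console, the same policy can be attached with the aws s3api put-bucket-policy call. A minimal sketch, assuming the JSON document shown above was saved locally as policy.json; the script only prints the command (drop the echo and run it with valid AWS credentials to apply):&lt;/p&gt;

```shell
BUCKET="filipemotta.me"
POLICY_FILE="policy.json"   # the bucket-policy JSON shown above, saved locally

# Print the call; drop the echo to attach the policy for real.
echo aws s3api put-bucket-policy --bucket "$BUCKET" --policy "file://$POLICY_FILE"
```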

&lt;h3&gt;
  
  
  Migrate from GitHub Actions
&lt;/h3&gt;

&lt;p&gt;That is all you’ll need to set up in your bucket. Since my GitHub repository was previously integrated with AWS Amplify for continuous integration and deployment, I needed a way to upload the files to the S3 bucket and rebuild the facilities I had with Amplify. To achieve this, I used GitHub Actions to automatically upload files to the S3 bucket whenever I push code to GitHub. So, I created a file under ".github/workflows" with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI/CD Upload Website

on:
  push:
    branches:
    - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-west-1'   # default us-east-1 - optional
        SOURCE_DIR: 'public'      # defaults  entire repository - optional
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every push to the main branch will upload files to my S3 bucket (see the push and branches options). The first action checks out the code to an Ubuntu VM, and the second action uploads and syncs the files to my S3 bucket. Remember that it is necessary to set up your repository’s secrets with your AWS credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcadg35rwy482aaav4r05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcadg35rwy482aaav4r05.png" alt="AWS Secrets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have set this up and pushed the code to GitHub, the files should now be in the S3 bucket. At this step, it is a good idea to test website access using the S3 bucket’s website endpoint. You can get the link in the Properties tab of the bucket.&lt;/p&gt;
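&lt;p&gt;A quick way to check the endpoint from a terminal is a HEAD request with curl. A sketch, assuming the us-west-1 website-endpoint hostname format; verify the exact hostname in the bucket’s Properties tab, since endpoint formats vary by region, and note that the script only prints the command:&lt;/p&gt;

```shell
BUCKET="filipemotta.me"
REGION="us-west-1"
# S3 website endpoints are HTTP-only and region-specific.
ENDPOINT="http://$BUCKET.s3-website-$REGION.amazonaws.com"

echo curl -I "$ENDPOINT"   # drop the echo to send the HEAD request
```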

&lt;h3&gt;
  
  
  Use CloudFront to distribute content
&lt;/h3&gt;

&lt;p&gt;Once you have confirmed access, you have two options. The first is to use a custom domain pointing directly to the S3 bucket through your DNS, but this strategy only accepts HTTP requests. The second is to use a custom domain with a valid certificate and CloudFront to distribute the content. I chose the second one because I wanted to use HTTPS on my website.&lt;/p&gt;

&lt;p&gt;So, the first thing I had to do was create a valid certificate through AWS Certificate Manager to include in the CloudFront settings. To do it, I requested a certificate in AWS Certificate Manager.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg6lipkrctyh0ya9bqmy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg6lipkrctyh0ya9bqmy.png" alt="Certificate"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One note: when I wrote this post, the only region where you can create a certificate for use with CloudFront was N. Virginia (us-east-1). Keep this in mind.&lt;/p&gt;

&lt;p&gt;The next step is to set up the CloudFront distribution to use the certificate created earlier and the S3 bucket that is now integrated with your GitHub repository.&lt;/p&gt;

&lt;p&gt;Now you’ll need to use the “Create a Distribution” option in CloudFront. The first important setting is to select the S3 bucket in the “Origin Domain Name” field. Another important setting is to redirect HTTP requests to HTTPS, as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xe1amgegib7z05ug0ox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xe1amgegib7z05ug0ox.png" alt="Cloud Front"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now select the “Price Class” and “Alternate Domain Names” according to your needs. Another important option is the custom SSL certificate: the certificate you created in the Virginia region should appear here, or you can upload your own. In my case I chose the available ACM (AWS Certificate Manager) certificate. See below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1r7vc1ynnuqdkanb42hg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1r7vc1ynnuqdkanb42hg.png" alt="Cloud Front"&gt;&lt;/a&gt;&lt;/p&gt;
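&lt;p&gt;If you prefer the CLI, the same console settings map to fields of the distribution config passed to &lt;code&gt;aws cloudfront create-distribution&lt;/code&gt;. This is a partial, illustrative fragment (the ARN, origin ID, and domain are placeholders), not a complete config:&lt;/p&gt;

```json
{
  "Aliases": { "Quantity": 1, "Items": ["www.example.me"] },
  "DefaultCacheBehavior": {
    "TargetOriginId": "my-s3-origin",
    "ViewerProtocolPolicy": "redirect-to-https"
  },
  "ViewerCertificate": {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example-id",
    "SSLSupportMethod": "sni-only",
    "MinimumProtocolVersion": "TLSv1.2_2021"
  }
}
```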

&lt;h3&gt;
  
  
  Setup the DNS to configure your custom Domain
&lt;/h3&gt;

&lt;p&gt;The last step is to set up your DNS to use your custom domain. To do this, add a CNAME record pointing to the CloudFront distribution’s domain name. In my case I pointed &lt;a href="http://www.filipemotta.me" rel="noopener noreferrer"&gt;www.filipemotta.me&lt;/a&gt; at this value. Since I also wanted the root domain (filipemotta.me) to work, and, as is well known, a CNAME record cannot be created at the zone apex, I created an Alias record on the root domain pointing to the CloudFront domain name.&lt;/p&gt;
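&lt;p&gt;With Route 53, that apex Alias record can be expressed as a change batch for &lt;code&gt;aws route53 change-resource-record-sets&lt;/code&gt;. A sketch with placeholder domain and distribution values (&lt;code&gt;Z2FDTNDATAQYW2&lt;/code&gt; is the fixed hosted zone ID AWS uses for CloudFront alias targets):&lt;/p&gt;

```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.me.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d111111abcdef8.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```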

&lt;p&gt;When I finished the setup, I had a small problem accessing subdirectories of my domain (access denied). To solve it, I had to edit the CloudFront settings: in the “Origin Domain Name and Path” option, I set the full S3 website endpoint instead of the value selected when I created the distribution. You can find this endpoint in the bucket’s Properties tab in S3. After that, I could access all my subdirectories over both HTTPS and HTTP, using S3 for the static website and CloudFront for distribution.&lt;/p&gt;

&lt;p&gt;See the discussion on Stack Overflow about this issue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;According to the discussion on AWS Developer Forums: Cloudfront domain redirects to S3 Origin URL, it takes time for DNS records to be created and propagated for newly created S3 buckets. The issue is not visible for buckets created in US East (N. Virginia) region, because this region is the default one (fallback).

Each S3 bucket has two domain names, one global and one regional, i.e:

global — {bucket-name}.s3.amazonaws.com
regional — {bucket-name}.s3.{region}.amazonaws.com
If you configure your CloudFront distribution to use the global domain name, you will probably encounter this issue, due to the fact that DNS configuration takes time.

However, you could use the regional domain name in your origin configuration to escape this DNS issue in the first place.

Source: https://stackoverflow.com/questions/38735306/aws-cloudfront-redirecting-to-s3-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
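&lt;p&gt;The difference between the two domain names quoted above can be sketched as follows; the bucket name and region are placeholders for your own:&lt;/p&gt;

```shell
# Illustrative values; replace with your own bucket name and region.
BUCKET="my-site"
REGION="us-east-2"

# Global endpoint: may hit the temporary redirect described above while
# DNS for a newly created bucket propagates.
GLOBAL="${BUCKET}.s3.amazonaws.com"

# Regional endpoint: avoids that redirect, so prefer it as the
# CloudFront origin domain.
REGIONAL="${BUCKET}.s3.${REGION}.amazonaws.com"

echo "$GLOBAL"
echo "$REGIONAL"
```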



&lt;p&gt;Now I also have full continuous integration and deployment with GitHub Actions whenever I push my code, similar to what I had with AWS Amplify.&lt;/p&gt;
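&lt;p&gt;A minimal workflow for this kind of deployment could look like the sketch below; the workflow name, branch, output directory, secrets, and bucket name are assumptions for illustration, not the exact setup used here:&lt;/p&gt;

```yaml
# Hypothetical GitHub Actions workflow: sync the built site to S3 on push.
name: deploy-site
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Sync site to the S3 bucket
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
        run: aws s3 sync ./public s3://my-site --delete
```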

</description>
      <category>aws</category>
      <category>github</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
