<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ivy Jeptoo</title>
    <description>The latest articles on DEV Community by Ivy Jeptoo (@jeptoo).</description>
    <link>https://dev.to/jeptoo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F908161%2F15391d1a-b085-45cf-911d-38ab7eae7436.png</url>
      <title>DEV Community: Ivy Jeptoo</title>
      <link>https://dev.to/jeptoo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jeptoo"/>
    <language>en</language>
    <item>
      <title>How to Create AWS EKS Cluster</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Thu, 14 Sep 2023 08:19:08 +0000</pubDate>
      <link>https://dev.to/jeptoo/how-to-create-aws-eks-cluster-1fil</link>
      <guid>https://dev.to/jeptoo/how-to-create-aws-eks-cluster-1fil</guid>
      <description>&lt;p&gt;Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service provided by Amazon Web Services (AWS). It simplifies the deployment, management, and scaling of containerized applications using Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of EKS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;EKS is &lt;strong&gt;fully managed&lt;/strong&gt;, so AWS handles control plane maintenance, scaling, and updates, allowing you to focus on your applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It offers &lt;strong&gt;high availability&lt;/strong&gt; across multiple AWS Availability Zones, ensuring uptime and fault tolerance for your Kubernetes clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is &lt;strong&gt;secure&lt;/strong&gt;: it integrates with AWS IAM for authentication and authorization, and you can apply IAM policies for fine-grained access control.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EKS seamlessly &lt;strong&gt;integrates with AWS services&lt;/strong&gt;, simplifying application deployments and operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EKS is &lt;strong&gt;user-friendly&lt;/strong&gt;, compatible with standard Kubernetes tools, and simplifies Kubernetes cluster management.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An Elastic Kubernetes Service (EKS) cluster can be created in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Web Console&lt;/li&gt;
&lt;li&gt;AWS CLI tool&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  I &lt;strong&gt;Web Console&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;i) Ensure you have a default VPC. This will automatically create a /20 default subnet in each Availability Zone. If you don't have one, follow &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#create-default-vpc" rel="noopener noreferrer"&gt;these instructions&lt;/a&gt; to create one.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A default VPC will look like so:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few4jnl76qfujy9i44nzk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few4jnl76qfujy9i44nzk.png" alt="pre"&gt;&lt;/a&gt;&lt;/p&gt;
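&lt;p&gt;If you prefer the terminal, you can also check for (and create) a default VPC with the AWS CLI. This is a sketch; it assumes the AWS CLI is installed and configured with credentials.&lt;/p&gt;

```shell
# List the default VPC; empty output means this region has none
aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query "Vpcs[].VpcId" --output text

# Create a default VPC (with a /20 subnet per Availability Zone) if none exists
aws ec2 create-default-vpc
```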

&lt;p&gt;ii) Create an IAM role that your cluster and the node group will assume. A role is a set of permissions to be assigned to an entity.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow these steps to create a new role:

&lt;ul&gt;
&lt;li&gt;Click on the Create role button to start the wizard.&lt;/li&gt;
&lt;li&gt;Choose AWS service as the trusted entity.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtn0uac0rhimflewg4jr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtn0uac0rhimflewg4jr.png" alt="iam"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the EKS to see EKS use cases. (See the snapshot below)&lt;/li&gt;
&lt;li&gt;Choose EKS - Cluster. It will allow access to other AWS service resources that are required to operate clusters managed by EKS. Click Next.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2pgd79zykvszgf4wzv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2pgd79zykvszgf4wzv0.png" alt="Iam"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The needed policy, AmazonEKSClusterPolicy, will be selected. This policy provides Kubernetes the permissions it requires to manage resources on your behalf.&lt;/li&gt;
&lt;li&gt;Click Next, and ignore the Tags.&lt;/li&gt;
&lt;li&gt;Click Next, and name the role.&lt;/li&gt;
&lt;/ul&gt;
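&lt;p&gt;The same cluster role can be created from the AWS CLI. A minimal sketch, assuming configured credentials; the role name &lt;code&gt;eksClusterRole&lt;/code&gt; is just an example:&lt;/p&gt;

```shell
# Trust policy that lets the EKS service assume the role
printf '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}' > eks-cluster-trust.json

# Create the role and attach the managed AmazonEKSClusterPolicy
aws iam create-role --role-name eksClusterRole --assume-role-policy-document file://eks-cluster-trust.json
aws iam attach-role-policy --role-name eksClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```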

&lt;p&gt;iii) Create an IAM role for the worker nodes. This will give the kubelet running on the worker nodes permission to make calls to other AWS APIs on your behalf. The steps are the same as above, except that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Use case step, select EC2 instead of EKS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwp26ukgjmomqmkqz2or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwp26ukgjmomqmkqz2or.png" alt="iam"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the attach-policy step, choose the following policies:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AmazonEKSWorkerNodePolicy&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AmazonEC2ContainerRegistryReadOnly&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AmazonEKS_CNI_Policy&lt;/code&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjas5m6w2l8nc9owq99dl.png" alt="policy"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
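&lt;p&gt;The worker-node role can likewise be scripted. A sketch under the same assumptions; &lt;code&gt;eksNodeRole&lt;/code&gt; is a hypothetical name:&lt;/p&gt;

```shell
# Trust policy that lets EC2 instances (the worker nodes) assume the role
printf '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}' > eks-node-trust.json

aws iam create-role --role-name eksNodeRole --assume-role-policy-document file://eks-node-trust.json

# Attach the three managed policies listed above
for policy in AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy; do
  aws iam attach-role-policy --role-name eksNodeRole --policy-arn "arn:aws:iam::aws:policy/$policy"
done
```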

&lt;p&gt;iv) Create an SSH key pair that we'll use to log in to the EC2 instances. The public key is placed automatically on the EC2 instances, whereas you use the private key instead of a password to access your instances securely.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To create one, go to EC2 service → Network &amp;amp; Security → Key Pairs. 

&lt;ul&gt;
&lt;li&gt;Click on Create key pair.&lt;/li&gt;
&lt;li&gt;Name your key pair, then choose a format. (The .pem format is used by Mac/Linux users, and the .ppk format is used by Windows users.)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02robs4l7rpz8i89rz0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02robs4l7rpz8i89rz0q.png" alt="ssh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The private key file will be downloaded locally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxam02t0huwkfyh1ptum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxam02t0huwkfyh1ptum.png" alt="kp"&gt;&lt;/a&gt;&lt;/p&gt;
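&lt;p&gt;The key pair can also be created from the AWS CLI. A sketch; the key name &lt;code&gt;my-eks-kp&lt;/code&gt; is an example:&lt;/p&gt;

```shell
# Create the key pair and save the private key locally
aws ec2 create-key-pair --key-name my-eks-kp --query "KeyMaterial" --output text > my-eks-kp.pem

# Restrict permissions so SSH accepts the key file
chmod 400 my-eks-kp.pem
```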

&lt;h3&gt;
  
  
  Create EKS Cluster
&lt;/h3&gt;

&lt;p&gt;An EKS cluster consists of:&lt;br&gt;
&lt;strong&gt;Control plane&lt;/strong&gt;, whose nodes run the Kubernetes software, such as the Kubernetes API server and etcd, in AWS-owned accounts.&lt;br&gt;
&lt;strong&gt;Data plane&lt;/strong&gt;, made up of the worker nodes, which run in customer accounts.&lt;/p&gt;

&lt;p&gt;Create a Control Plane&lt;br&gt;
&lt;strong&gt;Step 1&lt;/strong&gt;&lt;br&gt;
Under EKS Service → Amazon EKS → Clusters, click on Create cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give your cluster a name, choose a Kubernetes version, and select the IAM role we created earlier. 
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsx6z7ud52ld8kkl3i9g.png" alt="cluster"&gt;
&lt;strong&gt;Step 2&lt;/strong&gt;
Choose the default VPC, subnets, and security group in your account. Mark the cluster endpoint as public.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zuosyi8wf83bszwq1ix.png" alt="vpc"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;&lt;br&gt;
Accept the defaults for the rest of the steps and create the cluster.&lt;/p&gt;
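&lt;p&gt;The same control plane can be created from the AWS CLI. A sketch; the account ID, subnet IDs, and Kubernetes version are placeholders to replace with your own values:&lt;/p&gt;

```shell
# Create the cluster control plane with the role from the prerequisites
aws eks create-cluster --name myCluster --kubernetes-version 1.27 \
  --role-arn arn:aws:iam::111122223333:role/eksClusterRole \
  --resources-vpc-config subnetIds=subnet-0abc,subnet-0def

# Block until the cluster reports ACTIVE
aws eks wait cluster-active --name myCluster
```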
&lt;h3&gt;
  
  
  Create a Node Group
&lt;/h3&gt;

&lt;p&gt;Node groups are worker nodes (VMs) used to run the pods that your cluster will be serving. We'll create a node group and attach it to the cluster.&lt;br&gt;
&lt;strong&gt;Step 1&lt;/strong&gt;&lt;br&gt;
Once the cluster that we created earlier is Active, click on its name for more details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3u1aajqrgs3uqrcnk0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3u1aajqrgs3uqrcnk0l.png" alt="nodeG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;&lt;br&gt;
Click on Compute under the new cluster, then click on Add Node Group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu0nklqdyz0fjtlgjsoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu0nklqdyz0fjtlgjsoc.png" alt="node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;&lt;br&gt;
Give it a name, then attach the IAM node role we created earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnzpc4ercvt4nu8zevwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnzpc4ercvt4nu8zevwe.png" alt="name"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;&lt;br&gt;
Under Node group compute and Scaling configuration, choose the OS, hardware configuration, and worker node count.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
    &lt;th&gt;Field&lt;/th&gt;
    &lt;th&gt;Value&lt;/th&gt;
    &lt;th&gt;Purpose&lt;/th&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;AMI type&lt;/td&gt;
    &lt;td&gt;Amazon Linux 2 (AL2_x86_64)&lt;/td&gt;
    &lt;td&gt;OS&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Capacity type&lt;/td&gt;
    &lt;td&gt;On-Demand&lt;/td&gt;
    &lt;td&gt;Instance purchasing option&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Instance types&lt;/td&gt;
    &lt;td&gt;t3.micro&lt;/td&gt;
    &lt;td&gt;2 vCPU, 1 GiB memory&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Disk size&lt;/td&gt;
    &lt;td&gt;20 GiB&lt;/td&gt;
    &lt;td&gt;---&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Scaling configuration&lt;/td&gt;
    &lt;td&gt;&lt;/td&gt;
    &lt;td&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Min size&lt;/td&gt;
    &lt;td&gt;2&lt;/td&gt;
    &lt;td&gt;Min number of nodes for scaling in.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Max size&lt;/td&gt;
    &lt;td&gt;2&lt;/td&gt;
    &lt;td&gt;Max number of nodes for scaling out.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Desired size&lt;/td&gt;
    &lt;td&gt;2&lt;/td&gt;
    &lt;td&gt;Initial count&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskdqdmaheztq07gpy1n9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskdqdmaheztq07gpy1n9.png" alt="table"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 5&lt;/strong&gt;&lt;br&gt;
Choose the same subnets we used while creating the cluster, and choose the SSH key pair we created earlier. Allow remote access from anywhere on the internet.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptna5ppjcvvt92x4uulo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptna5ppjcvvt92x4uulo.png" alt="subnet"&gt;&lt;/a&gt;&lt;/p&gt;
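&lt;p&gt;The node group settings from the table above can also be expressed as a single AWS CLI call. A sketch; the ARN, subnet IDs, and key name are placeholders:&lt;/p&gt;

```shell
# Create a managed node group matching the console settings above
aws eks create-nodegroup --cluster-name myCluster --nodegroup-name myNodes \
  --node-role arn:aws:iam::111122223333:role/eksNodeRole \
  --subnets subnet-0abc subnet-0def \
  --instance-types t3.micro --disk-size 20 \
  --scaling-config minSize=2,maxSize=2,desiredSize=2 \
  --remote-access ec2SshKey=my-eks-kp
```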
&lt;h3&gt;
  
  
  Clean Up
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Delete the Node Group.&lt;/li&gt;
&lt;li&gt;Delete the cluster.&lt;/li&gt;
&lt;li&gt;Delete the custom IAM roles you have created in this exercise.&lt;/li&gt;
&lt;/ul&gt;
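&lt;p&gt;From the CLI, the clean-up order matters: the node group must be fully deleted before the cluster can be. A sketch using the example names from above:&lt;/p&gt;

```shell
# Delete the node group first and wait for it to disappear
aws eks delete-nodegroup --cluster-name myCluster --nodegroup-name myNodes
aws eks wait nodegroup-deleted --cluster-name myCluster --nodegroup-name myNodes

# Then delete the cluster itself
aws eks delete-cluster --name myCluster
```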
&lt;h2&gt;
  
  
  II &lt;strong&gt;AWS CLI Tool&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Creating an EKS cluster with the plain AWS CLI involves many separate resources, which is why the &lt;code&gt;eksctl&lt;/code&gt; CLI is used to simplify cluster creation. eksctl uses AWS CloudFormation internally to create clusters on AWS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;AWS CloudFormation is an AWS service for creating, managing, and configuring ANY resource on the AWS cloud using a YAML/JSON script. In the script file, you can define the properties of the resource you want to create in the cloud.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a simple cluster, eksctl does not need a config file, but for a more complex one you will need to write a minimal YAML config file.&lt;/p&gt;
&lt;h3&gt;
  
  
  eksctl Installation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Linux&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Windows&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install Chocolatey. Refer to the https://chocolatey.org/install  for detailed steps
Set-ExecutionPolicy AllSigned 
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
# Exit and re-run Powershell as an Admin
chocolatey install eksctl
# Verify the installation
eksctl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Mac OS&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check Homebrew 
brew --version
# If you do not have Homebrew installed - https://brew.sh/ 
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install eksctl
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you face any error due to ownership permission, you can change the ownership of those directories to your user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chown -R $(whoami) /usr/local/&amp;lt;directory_name&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Create a basic cluster
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Once you have installed eksctl, create a basic cluster:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;eksctl will create the cluster with the following defaults:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An auto-generated name&lt;/li&gt;
&lt;li&gt;Two m5.large worker nodes. Recall that the worker nodes are the virtual machines, and the m5.large type means that each VM will have 2 vCPUs, 8 GiB memory, and up to 10 Gbps network bandwidth.&lt;/li&gt;
&lt;li&gt;The Amazon Linux AMI as the underlying machine image&lt;/li&gt;
&lt;li&gt;Your default region&lt;/li&gt;
&lt;li&gt;A dedicated VPC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can override these defaults in a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster --name myCluster --nodes=4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
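&lt;p&gt;eksctl writes the new cluster's credentials into your kubeconfig, so you can verify the worker nodes right away (assuming kubectl is installed):&lt;/p&gt;

```shell
# Refresh kubeconfig for the cluster (eksctl usually does this for you)
aws eks update-kubeconfig --name myCluster

# The worker nodes should show up as Ready
kubectl get nodes
```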



&lt;h4&gt;
  
  
  Create an advanced cluster
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;You will need to write the configuration in a YAML file separately, then run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster --config-file=&amp;lt;path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
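&lt;p&gt;If you are unsure what the YAML should look like, recent eksctl versions can generate it for you: the &lt;code&gt;--dry-run&lt;/code&gt; flag prints the ClusterConfig that would be used without creating anything.&lt;/p&gt;

```shell
# Generate a config file from CLI flags instead of writing it by hand
eksctl create cluster --name myCluster --nodes 4 --dry-run | tee cluster.yaml

# Edit cluster.yaml as needed, then create the cluster from it
eksctl create cluster --config-file=cluster.yaml
```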



&lt;h4&gt;
  
  
  List the details
&lt;/h4&gt;

&lt;p&gt;This lists the details of a specific cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl get cluster [--name=&amp;lt;name&amp;gt;][--region=&amp;lt;region&amp;gt;]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Delete Cluster
&lt;/h4&gt;

&lt;p&gt;This will delete a cluster and all the resources associated with it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl delete cluster --name=&amp;lt;name&amp;gt; [--region=&amp;lt;region&amp;gt;]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>eks</category>
      <category>microservices</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How to deploy AWS Serverless Application in Cloud9</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Tue, 12 Sep 2023 09:46:39 +0000</pubDate>
      <link>https://dev.to/jeptoo/how-to-deploy-aws-serverless-application-in-cloud9-5265</link>
      <guid>https://dev.to/jeptoo/how-to-deploy-aws-serverless-application-in-cloud9-5265</guid>
      <description>&lt;p&gt;Howdy! &lt;/p&gt;

&lt;p&gt;In this tutorial we are going to initialize, build, and deploy a simple Hello World application using the AWS Serverless Application Model (SAM). &lt;br&gt;
The Hello World application contains the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt; – Function that processes the HTTP API GET request and returns a hello world message.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Identity and Access Management (IAM) role&lt;/strong&gt; - Provisions permissions for the services to securely interact.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon API Gateway&lt;/strong&gt; - API endpoint that you will use to invoke your function.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Before installing the AWS SAM CLI, ensure that you:

&lt;ul&gt;
&lt;li&gt;Have created an AWS account, an IAM identity, IAM credentials, and an access key pair.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Install AWS CLI&lt;/li&gt;
&lt;li&gt;Use AWS CLI to configure AWS credentials&lt;/li&gt;
&lt;/ul&gt;
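&lt;p&gt;Configuring credentials is a one-off step with the AWS CLI, and &lt;code&gt;aws sts get-caller-identity&lt;/code&gt; is a quick way to confirm they work:&lt;/p&gt;

```shell
# Interactive prompt for access key, secret key, region, and output format
aws configure

# Verify the credentials resolve to your account
aws sts get-caller-identity
```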
&lt;h2&gt;
  
  
  Initialize Hello World Application
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We are going to use the AWS SAM CLI to create the application on our Cloud9.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;i). Run the command below in your desired directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ii). The prompts will guide you through initializing a new application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Select AWS Quick Start Templates to choose a starting template.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the Hello World Example template and download it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the Python3.8 runtime and image package type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Opt out of AWS X-Ray tracing. See What is AWS X-Ray? in the &lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html" rel="noopener noreferrer"&gt;AWS X-Ray Developer Guide&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Opt out of monitoring with Amazon CloudWatch Application Insights. Learn about &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-application-insights.html" rel="noopener noreferrer"&gt;Amazon CloudWatch Application.&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name your application &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use the image below for reference&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffks113vxetadt5guh30h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffks113vxetadt5guh30h.png" alt="screenshot1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;cont:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqiews0h41pussybpbbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqiews0h41pussybpbbd.png" alt="screenshot2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;iii). The AWS SAM CLI then downloads the template and creates the application directory.&lt;/p&gt;

&lt;p&gt;iv). Navigate to the newly created directory. Now, let's break down the files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;hello_world/app.py&lt;/strong&gt; – Contains your Lambda function code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hello_world/requirements.txt&lt;/strong&gt; – Contains any Python dependencies that your Lambda function requires.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;samconfig.toml&lt;/strong&gt; – Configuration file for your application that stores default parameters used by the AWS SAM CLI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;template.yaml&lt;/strong&gt; – The AWS SAM template that contains your application infrastructure code.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Build the Application
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Building the app will dockerize the Lambda function in the Cloud9 environment. Use the command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After building, the AWS SAM CLI creates a &lt;code&gt;.aws-sam&lt;/code&gt; directory, where it organizes your function dependencies, project code, and build files.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.aws-sam
├── build
│   ├── HelloWorldFunction
│   │   ├── __init__.py
│   │   ├── app.py
│   │   └── requirements.txt
│   └── template.yaml
└── build.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what each file does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;build/HelloWorldFunction&lt;/strong&gt; – Contains your Lambda function code and dependencies. The AWS SAM CLI creates a directory for each function in your application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;build/template.yaml&lt;/strong&gt; – Contains a copy of your AWS SAM template that is referenced by AWS CloudFormation at deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;build.toml&lt;/strong&gt; – Configuration file that stores default parameter values referenced by the AWS SAM CLI when building and deploying your application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5siirb8am2jy932d52um.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5siirb8am2jy932d52um.png" alt="build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy the application to an ECR image repository
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Before deploying, ensure that you have configured your AWS credentials.&lt;br&gt;
The application files will be uploaded to S3, and the SAM template is transformed into a CloudFormation template. The template is then uploaded to the AWS CloudFormation service to provision your AWS resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To deploy, run the command below:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam deploy --guided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Be sure to configure the deployment prompts. Here is an example:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qykbl9yb2nm5am6dxwc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qykbl9yb2nm5am6dxwc.png" alt="deploy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Confirm the deployment changes&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r88fgmaf4u27eqgblw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r88fgmaf4u27eqgblw4.png" alt="deployment"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After a successful deployment, you get an API Gateway endpoint URL that you can &lt;code&gt;curl&lt;/code&gt; or paste in a browser to see the function output. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Alternatively&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can use the &lt;code&gt;sam list endpoints --output json&lt;/code&gt; command to get endpoint information
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3zgk0z1zgu8810q7vuo.png" alt="Image description"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ensure that you are in the project directory before running the testing command.
Run the command below to test the function locally:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam local invoke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The expected output is as below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjn2q31gquq6be476gre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjn2q31gquq6be476gre.png" alt="SAM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since &lt;code&gt;sam local invoke&lt;/code&gt; does not test the API, we are going to use the command below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam local start-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpzkstgsvibavbcx0uon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpzkstgsvibavbcx0uon.png" alt="api test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check what we are calling, you can open a new terminal, then run curl against the endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://127.0.0.1:3000/hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The call succeeds with the payload below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdxjwf99woyxb3yz1h29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdxjwf99woyxb3yz1h29.png" alt="curl api"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf6vax3l2f4c6r1mtz58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf6vax3l2f4c6r1mtz58.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those are the basics of getting started with SAM!&lt;br&gt;
Happy Coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>EC2 Instance with an Admin Role</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Tue, 18 Jul 2023 13:08:22 +0000</pubDate>
      <link>https://dev.to/jeptoo/ec2-instance-with-an-admin-role-2072</link>
      <guid>https://dev.to/jeptoo/ec2-instance-with-an-admin-role-2072</guid>
      <description>&lt;p&gt;Howdy!!! &lt;br&gt;
Some time back we learnt how to create and EC2 Instance and we even connected to it, don't remember? Check it out &lt;a href="https://dev.to/jeptoo/how-to-create-ec2-instance-ubuntu-2204-on-aws-and-connect-via-ssh-using-pem-492o"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Today we are going to create an Instance based on the Amazon Linux AMI and connect to it via SSH. Using Security Groups, you will ensure that access to the instance is limited to your IP address only.&lt;/li&gt;
&lt;li&gt;The AWS CLI comes pre-installed on Amazon Linux, so the instance only needs permissions assigned. Once the instance is up and running, create an IAM role with admin access in your account, then attach the role to your EC2 instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Objectives&lt;/li&gt;
&lt;li&gt;Create a default Virtual Private Cloud&lt;/li&gt;
&lt;li&gt;Launch an EC2 instance&lt;/li&gt;
&lt;li&gt;Create an IAM Role&lt;/li&gt;
&lt;li&gt;Attach the Role to the EC2 Instance&lt;/li&gt;
&lt;li&gt;Connect to your EC2 instance&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;p&gt;By the end of this article, you'll be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch a secure EC2 instance.&lt;/li&gt;
&lt;li&gt;Create an IAM role with admin privileges.&lt;/li&gt;
&lt;li&gt;Attach the IAM role to your Instance.&lt;/li&gt;
&lt;li&gt;Connect to your EC2 instance via SSH.&lt;/li&gt;
&lt;li&gt;Use the AWS CLI tool on the Instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create a default Virtual Private Cloud
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html"&gt;VPC&lt;/a&gt; is a private cloud computing environment contained  within a public cloud and once can launch AWS resources in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check in your account if you already have a &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html"&gt;default VPC&lt;/a&gt; and if not, go to the VPC dashboard and create a default VPC.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZXikXYeg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j15pkcj57gk5u0rky0ge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZXikXYeg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j15pkcj57gk5u0rky0ge.png" alt="VPC" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Launch an EC2 instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I already have an article that covers &lt;a href="https://dev.to/jeptoo/how-to-create-ec2-instance-ubuntu-2204-on-aws-and-connect-via-ssh-using-pem-492o"&gt;launching EC2&lt;/a&gt; but just to touch briefly on the steps:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GZ9WfmtB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90zm7pzr0kpwsfjmqg5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GZ9WfmtB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90zm7pzr0kpwsfjmqg5d.png" alt="configurations" width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under the security settings, limit SSH access to your IP address only.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--INSiu4tZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r135izcxikmun108h37e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--INSiu4tZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r135izcxikmun108h37e.png" alt="access" width="790" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you do not have an SSH key pair, be sure to create and download a new one. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;N/B&lt;/strong&gt;&lt;/u&gt;&lt;br&gt;
&lt;strong&gt;This key pair will allow you to log in to your instance via SSH from your local machine. Save the key pair carefully, because the same private key cannot be re-generated.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once you have launched your Instance, verify that it is running successfully.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create an IAM Role
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html"&gt;Identity and Access Management&lt;/a&gt; is used to specify who and what can access services and resources in AWS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the IAM dashboard, select &lt;strong&gt;Roles&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Create role&lt;/strong&gt; button&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bBZnQPQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0m6de35tvg1s08bnas45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bBZnQPQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0m6de35tvg1s08bnas45.png" alt="create role" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;AWS service&lt;/strong&gt; as the trusted entity and &lt;strong&gt;EC2&lt;/strong&gt; as the use case. This allows the instance to which the role is attached to call AWS services on your behalf.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Tu6L9h9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lu55zvm2cezy0ix9rsgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Tu6L9h9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lu55zvm2cezy0ix9rsgo.png" alt="role1" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under permissions, search for &lt;strong&gt;AdministratorAccess&lt;/strong&gt; in the &lt;strong&gt;Filter policies&lt;/strong&gt; textbox and select it to apply to the role.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xwjkLawN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/et8vlxt6ou0qf7vlszn7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xwjkLawN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/et8vlxt6ou0qf7vlszn7.png" alt="permissions" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under the review section, give the new role a name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WUfsTJJM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/12uhvquxf4giab3r4c6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WUfsTJJM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/12uhvquxf4giab3r4c6u.png" alt="role name" width="800" height="765"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Attach the Role to the EC2 Instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;On the EC2 dashboard, check the running instances and select the checkbox next to the Instance we created earlier.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Actions&lt;/strong&gt; button, which opens a drop-down menu, then select &lt;strong&gt;Security&lt;/strong&gt; → &lt;strong&gt;Modify IAM role&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pyg_MZ7N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/79ng429c6qmuhpeciaq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pyg_MZ7N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/79ng429c6qmuhpeciaq0.png" alt="attach" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select and apply the newly created role to your Instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---sMxYs7_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rpgkvj5uv21hu89ehhk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---sMxYs7_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rpgkvj5uv21hu89ehhk6.png" alt="select role" width="800" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect to your EC2 instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We are going to connect to the EC2 instance using SSH. Under &lt;strong&gt;Actions&lt;/strong&gt;, click on &lt;strong&gt;Connect&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_U9Y0Hwr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mh4akhvhd65azu9ay86s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_U9Y0Hwr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mh4akhvhd65azu9ay86s.png" alt="connect" width="800" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow the SSH steps to connect to the Instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KEGM4lTm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u4ky87sq0t0fhf7n6kl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KEGM4lTm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u4ky87sq0t0fhf7n6kl3.png" alt="ssh" width="800" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After connecting to the instance, verify that the AWS CLI is installed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pa357rmK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7f1ijg0ndebedxgeyobx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pa357rmK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7f1ijg0ndebedxgeyobx.png" alt="terminal" width="734" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This is a practical method for having a well-configured, secure server that you can use for testing without worrying about credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Happy cloud adventures!&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Cloud Fundamentals</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Thu, 13 Jul 2023 17:00:55 +0000</pubDate>
      <link>https://dev.to/jeptoo/cloud-fundamentals-a-comprehensive-guide-49ng</link>
      <guid>https://dev.to/jeptoo/cloud-fundamentals-a-comprehensive-guide-49ng</guid>
      <description>&lt;p&gt;&lt;strong&gt;Cloud Computing&lt;/strong&gt; is the delivery of computing services (servers, storage, database, networking, software etc) over the internet using pay-as-you-go pricing. With this you can access technology services depending with your needs on cloud providers like Amazon Web Service, Google Cloud Platform, Microsoft Azure etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advantages of Cloud Computing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Effectiveness&lt;/strong&gt; - you pay only for the resources you use, which helps avoid over-building and over-provisioning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt; - top-notch security is provided through Virtual Private Clouds, encryption, and API keys, which help keep data secure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; - you can easily scale resources and storage up to meet your business demands without investing in physical infrastructure, and scale down when resources aren't being used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility&lt;/strong&gt; - it allows users to select the operating systems, languages, databases, and other services as per their requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;you can read more on the advantages &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html"&gt;here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Compute&lt;/li&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;Database&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Networking&lt;/li&gt;
&lt;li&gt;Messaging&lt;/li&gt;
&lt;li&gt;Management Services&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Overview of AWS Cloud Fundamentals
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS provides several service categories, which include Compute, Storage, Security, Networking, Messaging, Management Services, etc. Many more services are available, but we'll discuss the above.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Compute
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EC2&lt;/strong&gt; - provides secure, resizable compute capacity in the cloud based on user requirements. It can shrink or expand resources in accordance with the load.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Elastic Beanstalk&lt;/strong&gt; - deploys and scales web applications written in programming languages like Java, Python, and Ruby. It deals with capacity provisioning, load balancing, and auto scaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Lightsail&lt;/strong&gt; - enables Virtual Private Servers (VPS) to be launched and managed with ease. It automatically deploys and manages the compute, storage, and networking capabilities required to run your applications. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EKS (Elastic Kubernetes Service)&lt;/strong&gt; - allows you to run Kubernetes on the Amazon cloud environment without installing and operating your own control plane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt; - allows you to run functions in the cloud. It is a big cost saver as you pay only when your functions execute.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jwlNYTVC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7j30bugajw3l5evg66ur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jwlNYTVC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7j30bugajw3l5evg66ur.png" alt="AWS compute" width="535" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More on &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-overview/compute-services.html"&gt;compute services&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS S3 (Simple Storage Service)&lt;/strong&gt; - an object storage service that can store and retrieve data from anywhere: websites, mobile apps, IoT sensors, and so on. It is durable, provides comprehensive security, and offers flexibility in managing data.&lt;br&gt;
&lt;em&gt;Amazon Glacier&lt;/em&gt; - is used for data archiving and long-term backup. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EBS (Elastic Block Store)&lt;/strong&gt; - provides block storage volumes for Amazon EC2 instances. EBS is a reliable storage volume that can be attached to any running instance in the same Availability Zone. &lt;em&gt;I wrote an article on &lt;a href="https://dev.to/jeptoo/elastic-block-storeebs-1862"&gt;EBS &lt;/a&gt;&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Elastic File System&lt;/strong&gt; - provides elastic file storage, which can be used with AWS Cloud Services and resources that are on-premises. It is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XnHR_JpG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfia0znyl124h74i1evq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XnHR_JpG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfia0znyl124h74i1evq.png" alt="aws storage" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;More on &lt;a href="https://aws.amazon.com/ebs/?c=s&amp;amp;sec=srv"&gt;storage&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Database
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Amazon RDS (Relational Database Service)&lt;/strong&gt; - helps in the administration and management of relational databases. It frees us from managing the hardware and enables us to focus on the application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB&lt;/strong&gt; - is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data import and export tools.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Redshift&lt;/strong&gt; - allows users to analyze their data using SQL. It is a fast, fully managed data warehouse. It also allows users to run complex analytical queries against structured data using sophisticated query optimizations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RbkxQqR7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6z52ui25gz7mrlsk0asv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RbkxQqR7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6z52ui25gz7mrlsk0asv.png" alt="database" width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;More on &lt;a href="https://aws.amazon.com/products/databases/"&gt;database&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identity and Access Management (IAM)&lt;/strong&gt; - allows one to configure who can access an AWS account. It controls access to resources by authenticating and authorizing users, apps, and services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Web Access Control List&lt;/strong&gt; - monitors HTTP(S) requests to AWS resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Firewall Manager&lt;/strong&gt; - a firewall is a network security mechanism that monitors and controls incoming and outgoing traffic, while Firewall Manager allows configuration and management of firewall rules across accounts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Shield&lt;/strong&gt; - provides continuous DDoS(Distributed Denial of Service) attack detection and automatic mitigations. This safeguards applications running on AWS.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;More on &lt;a href="https://aws.amazon.com/products/security/"&gt;security&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Networking
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Elastic Load Balancing&lt;/strong&gt; - automatically distributes traffic across multiple targets, balances load between servers, and provides redundancy and performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Autoscaling&lt;/strong&gt; - automatically adjusts resource usage to ensure steady performance at the lowest cost, while &lt;em&gt;EC2 Auto Scaling&lt;/em&gt; monitors EC2 instances and automatically adds or removes instances based on conditions you define.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route 53&lt;/strong&gt; - is a cloud Domain Name System (DNS) with servers distributed around the globe; it routes end users to internet applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;More on &lt;a href="https://aws.amazon.com/products/networking/"&gt;networking&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Messaging
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SNS (Simple Notification Service)&lt;/strong&gt; - allows sending of notifications to the users of your application (large numbers of subscribers) through various protocols like email, SMS, and HTTP/S.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SQS (Simple Queue Service)&lt;/strong&gt; - allows you to integrate queuing functionality into your application. Queues can be Standard or FIFO (First In, First Out).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Queuing&lt;/strong&gt; is a data structure that holds requests (messages). Processing data through a queue improves scalability, performance, and user experience.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;More on &lt;a href="https://aws.amazon.com/messaging/"&gt;messaging&lt;/a&gt;&lt;/em&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Management Services
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CloudWatch&lt;/strong&gt; - monitors and manages AWS resources and applications by collecting data in the form of logs, metrics, and events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CloudFormation&lt;/strong&gt; - models infrastructure in text-file templates, allowing provisioning of AWS resources from the written scripts. &lt;em&gt;Infrastructure as code&lt;/em&gt; allows description and provisioning of all infrastructure resources in a cloud environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cloud fundamentals are essential to grasp the basics of cloud computing, a transformative technology for data storage, access, and processing. Understanding cloud fundamentals is crucial for leveraging scalability, flexibility, cost-efficiency, and collaboration benefits. It enables informed decision-making, selection of service models, and consideration of security and privacy. Additionally, it facilitates cloud-native application development, big data utilization, and effective disaster recovery planning. Mastering cloud fundamentals is vital for competitiveness in the digital landscape.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Elastic Block Store(EBS)</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Fri, 23 Jun 2023 14:42:46 +0000</pubDate>
      <link>https://dev.to/jeptoo/elastic-block-storeebs-1862</link>
      <guid>https://dev.to/jeptoo/elastic-block-storeebs-1862</guid>
      <description>&lt;p&gt;We previously learnt how to create an EC2 instance and connect to it. you can recap &lt;a href="https://dev.to/jeptoo/how-to-create-ec2-instance-ubuntu-2204-on-aws-and-connect-via-ssh-using-pem-492o"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While still on this topic, there are two types of storage in EC2:&lt;/p&gt;

&lt;p&gt;i) &lt;strong&gt;Instance store&lt;/strong&gt; - provides temporary block-level storage that is directly attached to the EC2 instance's host. It is ideal for temporary data, cache, or scratch space that doesn't need to persist beyond the lifespan of the instance.&lt;/p&gt;

&lt;p&gt;ii) &lt;strong&gt;EBS (Elastic Block Store)&lt;/strong&gt; - provides persistent block-level storage volumes that can be attached to EC2 instances. It offers durable and reliable storage that persists independently of the instance's lifespan.&lt;/p&gt;

&lt;p&gt;Let us dive more into EBS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Elastic Block Store (EBS) provides storage volumes for use with EC2 instances. These volumes act like raw, unformatted block devices that can be mounted on your instances. They persist independently from the life of the instance. You can create a file system or use them like a hard drive. EBS volumes offer fast access to data and long-term persistence, making them great for file systems, databases, and applications that need frequent updates and direct access to block-level storage. They work well for both random reads and writes and continuous read and write operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Features of EBS
&lt;/h2&gt;

&lt;p&gt;a). EBS provides &lt;em&gt;General Purpose SSD&lt;/em&gt;, &lt;em&gt;Provisioned IOPS SSD&lt;/em&gt;, &lt;em&gt;Throughput Optimized HDD&lt;/em&gt;, and &lt;em&gt;Cold HDD&lt;/em&gt; as volume types.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;General Purpose SSD volumes (gp2 and gp3)&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ideal for transactional workloads.&lt;/li&gt;
&lt;li&gt;Balance price and performance.&lt;/li&gt;
&lt;li&gt;Suitable for boot volumes, medium-size single instance databases, and development/test environments.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;em&gt;Provisioned IOPS SSD volumes (io1 and io2)&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Designed for I/O-intensive workloads.&lt;/li&gt;
&lt;li&gt;Offer consistent and predictable IOPS rate.&lt;/li&gt;
&lt;li&gt;Scale to tens of thousands of IOPS per instance.&lt;/li&gt;
&lt;li&gt;io2 volumes provide the highest volume durability.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;em&gt;Throughput Optimized HDD volumes (st1)&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low-cost magnetic storage.&lt;/li&gt;
&lt;li&gt;Performance measured in terms of throughput.&lt;/li&gt;
&lt;li&gt;Suitable for large, sequential workloads like Amazon EMR, ETL, data warehouses, and log processing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;em&gt;Cold HDD volumes (sc1)&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low-cost magnetic storage.&lt;/li&gt;
&lt;li&gt;Performance measured in terms of throughput.&lt;/li&gt;
&lt;li&gt;Ideal for large, sequential, infrequently accessed data.&lt;/li&gt;
&lt;li&gt;Cost-effective solution for storing cold data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;b). EBS Volume Encryption - this allows one to encrypt EBS volumes to meet data-at-rest encryption requirements for regulated or audited data. It also ensures that the data stored in the volume, disk I/O, and snapshots are encrypted.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encryption happens on the servers hosting the EC2 instance, which secures data both at rest and in transit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;c). EBS Snapshots allow you to create point-in-time copies of your EBS volumes, which are stored in S3. Snapshots provide long-term durability and serve as a starting point for new EBS volumes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The same snapshot can be used to create multiple volumes as needed and the snapshots can be copied across different AWS Regions. &lt;/li&gt;
&lt;/ul&gt;
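
&lt;p&gt;As a sketch, both operations are single AWS CLI calls; the volume ID, snapshot ID, and Regions below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create a point-in-time snapshot of a volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "backup of U-test"

# copy the snapshot to another Region (--region is the destination)
aws ec2 copy-snapshot --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 --region eu-west-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;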

&lt;p&gt;d). Performance metrics such as &lt;em&gt;bandwidth&lt;/em&gt;, &lt;em&gt;latency&lt;/em&gt;, and &lt;em&gt;average queue length&lt;/em&gt; are provided by Amazon CloudWatch, allowing you to monitor the performance of your EBS volumes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitoring helps ensure that you are allocating sufficient performance resources for your applications without paying for unnecessary resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;e). When you create an EBS volume, it is linked to a specific Availability Zone (AZ). This means that the volume is located in a particular data center within a specific geographic region. However, if you want to use the volume in a different AZ within the same AWS Region, you can create a snapshot.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A snapshot is a copy of your EBS volume's data and settings. It captures all the information stored on the volume at a specific point in time. By creating a snapshot, you essentially create a backup of your volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once you have a snapshot, you can use it to restore the volume to a new EBS volume in any AZ within the same AWS Region. This means you can move your volume's data from one AZ to another without losing any information. The restored volume will have the same data and settings as the original volume at the time the snapshot was taken.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating EBS Volume
&lt;/h3&gt;

&lt;p&gt;AWS allows us to create a volume from either of the following three methods:&lt;/p&gt;

&lt;p&gt;i) Create and attach EBS volumes while creating an EC2 instance using the Launch Instance wizard.&lt;br&gt;
ii) Create an empty EBS volume, and later you can attach it to a running instance.&lt;br&gt;
iii) Create an EBS volume from a previously created snapshot, and later you can attach it to a running instance.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;We are going to use option 2 to create EBS&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEPS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; On the &lt;strong&gt;EC2 Dashboard&lt;/strong&gt; select the &lt;strong&gt;Elastic Block Store → Volumes&lt;/strong&gt; service on the left navigation pane.&lt;/li&gt;
&lt;li&gt; Select the &lt;strong&gt;Create Volume&lt;/strong&gt; button as on the screenshot below:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D6wzrwqf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dkq31mumcnd4i4v10wp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D6wzrwqf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dkq31mumcnd4i4v10wp4.png" alt="dashboard" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specify the volume details on the set-up wizard page. &lt;/li&gt;
&lt;li&gt;You will have to specify the following details:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;i) Volume type - AWS offers various types of volumes, as described in the table below.&lt;br&gt;
   ii) Size (GB) - Specify a size within the limits of the type you have chosen above.&lt;br&gt;
   iii) Availability Zone - It has a default value, or you can choose your preferred AZ.&lt;br&gt;
   iv) Snapshot ID - Specify the ID of the snapshot if you wish to create a volume from an existing snapshot. Remember, a snapshot is the saved state of another volume at a particular moment.&lt;br&gt;
   v) Tag - Specify a key-value pair, such as {Name: U-test}.&lt;/p&gt;
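&lt;p&gt;The form fields above map directly onto the EC2 &lt;code&gt;CreateVolume&lt;/code&gt; API. A minimal sketch, assuming nothing about your account (the helper function and all values are illustrative placeholders, and no request is made):&lt;/p&gt;

```python
# Hedged sketch: the Create Volume form fields (type, size, AZ, optional
# snapshot, tags) expressed as EC2 CreateVolume parameters. The values are
# placeholders; nothing is sent to AWS.

def create_volume_params(volume_type, size_gb, az, snapshot_id=None, tags=None):
    params = {
        "VolumeType": volume_type,      # i)   e.g. gp3, io2, st1, sc1
        "Size": size_gb,                # ii)  must be within the type's limits
        "AvailabilityZone": az,         # iii) where the volume will live
    }
    if snapshot_id:                     # iv)  only when restoring a snapshot
        params["SnapshotId"] = snapshot_id
    if tags:                            # v)   key-value pairs, e.g. {"Name": "U-test"}
        params["TagSpecifications"] = [{
            "ResourceType": "volume",
            "Tags": [{"Key": k, "Value": v} for k, v in tags.items()],
        }]
    return params

params = create_volume_params("gp3", 10, "us-east-1a", tags={"Name": "U-test"})
```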

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qp6wDQCg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nn38jw3dqzg3py9ig0ak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qp6wDQCg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nn38jw3dqzg3py9ig0ak.png" alt="details" width="800" height="788"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on Create Volume and your EBS volume will be created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EBS Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the EC2 Dashboard select the &lt;strong&gt;Elastic Block Store → Volumes&lt;/strong&gt; service in the left navigation pane.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h6XzkY80--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pnqld0x17i7gy744u1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h6XzkY80--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pnqld0x17i7gy744u1j.png" alt="ebsdashboard" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List of all Volumes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the image above it is labeled 1. Here you can view all the volumes available under your account in a specific region. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each entry shows the volume ID, size, type, I/O per second, the snapshot ID of that volume, date of creation, Availability Zone, current status, whether the volume is encrypted, and the EC2 instance to which it is attached.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can do several operations after creating a volume which includes:

&lt;ul&gt;
&lt;li&gt;Attach a volume to one or more EC2 instance(s)&lt;/li&gt;
&lt;li&gt;Detach a volume from an instance&lt;/li&gt;
&lt;li&gt;Replace a volume&lt;/li&gt;
&lt;li&gt;View the volume details, and monitor the current status&lt;/li&gt;
&lt;li&gt;Delete a volume&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Details of Selected Volume&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the second part of the EBS Dashboard. Select the checkbox next to the name of any volume.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Details&lt;/em&gt; - you can view the specific information, such as volume ID, snapshot ID, size, date of creation, instance to which it is attached, type of volume, and much more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Status Check&lt;/em&gt; - Here you can view the health status of the selected volume. There are four possible statuses: Okay, Warning, Impaired, or Insufficient-data. See more details about the statuses &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-status.html"&gt;here&lt;/a&gt;. You can view the I/O status, pre-defined I/O performance, dates, and further textual descriptions (where available).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CdrADqsT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l640zu51d6me14jgh0zr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CdrADqsT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l640zu51d6me14jgh0zr.png" alt="status" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitoring - Here you can view the I/O performance metrics for the selected volume, such as:

&lt;ul&gt;
&lt;li&gt;read/write bandwidth (kB/sec),&lt;/li&gt;
&lt;li&gt;read/write throughput (Operations/sec),&lt;/li&gt;
&lt;li&gt;average queue length (Operations),&lt;/li&gt;
&lt;li&gt;idle time (%),&lt;/li&gt;
&lt;li&gt;avg read/write size (kB/Operation)&lt;/li&gt;
&lt;li&gt;avg read/write latency (msec/Operation)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JJR8cnGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/banxcdoz4gga56vrf4ym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JJR8cnGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/banxcdoz4gga56vrf4ym.png" alt="monitoring" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Tags&lt;/em&gt; - Here, you can have a look at the associated tag. In the snapshot above, it shows the Name tag with &lt;em&gt;U-test&lt;/em&gt; value.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R7vsrlSD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/687qb87z2cjsvgj47tnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R7vsrlSD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/687qb87z2cjsvgj47tnp.png" alt="tags" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Elastic Block Store (EBS) provides versatile storage for Amazon EC2 instances. With features like volume encryption, snapshots, and performance monitoring, EBS offers secure data-at-rest, reliable backups, and efficient resource allocation. It allows flexibility by associating volumes with Availability Zones (AZs) and restoring snapshots to different AZs within the same AWS Region for workload distribution, migration, and disaster recovery. EBS is essential for scalable and reliable architectures on AWS.&lt;/li&gt;
&lt;li&gt;You can read more on EBS &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Site Reliability Engineering (SRE) and DevOps: A Comparative Study for Beginners</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Mon, 05 Jun 2023 09:55:09 +0000</pubDate>
      <link>https://dev.to/jeptoo/site-reliability-engineering-sre-and-devops-a-comparative-study-for-beginners-35pd</link>
      <guid>https://dev.to/jeptoo/site-reliability-engineering-sre-and-devops-a-comparative-study-for-beginners-35pd</guid>
      <description>&lt;p&gt;I am pretty sure you have heard of DevOps and SRE in your technological journey if you are a beginner it can be very confusing. Both SRE and DevOps share a goal of bridging development and operations. &lt;/p&gt;

&lt;p&gt;It is hard to say one is better than the other since they are both similar yet different in some ways. To simplify this, let's look at some key points.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SRE is viewed as a specific implementation of DevOps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They both share the same foundational principles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They both aim to deliver reliable software.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DevOps determines &lt;strong&gt;what&lt;/strong&gt; needs to be done, whereas SRE determines &lt;strong&gt;how&lt;/strong&gt; it will be done. DevOps captures a vision of a system that is developed efficiently and reliably. SRE builds the processes and values that result in this system. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can establish your goals using DevOps principles, and then implement SRE to achieve them.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  TABLE OF CONTENTS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Methods and Practices&lt;/li&gt;
&lt;li&gt;Team Structure and Roles&lt;/li&gt;
&lt;li&gt;Tools&lt;/li&gt;
&lt;li&gt;How SRE connects to DevOps&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is DevOps?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;DevOps reflects two parts, &lt;strong&gt;Dev&lt;/strong&gt;elopment and &lt;strong&gt;Op&lt;/strong&gt;erations, and originated from the need for faster software delivery and more streamlined collaboration. It promotes shared responsibilities, collaboration, and automation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The main goal of DevOps is to reduce the time between making a change in code and that change reaching customers without having an impact on reliability. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DevOps focuses mainly on collaboration, integration, and automation of system services to enable faster and more efficient software delivery. It helps streamline the software development lifecycle, encompassing development, testing, deployment, and operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is SRE?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As mentioned earlier, Site Reliability Engineering is an implementation of DevOps whose goal is to align engineering goals with customer satisfaction. SRE originated at Google, where it was developed to maintain the reliability and scalability of large-scale systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SRE introduced practices like error budgets and defined service level objectives (SLOs) to align the goals of the engineering and operations teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SRE focuses on the reliability, availability, and performance of systems and services, with emphasis on monitoring, engineering practices, and incident response to achieve high reliability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Methods and Practices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  DevOps methods and Practices
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Practices in DevOps are based on continuous, incremental improvements achieved by automation. The methodology focuses on the following elements:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Continuous Integration and Continuous Delivery(CI/CD)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;One goal that DevOps aims to achieve is to deliver updates and applications to customers rapidly and frequently; CI/CD pipelines connect the processes and practices that make this possible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DevOps automates updating and releasing code to production. CI/CD means continuous monitoring and deployment to ensure that code is consistent across deployment environments and software versions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as code&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In order for IT infrastructure to be managed using software engineering techniques and provisioned automatically, DevOps places a strong emphasis on its abstraction. This ensures the system can efficiently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor infrastructure configurations.&lt;/li&gt;
&lt;li&gt;Track changes.&lt;/li&gt;
&lt;li&gt;Roll back changes with unintended effects.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Automated Testing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After being written or changed, code is automatically and continually tested. This continuous process speeds up deployment by removing the delays brought on by pre-release testing.&lt;/li&gt;
&lt;/ul&gt;
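&lt;p&gt;The CI/CD and automated-testing practices above boil down to a gate: a change is deployed only if every check passes. A toy sketch (the check names and the "deployed"/"blocked" outcomes are invented stand-ins, not a real CI system):&lt;/p&gt;

```python
# Toy sketch of a CI gate: code changes are tested automatically and only
# deployed when every check passes. Check names and outcomes are stand-ins.

def run_checks(checks):
    """Run each named check; collect the names of failures."""
    return [name for name, passed in checks.items() if not passed]

def ci_pipeline(checks):
    """Deploy only when no check fails; otherwise report what blocked it."""
    failures = run_checks(checks)
    if failures:
        return "blocked: " + ", ".join(sorted(failures))
    return "deployed"

status = ci_pipeline({"unit tests": True, "lint": True, "integration": True})
```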

&lt;h3&gt;
  
  
  SRE methods and Practices
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The SRE routine includes analysis of logs, incident response, testing production environments, patch management, and more. Let's break it down:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Service Level Objectives (SLOs) and Service Level Indicators (SLIs)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Reliability is crucial for building customer trust and satisfaction, and SRE measures how satisfied a customer is by using SLIs. SLIs are measurements used to quantify the performance and reliability of a service; they help assess the user experience through metrics such as response time, error rate, and availability.&lt;br&gt;
With well-established SLIs, the team gains insight into the overall health of the system and uses them to define SLOs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SLOs are targets set for key performance indicators (KPIs) which measure the reliability and performance of a service. They are set based on user expectations and business requirements; by monitoring and measuring actual performance against SLOs, issues are easier to identify and continuous improvement is driven. In short, an SLO sets a limit on how much unreliability the customer will tolerate for that SLI.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
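&lt;p&gt;As a toy illustration of the SLI/SLO relationship, here is an availability SLI computed from request counts and checked against an SLO target. All the numbers are invented:&lt;/p&gt;

```python
# Toy illustration (invented numbers): an availability SLI computed from
# request counts, checked against a 99.9% SLO target.

def availability_sli(good_requests, total_requests):
    """SLI: the fraction of requests that succeeded."""
    return good_requests / total_requests

SLO_TARGET = 0.999  # "99.9% of requests succeed"

sli = availability_sli(good_requests=999_532, total_requests=1_000_000)
meets_slo = sli >= SLO_TARGET
```
&lt;p&gt;If &lt;code&gt;meets_slo&lt;/code&gt; stays true over the measurement window, the service is delivering the reliability the SLO promises; the gap between the SLI and the target is what the next section calls the error budget.&lt;/p&gt;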

&lt;p&gt;&lt;strong&gt;Error Budgeting&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This is basically the acceptable level of unreliability or downtime of a system. The SRE team uses it as a measure to determine when to prioritize stability over new feature development. We can say that the error budget is the room you have before your SLO is breached.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The error budget helps with decisions about prioritization: for example, services with lots of remaining error budget can accelerate development. When the error budget is depleted, the team knows it's time to focus on reliability. This allows operations to influence development in a way that reflects customer needs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
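&lt;p&gt;The arithmetic behind an error budget is simple: it is whatever the SLO leaves over. A small worked example (the 99.9% SLO and 30-day window are chosen for illustration):&lt;/p&gt;

```python
# Toy calculation: the error budget implied by an availability SLO, and
# the downtime it allows over a 30-day window.

def error_budget_minutes(slo, window_days=30):
    """Minutes of downtime the SLO leaves as budget over the window."""
    window_minutes = window_days * 24 * 60
    return (1 - slo) * window_minutes

# A 99.9% SLO leaves a 0.1% error budget: about 43.2 minutes per 30 days.
budget = error_budget_minutes(0.999)
```
&lt;p&gt;Spending less than those minutes on incidents means there is budget left to take risks (faster releases); spending more means it is time to prioritize reliability work.&lt;/p&gt;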

&lt;p&gt;&lt;strong&gt;Incident Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By responding to incidents faster, you reduce customer impact. To achieve this, several components need to be in place, including:

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Runbooks&lt;/em&gt;: These are documents that guide responders through a particular task. They list things to check and the steps to take for each possibility, kept straightforward to reduce toil. Automating them is also a plus.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;On-call systems and alerting&lt;/em&gt;: This determines who is available to respond to incidents as needed.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Incident classification&lt;/em&gt;: Sorts incidents into categories based on severity and area affected; this allows you to triage incidents and alert the right people.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Incident retrospectives&lt;/em&gt;: Each incident is a learning opportunity; review the documentation to determine follow-up tasks or revise runbooks.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
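&lt;p&gt;Incident classification and alert routing can be sketched as a small lookup. Everything here is invented for illustration: the severity thresholds, area names, and on-call rotation names are not from any real alerting tool.&lt;/p&gt;

```python
# Hedged sketch of incident classification and alert routing; severity
# thresholds, areas, and rotation names are invented for illustration.

ONCALL = {  # hypothetical on-call rotations per affected area
    "database": "db-oncall",
    "frontend": "web-oncall",
}

def classify(error_rate):
    """Map an SLI-style error rate to a severity label (thresholds invented)."""
    if error_rate >= 0.05:
        return "sev1"   # page immediately
    if error_rate >= 0.01:
        return "sev2"   # page during business hours
    return "sev3"       # ticket only

def route(area, error_rate):
    """Triage: pick a severity and the rotation to alert."""
    return {"severity": classify(error_rate),
            "alert": ONCALL.get(area, "sre-oncall")}  # fall back to a default rotation

incident = route("database", 0.07)
```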

&lt;h2&gt;
  
  
  Team Structure and Roles
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Team Structure&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SRE teams consist of software engineers with a focus on reliability engineering. They work closely with development and operations teams to balance reliability and feature development. SREs often have expertise in coding, systems, and operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DevOps encourages cross-functional teams that include developers, operations engineers, and sometimes QA engineers. This fosters collaboration and shared responsibilities, blurring the lines between traditional roles.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Roles&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;DevOps Engineer&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connecting microservices and tools to smooth the development cycle.&lt;/li&gt;
&lt;li&gt;Sharing operational needs with development.&lt;/li&gt;
&lt;li&gt;Introducing new tools and processes.&lt;/li&gt;
&lt;li&gt;Assessing risk to deployment targets.&lt;/li&gt;
&lt;li&gt;Aligning teams on development goals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Site Reliability Engineer&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developing, configuring, and deploying software to be used by operations teams&lt;/li&gt;
&lt;li&gt;Handling support escalation issues&lt;/li&gt;
&lt;li&gt;Conducting and reporting on incident reviews&lt;/li&gt;
&lt;li&gt;Developing system documentation&lt;/li&gt;
&lt;li&gt;Change management&lt;/li&gt;
&lt;li&gt;Determining and validating new features and updates&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tools
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;SRE Tools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the SRE role, the most widely used tools are Prometheus and Grafana for collecting and visualizing metrics (CPU usage, memory, disk space, etc.), incident alerting tools (OP5, PagerDuty, xMatters, etc.), Ansible, Puppet, or Chef for configuration management, Kubernetes and Docker for containers and orchestration, cloud platforms (AWS, GCP, Azure), and JIRA, SVN, and GitHub.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DevOps Tools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the DevOps role, the most widely used tools are Integrated Development Environments (IDEs) for development, Jenkins for Continuous Integration and Delivery, JIRA for change management, Splunk for log monitoring, and SVN and GitHub for version control.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How SRE connects to DevOps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An organization can implement both DevOps and SRE; this is achieved by considering SRE as a way of realizing DevOps goals. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  SRE as an implementation of DevOps
&lt;/h3&gt;

&lt;p&gt;Here are some of the practical approaches that SRE uses to solve DevOps goals:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove Silos&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;DevOps works to ensure that different departments/software teams are not isolated from each other, ensuring they all work towards a common goal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SRE achieves this by creating documentation that the entire organization can use and learn from. Lessons from incidents are fed back into development practices through incident retrospectives. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Implementing Change gradually&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;DevOps embraces slow, gradual change to enable constant improvements. SRE supports this by allowing teams to perform small, frequent updates that reduce the impact of changes on application availability and stability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SRE teams use CI/CD tools to perform change management and continuous testing to ensure the successful deployment of code alterations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Accepting failure as normal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;While DevOps aims to handle runtime errors and allow teams to learn from them, SRE enforces error management through Service Level Commitments (SLx) to ensure all failures are handled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SRE strategically uses error budgets to accelerate development while maintaining reliability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Leveraging tools &amp;amp; automation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Both DevOps and SRE use automation to improve workflows and service delivery. SRE enables teams to use the same tools and services through flexible application programming interfaces (APIs). While DevOps promotes the adoption of automation tools, SRE ensures every team member can access the updated automation tools and technologies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whenever you automate or simplify a process, you reduce toil and increase consistency. You also accelerate the process, achieving DevOps goals.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Metric-based decisions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SRE practices encourage monitoring everything and then constructing deep metrics. These will give you the insights you need to make smart decisions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DevOps gathers metrics through a feedback loop. On the other hand, SRE enforces measurement by providing SLIs, SLOs, and SLAs to perform measurements. Since Ops are software-defined, SRE monitors toil and reliability, ensuring consistent service delivery.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SRE and DevOps are two sides of the same coin, with SRE tooling and techniques complementing DevOps philosophies and practices. SRE involves the application of software engineering principles to automate and enhance ITOps functions while DevOps model enables the rapid delivery of software products through collaboration between development and operations teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The goal of both methodologies is to enhance the end-to-end cycle of an IT ecosystem: the application lifecycle through DevOps and operations lifecycle management through SRE.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>sitereliabilityengineering</category>
      <category>beginners</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Git as a DevOps Tool</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Fri, 26 May 2023 22:10:48 +0000</pubDate>
      <link>https://dev.to/jeptoo/git-under-the-hoodgit-as-a-devops-tool-8lh</link>
      <guid>https://dev.to/jeptoo/git-under-the-hoodgit-as-a-devops-tool-8lh</guid>
      <description>&lt;p&gt;I know when we say Git what comes to the mind  of most beginners is Github. Well just to make it clear, Github or bitbucket are built on top of git with some additional functionalities that help with hosting and version control of code in the remote Git repository.&lt;/p&gt;

&lt;p&gt;Git provides developers with a shared workspace, so visualizing each other's work is easy. It also integrates well with CI/CD tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Git is needed in DevOps
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Effective CI/CD Discussions with Developers&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As a DevOps engineer, you design and develop CI/CD pipelines, and Git plays a vital role in having productive discussions about Continuous Integration and Continuous Deployment (CI/CD) with developers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Git is essential in this context:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It enables seamless &lt;strong&gt;collaboration&lt;/strong&gt; among developers, facilitating discussions on merging changes, resolving conflicts, and integrating code during the CI/CD pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Git's branching feature allows developers to work on separate branches for different features or bug fixes &lt;em&gt;(git branching)&lt;/em&gt;. This supports discussions on branch readiness, &lt;strong&gt;code review&lt;/strong&gt;, and ensuring quality code integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Git stores &lt;strong&gt;deployment configurations&lt;/strong&gt;, such as infrastructure as code files and deployment scripts. Discussions can involve reviewing and modifying these configurations to ensure smooth deployments in the CI/CD pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Git helps &lt;strong&gt;manage release&lt;/strong&gt; versions and tags, enabling discussions on release planning, versioning strategies, and feature inclusion. This ensures a coordinated approach to releases within the CI/CD workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Development and maintenance of infrastructure code is done in Git, and it is important to note that infrastructure code is treated the same as application code: it goes through unit and integration tests before deployment to environments. This means infrastructure code needs a CI/CD pipeline, which translates to Git-based workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With Git, you can manage changes to your infrastructure code in a controlled and auditable manner. Each change is tracked, allowing you to understand who made the change, when it was made, and why. This helps maintain a clear history of modifications and simplifies change management processes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Git facilitates collaboration among team members working on infrastructure code. Multiple developers can work on the same codebase simultaneously, enabling seamless collaboration, code review, and the ability to merge changes efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Gitops&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitOps uses Git as the single source of truth to control the deployment of infrastructure and applications. Git plays a crucial part in GitOps by offering features for version control and collaboration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;GitOps uses a pull-based methodology, in which the system's desired state is specified in Git and is constantly compared to the actual state. With Git repositories as the single source of truth, the GitOps tool extracts the desired state from Git and applies it to the target. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitOps uses the branching capabilities of Git to handle several environments. In the Git repository, each environment (such as development, staging, and production) is allowed to have a separate branch. As a result, distinct workflows, change isolation, and controlled promotion of settings and applications between environments are made possible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Git's pull request and code review functions are advantageous to GitOps. A review procedure can be applied to changes to deployment manifests and infrastructure code to guarantee best practices compliance and code quality. Reviewers can comment on modifications, debate them, and make sure that only those that have been authorized are merged into the main branch and released.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
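&lt;p&gt;The pull-based reconcile loop described above can be sketched in a few lines. Plain dictionaries stand in for Git manifests and cluster state; a real GitOps tool (Argo CD, Flux) performs the same diff-and-apply against the Kubernetes API.&lt;/p&gt;

```python
# Minimal sketch of GitOps' pull-based reconcile loop: desired state lives
# in Git, the tool diffs it against the actual state and applies the
# difference. Dicts stand in for manifests; no real cluster is involved.

def diff_states(desired, actual):
    """Return what must change to make `actual` match `desired`."""
    to_apply = {k: v for k, v in desired.items() if actual.get(k) != v}
    to_delete = [k for k in actual if k not in desired]
    return to_apply, to_delete

def reconcile(desired, actual):
    """One loop iteration: prune, then apply, so state converges to Git."""
    to_apply, to_delete = diff_states(desired, actual)
    actual = {k: v for k, v in actual.items() if k not in to_delete}
    actual.update(to_apply)     # stand-in for "kubectl apply"
    return actual

desired = {"web": {"image": "web:2.0", "replicas": 3}}          # in Git
actual = {"web": {"image": "web:1.9", "replicas": 3}, "old-job": {}}
converged = reconcile(desired, actual)
```
&lt;p&gt;Because the loop always pulls from Git and converges toward it, a change merged to the main branch is enough to roll the environment forward (or back).&lt;/p&gt;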

&lt;h2&gt;
  
  
  GIT ROADMAP
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.atlassian.com/git/tutorials/what-is-version-control"&gt;Understand Version Control System &lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://about.gitlab.com/topics/version-control/benefits-distributed-version-control-system/"&gt;Understand Distributed Version Control&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://git-scm.com/downloads"&gt;Git Installation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="http://git-scm.com/downloads/guis"&gt;GUI Clients&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://git-scm.com/videos"&gt;Git basics&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://gitimmersion.com/lab_01.html"&gt;Fundamental of Git&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.atlassian.com/git/tutorials/advanced-overview"&gt;Advanced Git&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Learning Resources
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.educative.io/courses/guide-to-git-and-version-control?aff=KNLz"&gt;Git and Version Control Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devopscube.com/recommends/git-basics-2/"&gt;Udacity Version Control with Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devopscube.com/recommends/git-basics/"&gt;Udemy step by step guide to Git&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Smooth Sailing with Docker: A Beginner's Guide to Containerization</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Sun, 14 May 2023 09:31:52 +0000</pubDate>
      <link>https://dev.to/jeptoo/everything-docker-4eni</link>
      <guid>https://dev.to/jeptoo/everything-docker-4eni</guid>
      <description>&lt;p&gt;Welcome to Docker world where containers bring magic to the world of software development and deployment!&lt;/p&gt;

&lt;p&gt;Why did the Docker container never ask for help?&lt;br&gt;
Because it couldn't find a container support group – it was too self-contained!&lt;/p&gt;

&lt;p&gt;Now that we've shared a giggle or a laugh, let's embark on our Docker exploration. In this article you'll learn Docker fundamentals and its installation...let's dive in!&lt;/p&gt;
&lt;h2&gt;
  
  
  TABLE OF CONTENTS
&lt;/h2&gt;

&lt;p&gt;Introduction&lt;br&gt;
 Docker Features&lt;br&gt;
 Docker Architecture&lt;br&gt;
 Installing Docker Desktop&lt;br&gt;
 Conclusion&lt;/p&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker is a container management platform used to automate the development, packaging, and deployment of applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container&lt;/strong&gt; - an instance of an image that allows developers to package the application with all the parts it needs, such as libraries and other dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image&lt;/strong&gt; - a file with multiple layers used to execute code in a Docker container. Images are a set of instructions used to create Docker containers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  How Docker works
&lt;/h3&gt;

&lt;p&gt;Docker uses containerization, where applications are packaged into containers that have everything they need to run (code, libraries, dependencies). Docker uses OS-level virtualization to create the containers, ensuring that each container operates as an isolated and self-contained unit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Engine&lt;/strong&gt; - a client-server application that hosts containers.&lt;br&gt;
It has three main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Server (Docker daemon): Creates and manages Docker objects such as images, containers, networks, and volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;REST API: Defines the interface that programs use to talk to the daemon, issuing commands to build, run, and manage containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Client: The Docker command-line interface (CLI), which allows interaction with Docker using docker commands.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
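&lt;p&gt;The split between client, REST API, and daemon can be illustrated by how CLI commands map onto Engine API routes. The endpoint paths below are documented Engine API routes, but the API version is an assumption and no request is actually sent; this is only a sketch of the mapping.&lt;/p&gt;

```python
# Illustrative sketch of how docker CLI commands map onto Docker Engine's
# REST API endpoints. The paths are documented Engine API routes; the API
# version is an assumption, and no request is actually sent here.

ENGINE_API = "v1.43"   # assumption: a recent Engine API version

ENDPOINTS = {
    "docker ps":     ("GET",  "containers/json"),  # list running containers
    "docker images": ("GET",  "images/json"),      # list local images
    "docker pull":   ("POST", "images/create"),    # pull an image
}

def rest_call(cli_command):
    """Translate a CLI command into the (method, URL) sent to the daemon."""
    method, path = ENDPOINTS[cli_command]
    return method, "http://localhost/" + ENGINE_API + "/" + path

method, url = rest_call("docker ps")
```
&lt;p&gt;In practice the client sends these requests over a local UNIX socket or a TCP connection, which is also what lets one client talk to several daemons.&lt;/p&gt;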
&lt;h2&gt;
  
  
  Docker Features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;br&gt;
Docker containers are lightweight and hence easily scalable. Their portability makes it simple to manage workloads, scaling apps and services up or down as demand changes in real time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Swarm&lt;/strong&gt;&lt;br&gt;
It is a clustering and scheduling tool for Docker containers. Swarm uses the Docker API as its front end, so various tools can control it. It also lets us manage a cluster of Docker hosts as a single virtual host. It is a self-organizing group of engines that enables pluggable backends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;br&gt;
Docker stores secrets in the swarm itself. Containers provide a high level of isolation between applications, preventing them from interacting with or affecting each other, which makes for a more secure and stable platform for running multiple apps on a single host.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Routing Mesh&lt;/strong&gt;&lt;br&gt;
The routing mesh routes incoming requests for published ports to an active container on any available node, even if no task is running on the node that receives the request.&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker Architecture
&lt;/h2&gt;

&lt;p&gt;In a nutshell, the client talks to the Docker daemon, which does the work of building, running, and distributing Docker containers. The client can run on the same system as the daemon or connect to it remotely; the two communicate through a REST API, over local sockets or a network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yPDWPMPk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywtl0f2lu6qjobquovjs.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yPDWPMPk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywtl0f2lu6qjobquovjs.jpeg" alt="docker" width="750" height="937"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;DOCKER CLIENT&lt;/strong&gt;&lt;br&gt;
The Docker client uses commands and the REST API to communicate with the server (the Docker daemon).&lt;br&gt;
When you run a command in a client terminal, the terminal sends it to the daemon, which receives it as a CLI command and REST API request.&lt;br&gt;
A Docker client can communicate with more than one daemon, and it uses the CLI to run commands such as &lt;code&gt;docker build&lt;/code&gt;, &lt;code&gt;docker pull&lt;/code&gt;, and &lt;code&gt;docker run&lt;/code&gt;.&lt;/p&gt;
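&lt;p&gt;As a sketch, a typical client workflow looks like this (the &lt;code&gt;hello-world&lt;/code&gt; image is only an illustration; the guard keeps the script harmless on machines without a reachable Docker daemon):&lt;/p&gt;

```shell
# Sketch of a typical Docker client workflow. Each command below is sent
# by the CLI to the Docker daemon through the Engine's REST API.
# The "hello-world" image is an illustrative example.
workflow() {
  docker pull hello-world   # ask the daemon to fetch an image from a registry
  docker run hello-world    # daemon creates and starts a container from it
  docker ps -a              # list all containers the daemon manages
}

# Run only when a Docker daemon is actually reachable.
if docker info >/dev/null 2>&1; then
  workflow
else
  echo "Docker daemon not reachable; commands shown for reference"
fi
```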

&lt;p&gt;&lt;strong&gt;DOCKER HOST&lt;/strong&gt;&lt;br&gt;
The Docker host provides the environment in which applications execute and run. It contains the Docker daemon, containers, images, networks, and storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DOCKER REGISTRY&lt;/strong&gt;&lt;br&gt;
Stores and manages Docker images.&lt;br&gt;
It comes in two types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Public Registry&lt;/em&gt; - such as Docker Hub, which anyone can use.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Private Registry&lt;/em&gt; - used to share images within an enterprise.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DOCKER OBJECTS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XUxlyRFH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9y6jpd2cllpm0a56fgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XUxlyRFH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9y6jpd2cllpm0a56fgi.png" alt="OBJ" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Image&lt;/strong&gt; - a read-only binary template used to create Docker containers. Images also enable collaboration between developers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Containers&lt;/strong&gt; - a container holds the entire package needed to run the application. Containers need few resources, which is a plus. The image is the template; a container is a running copy of that template.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Networking&lt;/strong&gt; - provides isolation for Docker containers and can link a container to multiple networks.&lt;br&gt;
&lt;strong&gt;Types of Docker Network&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Bridge&lt;/em&gt; - the default network driver; used when multiple containers communicate on the same Docker host.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Host&lt;/em&gt; - used when we don't need network isolation between the container and the host.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;None&lt;/em&gt; - disables all networking.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Overlay&lt;/em&gt; - enables containers running on different Docker hosts to communicate, and allows Swarm services to talk to each other.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Macvlan&lt;/em&gt; - used when we want to assign MAC (Media Access Control) addresses to containers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
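&lt;p&gt;As a hedged sketch, creating a user-defined bridge network and attaching a container to it might look like this (the names &lt;code&gt;appnet&lt;/code&gt; and &lt;code&gt;web&lt;/code&gt; are illustrative):&lt;/p&gt;

```shell
# Sketch: a user-defined bridge network with one container attached.
# "appnet" and "web" are illustrative names; assumes Docker is installed.
network_demo() {
  docker network create --driver bridge appnet   # user-defined bridge
  docker run -d --name web --network appnet nginx
  docker network inspect appnet                  # shows attached containers
  docker rm -f web                               # clean up the container
  docker network rm appnet                       # and the network
}

if docker info >/dev/null 2>&1; then
  network_demo
else
  echo "Docker daemon not reachable; commands shown for reference"
fi
```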

&lt;p&gt;&lt;strong&gt;Docker Storage&lt;/strong&gt;&lt;br&gt;
Storage is used to persist data for containers. Docker offers the following options:&lt;br&gt;
&lt;em&gt;Data Volume&lt;/em&gt; - provides persistent storage; volumes can be named and listed, along with the containers associated with them.&lt;br&gt;
&lt;em&gt;Directory Mounts&lt;/em&gt; - mounts a host directory into a container; often the simplest option for local development.&lt;br&gt;
&lt;em&gt;Storage Plugins&lt;/em&gt; - provide the ability to connect to external storage platforms.&lt;/p&gt;
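&lt;p&gt;The first two options can be sketched side by side (the volume name &lt;code&gt;appdata&lt;/code&gt; and host path &lt;code&gt;/tmp/hostdir&lt;/code&gt; are illustrative assumptions; the guard keeps the script harmless where Docker is absent):&lt;/p&gt;

```shell
# Sketch: the two most common storage options. The volume name "appdata"
# and host path /tmp/hostdir are illustrative; assumes Docker is installed.
storage_demo() {
  docker volume create appdata                          # named persistent volume
  docker run --rm -v appdata:/data alpine ls /data      # data volume mount
  mkdir -p /tmp/hostdir
  docker run --rm -v /tmp/hostdir:/data alpine ls /data # directory (bind) mount
  docker volume rm appdata                              # clean up
}

if docker info >/dev/null 2>&1; then
  storage_demo
else
  echo "Docker daemon not reachable; commands shown for reference"
fi
```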
&lt;h2&gt;
  
  
  Installing Docker Desktop
&lt;/h2&gt;

&lt;p&gt;Docker can be installed on any major operating system, but it runs natively on Linux distributions. Here we will install Docker Desktop on Ubuntu Linux.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
To install Docker Desktop, ensure that you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Have a 64-bit version of either Ubuntu Jammy Jellyfish 22.04 (LTS) or Ubuntu Impish Indri 21.10. Docker Desktop is supported on x86_64 (or amd64) architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Meet &lt;a href="https://docs.docker.com/desktop/install/linux-install/#system-requirements"&gt;the system requirements&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up &lt;a href="https://dev.tourl"&gt;Docker's package repository&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Download the latest &lt;a href="https://desktop.docker.com/linux/main/amd64/docker-desktop-4.19.0-amd64.deb?utm_source=docker&amp;amp;utm_medium=webreferral&amp;amp;utm_campaign=docs-driven-download-linux-amd64"&gt;DEB package&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Install the package with apt:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`sudo apt-get update`
`sudo apt-get install ./docker-desktop-&amp;lt;version&amp;gt;-&amp;lt;arch&amp;gt;.deb`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Launch Docker Desktop using the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`systemctl --user start docker-desktop`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or search &lt;em&gt;Docker Desktop&lt;/em&gt; on &lt;em&gt;Applications&lt;/em&gt; menu and open it.&lt;br&gt;
Check docker binary versions by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`docker compose version` 
`docker --version`
`docker version`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enable Docker Desktop to start automatically when you sign in&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`systemctl --user enable docker-desktop`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To stop Docker Desktop&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl --user stop docker-desktop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If you are using Windows, you can install it from &lt;a href="https://docs.docker.com/desktop/install/windows-install/#:~:text=Double%2Dclick%20Docker%20Desktop%20Installer,bottom%20of%20your%20web%20browser."&gt;here&lt;/a&gt;; if you are using Mac, you can find the Docker installation guide &lt;a href="https://docs.docker.com/desktop/install/mac-install/"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Remember, Docker is a powerful tool with numerous advanced features and use cases. Continue learning by referring to the official Docker documentation and community resources. Embrace the world of containerization and elevate your software development and deployment workflows with Docker. Happy containerizing!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>beginners</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>My First Month in the SCA Cloud School: A journey of Learning and growth.</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Mon, 01 May 2023 21:23:48 +0000</pubDate>
      <link>https://dev.to/jeptoo/my-first-month-in-the-sca-cloud-school-a-journey-of-learning-and-growth-1a5g</link>
      <guid>https://dev.to/jeptoo/my-first-month-in-the-sca-cloud-school-a-journey-of-learning-and-growth-1a5g</guid>
      <description>&lt;p&gt;I've always been fascinated by the field of SRE and the crucial part it plays in maintaining the availability and smooth operation of digital services. Despite the fact that I had no prior expertise in SRE, I had worked in software engineering for a while and had witnessed firsthand the value of having a solid infrastructure.&lt;/p&gt;

&lt;p&gt;I came across the &lt;a href="https://shecodeafrica.org/"&gt;She Codes Africa&lt;/a&gt; SRE boot-camp sponsored by &lt;a href="https://deimos.io/"&gt;Deimos&lt;/a&gt; which was an ideal chance to expand my knowledge of SRE tenets and best practices when I learned about it. The program's focus on practical knowledge and abilities, as well as its reputation for producing excellent SREs, attracted my attention in particular.&lt;/p&gt;

&lt;p&gt;The program began on the 31st of March and runs for two months. There are evaluations and projects to be covered during this period. I will break down what I learned during the first four weeks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WEEK ONE&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We were introduced to cloud computing and AWS fundamentals, including the different cloud deployment models and AWS services such as EC2, S3, and IAM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We learned how to set up an AWS account, create an EC2 instance and configure it with appropriate security settings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We further explored on security best practices for AWS resources such as access control policy and monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I had an opportunity to practice my skills by creating an EC2 instance and writing an article about it; you can check it out &lt;a href="https://dev.to/jeptoo/how-to-create-ec2-instance-ubuntu-2204-on-aws-and-connect-via-ssh-using-pem-492o"&gt;here&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of week one I had a solid understanding of cloud computing and AWS fundamentals, and I had gained practical experience creating and securing an EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WEEK TWO&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We learned about Azure and its concepts including the architectural components of Azure and how to create an Azure account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We explored further into Azure core services, including Azure compute services, storage services, Azure analytics, and Azure databases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We delved into core solutions and management on Azure, where we learnt about IoT services, AI services, serverless technology and management and configuration on Azure environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We finally learned about monitoring services for Azure and how to use them to keep your Azure resources secure and performant.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of the second week I had a full understanding of Azure's services and how to manage them effectively. I then wrote an article on &lt;a href="https://dev.to/jeptoo/building-your-first-static-web-app-on-azure-a-step-by-step-guide-35d3"&gt;how to create an Azure App Service to host the web application&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WEEK THREE&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Learned about the general security and network security features on Azure, including how to protect against threats and secure network connectivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We explored Identity, governance, privacy, and compliance features on Azure, which are essential for maintaining the security and compliance of your Azure resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We also learned about Azure cost management and service level agreements (SLAs), which are critical for managing your Azure resources effectively and ensuring that you are getting the best value for your money.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, we deployed a virtual machine on the Azure portal, which gave me hands-on experience with creating and managing Azure resources. I also wrote an article on &lt;a href="https://dev.to/jeptoo/how-to-create-a-linux-vm-and-install-mysql-server-using-cloudshell-4lhb"&gt;How to Deploy a virtual machine on Azure portal&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the end of week three I gained clarity on how to secure Azure resources and manage their cost effectively. I practiced creating and managing a virtual machine on Azure portal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WEEK FOUR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We were introduced to SQL and learned about its fundamental concepts and how it is used in the context of Azure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learned about deployment and configuration of servers, instances, and databases for Azure SQL, which involved creating and configuring SQL instances and databases on the Azure platform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We practiced connecting user-created instances to SQL Server Management Studio (SSMS), which is a crucial tool for managing and querying SQL databases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We learned how to backup and restore databases using SSMS, which is essential for ensuring data integrity and recoverability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We explored how to secure data using Azure SQL, including data encryption and access control measures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, we learned how to deliver consistent performance with Azure SQL, which is critical for ensuring that your SQL databases are fast and reliable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of week four I had a solid understanding of Azure SQL and how it is used in the context of Azure. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CONCLUSION&lt;/strong&gt;&lt;br&gt;
The first four weeks of the She Codes Africa SRE boot-camp have been a fantastic experience for me. I have learned a lot about cloud computing, Azure, general security, network security, SQL, and database management, and I have gained practical experience with creating and managing cloud resources on Azure. I am now looking forward to the next phase of the boot-camp, where I will deepen my knowledge of cloud technologies and gain more practical experience with using them.&lt;/p&gt;

&lt;p&gt;One of the things that have impressed me the most about this boot-camp is the amazing community of learners and facilitators that I have had the privilege of interacting with. The community is incredibly supportive and always ready to help each other out, which has made the learning process both enjoyable and productive. The facilitators are also knowledgeable and experienced, and they have provided excellent guidance and feedback throughout the boot-camp.&lt;/p&gt;

&lt;p&gt;As a woman in tech, I would like to recommend She Codes Africa to other women who are interested in cloud computing and want to gain practical experience with using cloud technologies. SCA is a fantastic organization that is committed to increasing the number of women in tech and providing them with the support and resources they need to succeed. I am grateful for the opportunity to be part of this boot-camp, and I am confident that the skills and knowledge that I am gaining will be invaluable in my future career as an SRE.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to create a Linux VM and Install MySQL Server using Cloudshell</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Thu, 27 Apr 2023 16:27:51 +0000</pubDate>
      <link>https://dev.to/jeptoo/how-to-create-a-linux-vm-and-install-mysql-server-using-cloudshell-4lhb</link>
      <guid>https://dev.to/jeptoo/how-to-create-a-linux-vm-and-install-mysql-server-using-cloudshell-4lhb</guid>
      <description>&lt;p&gt;Data management is critical to every organization, and MySQL database has proven to be a reliable tool for storing and retrieving information. Pairing MySQL with Azure VMs provides a robust, flexible environment for running your workloads. In this article, we'll explore the process of setting up VMs and using MySQL on Azure VMs. &lt;/p&gt;

&lt;p&gt;The features provided by Azure Virtual Machine include:-&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create, start, stop, restart or terminate Virtual Machine instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implementation of Load Balancing and Auto Scaling for multiple Virtual Machines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides additional storage to Virtual Machine instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Manages Network Connectivity to Virtual Machine.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why MySQL?&lt;/strong&gt; - It is a relational database that stores and manages data. It is known for its speed, reliability and ease of use.&lt;br&gt;
MySQL is a good fit for running on Azure Virtual Machine because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Flexibility&lt;/em&gt;: it allows you to customize the VM to your specific requirements(choosing CPU, memory config, storage size).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Compatibility&lt;/em&gt;: it is compatible with a range of programming languages hence ease of integration in the environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Scalability&lt;/em&gt;: it can handle large data volumes thus easy to take advantage of automatic scaling, load balancing in Azure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Security&lt;/em&gt;: it has strong security features such as SSL encryption, user authentication, and access control; running it on Azure adds the benefits of Azure's security features.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  TABLE OF CONTENTS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create Virtual Machine&lt;/li&gt;
&lt;li&gt;Connect to Virtual Machine&lt;/li&gt;
&lt;li&gt;Install MySQL&lt;/li&gt;
&lt;li&gt;Connect to MySQL&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create Virtual Machine
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Search for Virtual Machine in the search bar and click on it; you should also see it under Azure Services.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MtxYrwP3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5qqbb6gtdolj2sbcfaml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MtxYrwP3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5qqbb6gtdolj2sbcfaml.png" alt="vm service" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once you click on it, you will see a &lt;strong&gt;Create&lt;/strong&gt; button with a drop-down; select Azure Virtual Machine.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CUJrWuQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqmldwykrfqb9n3ayfv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CUJrWuQH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqmldwykrfqb9n3ayfv8.png" alt="create" width="605" height="500"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the Basics tab, provide a &lt;em&gt;resource group&lt;/em&gt; to which your VM will belong; you can also create a new one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give your VM  a unique &lt;em&gt;name&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;em&gt;region&lt;/em&gt; where you'd like to deploy your VM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We'll go with the standard security type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the operating system you'd like under &lt;em&gt;image&lt;/em&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3VlyxfjC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lb58cbykr8qydtce4os.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3VlyxfjC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lb58cbykr8qydtce4os.png" alt="config1" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;em&gt;size&lt;/em&gt; determines how big the VM will be; be sure to select your desired size.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave the default &lt;em&gt;authentication type&lt;/em&gt; (SSH Public Key)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can use the default &lt;em&gt;username&lt;/em&gt; or provide a new one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;em&gt;SSH public key source&lt;/em&gt; we are going to generate a new key pair.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be sure to provide a &lt;em&gt;Key Pair Name&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once all is set click on &lt;strong&gt;Review and Create&lt;/strong&gt; and when the validation is passed click on &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3tqHrtMg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4agtrggpks122e50be9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3tqHrtMg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4agtrggpks122e50be9n.png" alt="config2" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There will be a &lt;em&gt;Generate new key pair&lt;/em&gt; pop up where you will download the key pair that we will use later on in connecting to the VM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q7uU0Pzy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i2kve1smnhjmdkzvnvlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q7uU0Pzy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i2kve1smnhjmdkzvnvlz.png" alt="Key Pair" width="577" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the deployment is complete click on &lt;strong&gt;Go to Resource&lt;/strong&gt; so as to see the newly created VM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---um21OZX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d0wuchiwup85ejn1ezv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---um21OZX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d0wuchiwup85ejn1ezv4.png" alt="deploy" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect to Virtual Machine
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;We are going to connect to the VM using Azure Cloud Shell, which lets you manage Azure resources from anywhere with an internet connection.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the top of VM page, click on the &lt;strong&gt;connect&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At the top right click on the first icon to open the cloudshell.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vkE78oIW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wylg01w352madnj859bm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vkE78oIW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wylg01w352madnj859bm.png" alt="Connect" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once you click connect, a new page will be opened that has the SSH guide to connect to the VM.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w5_3sJb_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ar4jp719qnqx2m8ej1aq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w5_3sJb_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ar4jp719qnqx2m8ej1aq.png" alt="SSH" width="800" height="711"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upon clicking the cloud shell icon, at the bottom of the page you'll see a section that requires mounting storage. Click on the &lt;em&gt;create&lt;/em&gt; button, which will open the shell terminal.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--riitpnD8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q16i877600v4wq1myp5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--riitpnD8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q16i877600v4wq1myp5e.png" alt="mount storage" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We need to upload our private key (which we downloaded earlier) to the cloud shell.&lt;br&gt;
Click the upload icon then upload the file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After it has uploaded you will see the notification at the bottom right of the page.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GBRlprg---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29e9jtzqpyzlamu2eh7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GBRlprg---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29e9jtzqpyzlamu2eh7p.png" alt="upload" width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To confirm the file has been uploaded run &lt;code&gt;ls&lt;/code&gt; and you will be sure to see your file name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To ensure the private key is read-only, run &lt;code&gt;chmod 400 &amp;lt;keyname&amp;gt;.pem&lt;/code&gt;.&lt;br&gt;
Replace &lt;em&gt;&amp;lt;keyname&amp;gt;&lt;/em&gt; with your actual file name.&lt;br&gt;
Confirm you have read-only access by running &lt;code&gt;ls -ltrh&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide the path to the private key. Run &lt;code&gt;pwd&lt;/code&gt; to get the path of the SSH private key, then run &lt;code&gt;ssh -i &amp;lt;private key path&amp;gt;/&amp;lt;filename&amp;gt;.pem azureuser@52.191.61.164&lt;/code&gt; &lt;br&gt;
Replace the private key path and file name with your own, and use your VM's public IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that you want to continue with the connection by typing &lt;em&gt;yes&lt;/em&gt;, and you will soon be connected!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can confirm you are connected to the VM by running &lt;code&gt;hostname&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
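&lt;p&gt;The key-permission step above can be sketched end to end with a dummy key file (&lt;code&gt;mykey.pem&lt;/code&gt; is an illustrative name; use the file you downloaded from Azure and your VM's real public IP):&lt;/p&gt;

```shell
# Sketch of the key-permission step using a dummy key file.
# "mykey.pem" is illustrative -- use the key downloaded from Azure.
touch mykey.pem
chmod 400 mykey.pem   # read-only for the owner, as SSH requires
ls -l mykey.pem       # permissions column should read -r--------

# The actual connection would then be (VM_PUBLIC_IP is a placeholder):
#   ssh -i ./mykey.pem azureuser@VM_PUBLIC_IP
```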

&lt;h2&gt;
  
  
  Install MySQL
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo apt-get install mysql-server&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mysql -V&lt;/code&gt; to confirm installation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Connect to MySQL
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;We are going to connect to the root user account since we have not created any user account&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sudo mysql&lt;/code&gt; to start the MySQL client with admin privileges.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create a new User
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Replace &lt;strong&gt;newuser&lt;/strong&gt; with the desired username for the new user and &lt;strong&gt;password&lt;/strong&gt; with a unique password.&lt;br&gt;
&lt;code&gt;@'localhost'&lt;/code&gt; specifies that the user account can only be used to connect to the MySQL server from the local machine (a recommended approach on Azure).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grant the new user necessary privileges to the databases and tables on the server &lt;code&gt;GRANT ALL PRIVILEGES ON * . * TO 'newuser'@'localhost';&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can now connect to the MySQL server using the new user account by running &lt;code&gt;mysql -u newuser -p -h hostname&lt;/code&gt;.&lt;br&gt;
Replace &lt;strong&gt;newuser&lt;/strong&gt; with the username of the MySQL user account created earlier and &lt;strong&gt;hostname&lt;/strong&gt; with the hostname or IP address of the machine running the MySQL server.&lt;br&gt;
It will prompt you to enter the password for the new user account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once connected you can create a new database &lt;code&gt;CREATE DATABASE dbname;&lt;/code&gt;&lt;br&gt;
Replace &lt;strong&gt;dbname&lt;/strong&gt; with your new database name.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
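&lt;p&gt;As a sketch, the statements above can be collected into a script file and fed to the server in one step (the file name &lt;code&gt;setup_user.sql&lt;/code&gt; and the credentials are illustrative):&lt;/p&gt;

```shell
# Sketch: collect the user-creation statements into a script file.
# "newuser", "password", and "setup_user.sql" are illustrative values.
cat > setup_user.sql <<'SQL'
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'localhost';
SQL

# On the VM you would then run it with admin privileges:
#   sudo mysql < setup_user.sql
cat setup_user.sql
```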

&lt;p&gt;&lt;strong&gt;N/B&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Remember to stop your Virtual Machine from running to avoid incurring extra costs!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Congratulations!! You have just created your own Virtual Machine and connected a MySQL server to it!&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>azure</category>
      <category>virtualmachine</category>
    </item>
    <item>
      <title>How to Build HTTP API using AWS Serverless Application Model (SAM), AWS Lambda and Node.js</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Mon, 24 Apr 2023 08:30:56 +0000</pubDate>
      <link>https://dev.to/jeptoo/how-to-build-http-api-using-aws-serverless-application-model-sam-aws-lambda-and-nodejs-1e8e</link>
      <guid>https://dev.to/jeptoo/how-to-build-http-api-using-aws-serverless-application-model-sam-aws-lambda-and-nodejs-1e8e</guid>
<description>&lt;p&gt;&lt;strong&gt;AWS Serverless Application Model (SAM)&lt;/strong&gt; provides shorthand syntax to specify serverless resources such as Lambda functions, API Gateway, and DynamoDB. The syntax is used to model the application you want to create in AWS using YAML.&lt;br&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt; is a serverless, event-driven service that runs your code on demand.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this article we are going to &lt;strong&gt;develop&lt;/strong&gt; an HTTP API locally with AWS SAM, &lt;strong&gt;deploy&lt;/strong&gt; it to AWS, and &lt;strong&gt;test&lt;/strong&gt; the deployed API.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create Lambda Function&lt;/li&gt;
&lt;li&gt;API Gateway&lt;/li&gt;
&lt;li&gt;HTTP API AWS SAM on VS Code&lt;/li&gt;
&lt;li&gt;Deploying to AWS&lt;/li&gt;
&lt;li&gt;Testing&lt;/li&gt;
&lt;li&gt;CloudWatch Logs&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create Lambda Function
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Search for Lambda in the search bar; on the left sidebar click on &lt;strong&gt;Functions&lt;/strong&gt;, then on Create function.&lt;/li&gt;
&lt;li&gt;Provide the needed details, i.e. the function name, runtime, and architecture, then click &lt;strong&gt;Create function&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Use image for reference&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1je6lrpgnhh3kobc5d1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1je6lrpgnhh3kobc5d1d.png" alt="create lambda"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  API Gateway
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway is a fully managed service that makes it easy for developers to create, deploy, and manage APIs at any scale.&lt;/li&gt;
&lt;li&gt;On the page, click on &lt;em&gt;API&lt;/em&gt; to pick the type of API you want to create (we are building an HTTP API).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Use image for reference&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4pqtuoe5mfaw3vj1130.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4pqtuoe5mfaw3vj1130.png" alt="build http"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once you click on &lt;em&gt;Build&lt;/em&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt; - Add an integration, which will be Lambda; select the AWS Region and the Lambda function you created earlier, then provide an API name.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w8rqq69axk051c7t62f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w8rqq69axk051c7t62f.png" alt="Intergration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt; - &lt;strong&gt;Configure routes&lt;/strong&gt;: select the &lt;strong&gt;GET&lt;/strong&gt; method, provide a resource path, and set the integration target to the Lambda function we created.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F327hup1ogb15u4z4v57d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F327hup1ogb15u4z4v57d.png" alt="config"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt; - A stage is the deployment environment where our API will be hosted. Leave it at the &lt;strong&gt;default&lt;/strong&gt; and ensure that &lt;strong&gt;auto-deploy&lt;/strong&gt; is enabled; it comes in handy.&lt;br&gt;
&lt;strong&gt;Step 4&lt;/strong&gt; - Review everything you have done and click on &lt;strong&gt;Create&lt;/strong&gt; if you are satisfied with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  HTTP API AWS SAM on VS Code
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Ensure you have an AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install &lt;strong&gt;AWS CLI&lt;/strong&gt; using this &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure your AWS credentials with &lt;code&gt;aws configure&lt;/code&gt;, which will prompt you for your AWS access key details.&lt;br&gt;
If you need to generate new keys, use this &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/prerequisites.html#prerequisites-configure-credentials" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the &lt;strong&gt;AWS SAM CLI&lt;/strong&gt; using this &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install Docker &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;here&lt;/a&gt; (for local testing).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install Node.js &lt;a href="https://nodejs.org/en" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install AWS Toolkit VS Code Extension.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the Thunder Client extension on VS Code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Create SAM project
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a folder and open it in VS Code. &lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate a default SAM application using &lt;code&gt;sam init&lt;/code&gt;. Pick the &lt;em&gt;AWS Quick Start application template&lt;/em&gt;, the &lt;em&gt;Hello World Example&lt;/em&gt; starter template, the &lt;em&gt;nodejs14.x&lt;/em&gt; runtime, the &lt;em&gt;zip&lt;/em&gt; package type, and &lt;em&gt;N&lt;/em&gt; for X-Ray tracing (to avoid extra charges); finally, give your SAM project a name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A folder with the SAM project name will be generated; let's discuss its contents:&lt;br&gt;
&lt;code&gt;events/&lt;/code&gt;: contains sample events that can be used for local testing of Lambda functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;hello-world/&lt;/code&gt;: contains an example AWS SAM app that is used to test that everything is working correctly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;.gitignore&lt;/code&gt;: used by git to determine which files and directories to ignore when committing changes to a repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;README.md&lt;/code&gt;: a markdown file that contains documentation for the project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;samconfig.toml&lt;/code&gt;: a configuration file used by the SAM CLI to specify options and settings for deploying the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;template.yaml&lt;/code&gt;: defines the resources and configuration for the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Template.yaml Code Explanation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Check out the full code in &lt;code&gt;template.yaml&lt;/code&gt; &lt;a href="https://github.com/IvyJeptoo/AWS-SAM-Nodejs/blob/master/template.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.
The &lt;strong&gt;Resources&lt;/strong&gt; section defines the resources that are going to be created on AWS.
We've kept the default &lt;code&gt;HelloWorldFunction&lt;/code&gt;, which has the serverless function type, and it declares three &lt;strong&gt;events&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;GetUsers&lt;/strong&gt;: gets all the users from the API.
It has type &lt;em&gt;HttpApi&lt;/em&gt;, a path of &lt;em&gt;/user&lt;/em&gt;, the &lt;em&gt;get&lt;/em&gt; method, and an ApiId with a ref of &lt;em&gt;HttpApi&lt;/em&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;The ApiId ref (HttpApi) is defined above the HelloWorldFunction; it has the serverless HttpApi type and a stage name of nonprod (instead of the default).&lt;/strong&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GetUser&lt;/strong&gt;:&lt;br&gt;
Gets a single user by id from the API.&lt;br&gt;
It has type &lt;em&gt;HttpApi&lt;/em&gt;, a path of &lt;em&gt;/user/{id}&lt;/em&gt;, the &lt;em&gt;get&lt;/em&gt; method, and an ApiId with a ref of &lt;em&gt;HttpApi&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PostUser&lt;/strong&gt;:&lt;br&gt;
Creates a new user.&lt;br&gt;
It has type &lt;em&gt;HttpApi&lt;/em&gt;, a path of &lt;em&gt;/user&lt;/em&gt;, the &lt;em&gt;post&lt;/em&gt; method, and an ApiId with a ref of &lt;em&gt;HttpApi&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
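&lt;p&gt;A rough sketch of how the three events above fit together in &lt;code&gt;template.yaml&lt;/code&gt; (property values follow the description in this article; the full file in the linked repository is the source of truth):&lt;/p&gt;

```yaml
Resources:
  HttpApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      StageName: nonprod

  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      Events:
        GetUsers:
          Type: HttpApi
          Properties:
            Path: /user
            Method: get
            ApiId: !Ref HttpApi
        GetUser:
          Type: HttpApi
          Properties:
            Path: /user/{id}
            Method: get
            ApiId: !Ref HttpApi
        PostUser:
          Type: HttpApi
          Properties:
            Path: /user
            Method: post
            ApiId: !Ref HttpApi
```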

&lt;h3&gt;
  
  
  app.js Code Explanation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Check out the full code in &lt;code&gt;app.js&lt;/code&gt; &lt;a href="https://github.com/IvyJeptoo/AWS-SAM-Nodejs/blob/master/hello-world/app.js" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;This file defines the Lambda handler, i.e. the code that will run when our Lambda is invoked.
&lt;code&gt;Response&lt;/code&gt; is the object we get back from the Lambda (the API response).
&lt;code&gt;const USERS&lt;/code&gt; is an array of objects containing the users; each user has a name and a userid property.
&lt;code&gt;lambdaHandler&lt;/code&gt; has event and context passed in; the HTTP request is injected into the event.
&lt;code&gt;console.log&lt;/code&gt; the event to see the HTTP request in the CloudWatch logs.
&lt;code&gt;result&lt;/code&gt; will be passed into the response object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;First condition&lt;/strong&gt;: uses the event object passed in earlier and its routeKey (which holds the method used).
Query parameters can be passed in for start and end (start defaults to 0 and end to the total array length).
&lt;strong&gt;Validation&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;first validation&lt;/em&gt; ensures that start is greater than or equal to 0.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;second validation&lt;/em&gt; ensures that end is less than or equal to the array length.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;third validation&lt;/em&gt; ensures that start is less than end.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Second condition&lt;/strong&gt;: the POST method and /user path, where we post new users to the user array.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third condition&lt;/strong&gt;: returns a single user, using the get method and a path of /user/{id}, with a personalized greeting.&lt;/li&gt;
&lt;/ul&gt;
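&lt;p&gt;The logic above can be sketched as follows (the names &lt;code&gt;USERS&lt;/code&gt; and &lt;code&gt;lambdaHandler&lt;/code&gt; follow the article; the sample users and exact code in the linked &lt;code&gt;app.js&lt;/code&gt; may differ):&lt;/p&gt;

```javascript
// Sketch of the handler described above; the repository's app.js is the source of truth.
const USERS = [
  { userid: 1, name: "Ann" },
  { userid: 2, name: "Ben" },
  { userid: 3, name: "Cate" },
];

const lambdaHandler = async (event) => {
  // Log the raw event; this is what shows up later in CloudWatch Logs.
  console.log("event:", JSON.stringify(event));

  if (event.routeKey === "GET /user") {
    // Optional ?start= and ?end= query parameters; defaults cover the whole array.
    const qs = event.queryStringParameters || {};
    const start = qs.start !== undefined ? Number(qs.start) : 0;
    const end = qs.end !== undefined ? Number(qs.end) : USERS.length;
    // The three validations: start >= 0, end <= array length, start < end.
    if (start < 0 || end > USERS.length || start >= end) {
      return { statusCode: 400, body: JSON.stringify({ message: "invalid range" }) };
    }
    return { statusCode: 200, body: JSON.stringify(USERS.slice(start, end)) };
  }

  if (event.routeKey === "POST /user") {
    const user = JSON.parse(event.body); // new user posted by the client
    USERS.push(user);
    return { statusCode: 201, body: JSON.stringify(user) };
  }

  if (event.routeKey === "GET /user/{id}") {
    const id = Number(event.pathParameters.id);
    const user = USERS.find((u) => u.userid === id);
    if (!user) {
      return { statusCode: 404, body: JSON.stringify({ message: "user not found" }) };
    }
    // Personalized greeting for a single user.
    return { statusCode: 200, body: JSON.stringify({ message: `Hello, ${user.name}!` }) };
  }

  return { statusCode: 404, body: JSON.stringify({ message: "route not found" }) };
};

module.exports = { lambdaHandler };
```

&lt;p&gt;You can exercise it locally with a hand-built event object before deploying, which is also what the Thunder Client requests later in this article end up sending through API Gateway.&lt;/p&gt;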

&lt;h2&gt;
  
  
  Deploying to AWS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensure you are in the directory that contains &lt;code&gt;template.yaml&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the &lt;code&gt;sam build&lt;/code&gt; command to build the Lambda function and generate a deployment package.&lt;br&gt;
An &lt;code&gt;.aws-sam&lt;/code&gt; folder will be generated on success.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;N.B.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Verify that your IAM user has the necessary permissions to perform the deployment. The required permissions are:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AWSLambdaFullAccess&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;IAMFullAccess&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AmazonAPIGatewayAdministrator&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AmazonS3FullAccess&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CloudFormationFullAccess&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudWatchLogsFullAccess&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;sam deploy --guided&lt;/code&gt; for step-by-step creation of a CloudFormation stack from the SAM template and Lambda function code.&lt;br&gt;
Provide a name for the app, use the default region, confirm the changes, allow IAM role creation, don't disable rollback, accept that the API has no authorization defined, save the arguments to the configuration file, and use the defaults for the rest.&lt;br&gt;
Confirm that you want to deploy the changes (it should succeed).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On your AWS account under &lt;strong&gt;API Gateway&lt;/strong&gt;, you will see the newly created HttpApi once you refresh the page.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;HttpApi overview&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3wg1m87wyqwao45nc12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3wg1m87wyqwao45nc12.png" alt="http"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under the &lt;strong&gt;routes&lt;/strong&gt; section we can also see the paths we created as so:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xuy6aga8ozg17d01e23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xuy6aga8ozg17d01e23.png" alt="Routes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;All the routes have a configuration integration(same Lambda) as so:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6xz47s4db1j1iomcamf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6xz47s4db1j1iomcamf.png" alt="config"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Stages, copy the Invoke URL, which we will use for testing.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdcvrb1lupmwz035mfp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdcvrb1lupmwz035mfp2.png" alt="url"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We will use the &lt;strong&gt;Thunder Client&lt;/strong&gt; extension to test our API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;GetUsers&lt;/em&gt;&lt;br&gt;
We make a GET request, and the response is all the users in our API.&lt;br&gt;
To test the query string, append the parameters to the path: &lt;code&gt;/user?start=1&amp;amp;end=3&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpurvvq0zlqz3bees521.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpurvvq0zlqz3bees521.png" alt="users"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;GetUser&lt;/em&gt;&lt;br&gt;
This is also a GET request, with an id in the path; the response is a personalized greeting with the user's name.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw4q5n9t95ltxteodflb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw4q5n9t95ltxteodflb.png" alt="user"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PostUser&lt;/em&gt;&lt;br&gt;
Using the POST method, it adds a new user to the user array.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pby6p5o7rcwz16utmpa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pby6p5o7rcwz16utmpa.png" alt="post user"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudWatch Logs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Earlier we printed the events to the logs in CloudWatch.&lt;/li&gt;
&lt;li&gt;In CloudWatch, go to Log groups, find the correct log group, and click on it.&lt;/li&gt;
&lt;li&gt;You will see an overview of it; at the bottom, the Log streams section lists all the calls made.&lt;/li&gt;
&lt;li&gt;Each log has START, END, and REPORT entries, like so:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjgnmtwd41p177x0lsfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjgnmtwd41p177x0lsfp.png" alt="LOGS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS Serverless Application Model (SAM), AWS Lambda and Node.js make building APIs a breeze. With the steps outlined in this article, you can get started with your own simple API endpoint in no time. If you found this article helpful, be sure to connect with me and leave your comments and feedback. Happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>cloudcomputing</category>
      <category>node</category>
    </item>
    <item>
      <title>How to attach a data disk to a Linux VM</title>
      <dc:creator>Ivy Jeptoo</dc:creator>
      <pubDate>Fri, 14 Apr 2023 11:42:19 +0000</pubDate>
      <link>https://dev.to/jeptoo/how-to-attach-a-data-disk-to-a-linux-vm-foi</link>
      <guid>https://dev.to/jeptoo/how-to-attach-a-data-disk-to-a-linux-vm-foi</guid>
<description>&lt;p&gt;Hello there,&lt;br&gt;
We learnt how to create a virtual machine (you can recap &lt;a href="https://dev.to/jeptoo/how-to-create-ec2-instance-ubuntu-2204-on-aws-and-connect-via-ssh-using-pem-492o"&gt;here&lt;/a&gt;), and today we are going to cover how to create a data disk and attach it to the virtual machine.&lt;/p&gt;
&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Attach Disk&lt;/li&gt;
&lt;li&gt;Find Disk&lt;/li&gt;
&lt;li&gt;Prepare New Disk&lt;/li&gt;
&lt;li&gt;Mount Disk&lt;/li&gt;
&lt;li&gt;Verify Disk&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Introduction.
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Data disks are separate storage units attached to VMs to increase storage capacity and performance without affecting the operating system or applications.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We will explore the step-by-step process of using the Azure portal to create and attach a data disk to a Linux VM, and then mount it so that it can be used for storing data or running applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open Virtual machines in the Azure portal and select the one you'll use, or create a new one if need be.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Use image for reference&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31s44t3f2oic4hvlg4ba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31s44t3f2oic4hvlg4ba.png" alt="vm"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Attach Disk.
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;On the virtual machine page, under &lt;strong&gt;Settings&lt;/strong&gt;, choose the &lt;strong&gt;Disks&lt;/strong&gt; option.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Attaching a new disk.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Create and attach a new disk&lt;/strong&gt; under the Data disks pane.&lt;/li&gt;
&lt;li&gt;Give your managed disk a name and configure the default settings.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Save&lt;/strong&gt; at the top of the page to save your new disk and update the VM configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Use image for reference&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy89zmtmefly55fmz5nri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy89zmtmefly55fmz5nri.png" alt="new disk"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Attaching an existing disk.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Attach existing disks&lt;/strong&gt; under the Data disks pane.&lt;/li&gt;
&lt;li&gt;From the drop-down, choose the disk you want to work with.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Save&lt;/strong&gt; at the top of the page to save your disk and update the VM configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Find Disk
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Make sure you are connected to your Virtual Machine&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Follow the steps from the image below to connect to your Virtual Machine.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjepam2klxpmvlnbimp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjepam2klxpmvlnbimp2.png" alt="connecting"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You are now connected to the VM, and we need to find the newly attached disk. Run the following command in the terminal:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;N.B.&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
 &lt;em&gt;&lt;strong&gt;&lt;code&gt;lsblk&lt;/code&gt;&lt;/strong&gt; - lists information about all available block devices, including disks and partitions.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;&lt;code&gt;-o NAME,HCTL,SIZE,MOUNTPOINT&lt;/code&gt;&lt;/strong&gt; - specifies the columns of information to display in the output. We are asking &lt;code&gt;lsblk&lt;/code&gt; to display the device name, the host, channel, target, and LUN numbers (HCTL), the size, and the mount point (if any) for each block device.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;&lt;code&gt;|&lt;/code&gt;&lt;/strong&gt; - a pipe symbol that redirects the output of the &lt;code&gt;lsblk&lt;/code&gt; command to the next command in the pipeline.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;&lt;code&gt;grep -i "sd"&lt;/code&gt;&lt;/strong&gt; - searches for lines in the output of &lt;code&gt;lsblk&lt;/code&gt; that contain the characters "sd" (case-insensitive).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;&lt;code&gt;"sd"&lt;/code&gt;&lt;/strong&gt; is typically used to indicate SCSI disks in Linux.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The expected output should be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sda     0:0:0:0      30G 
├─sda1             29.9G /
├─sda14               4M 
└─sda15             106M /boot/efi
sdb     0:0:0:1       4G 
└─sdb1                4G /mnt
sdc     1:0:0:0       4G 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Prepare new disk
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If you are using an existing disk that contains data, &lt;strong&gt;skip to mounting the disk&lt;/strong&gt;. The following instructions will delete the data on the disk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are using a new disk, you need to partition it, because:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You can &lt;strong&gt;organize data&lt;/strong&gt; into logical units, making it easier to manage and locate specific files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It allows you to &lt;strong&gt;isolate&lt;/strong&gt; certain data sets from others so if one partition is compromised, the other partitions remain safe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can improve performance by allowing you to separate frequently accessed files from those that are rarely accessed, hence &lt;strong&gt;optimizing disk usage&lt;/strong&gt; and reducing the time it takes to access data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To partition the disk, run the following commands; make sure to replace &lt;code&gt;sdc&lt;/code&gt; with the correct device for your disk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc
sudo partprobe /dev/sdc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Mount disk
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Once the file system is created, the disk needs to be mounted to a specific directory in the file system hierarchy to be accessible to users and applications.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a directory to mount the file system using &lt;code&gt;mkdir&lt;/code&gt;. The following example creates a directory at &lt;code&gt;/datadrive:&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo mkdir /datadrive&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mount the /dev/sdc1 partition to the /datadrive mount point:&lt;br&gt;
&lt;code&gt;sudo mount /dev/sdc1 /datadrive&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We need to find the UUID of the newly attached drive:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;sudo blkid&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The expected output:&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl40umjj3pe7j6o6a215s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl40umjj3pe7j6o6a215s.png" alt="UUID"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Verify disk
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;lsblk&lt;/code&gt; command again to see the disk and its mount point.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The expected output:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngmcy8o28rus2q7n4hgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngmcy8o28rus2q7n4hgb.png" alt="output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well done! You can see that the new disk is now mounted at /datadrive.&lt;/p&gt;

&lt;p&gt;These steps are crucial for ensuring a smooth and efficient computing experience, and taking the time to properly prepare and mount a new disk can save time and effort in the long run.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
