<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Souleymane. Tiendrebeogo</title>
    <description>The latest articles on DEV Community by Souleymane. Tiendrebeogo (@asksouley).</description>
    <link>https://dev.to/asksouley</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F720786%2F86ee408c-1489-48f1-bc24-89527b57d660.jpg</url>
      <title>DEV Community: Souleymane. Tiendrebeogo</title>
      <link>https://dev.to/asksouley</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/asksouley"/>
    <language>en</language>
    <item>
      <title>Install and configure AWS cli on Windows machine</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Tue, 26 Sep 2023 05:58:08 +0000</pubDate>
      <link>https://dev.to/asksouley/install-and-configure-aws-cli-on-windows-machine-1k64</link>
      <guid>https://dev.to/asksouley/install-and-configure-aws-cli-on-windows-machine-1k64</guid>
      <description>&lt;p&gt;There are 2 ways of installing aws cli on your Windows machine .&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The long and boring way
&lt;/h2&gt;

&lt;p&gt;Go to the official AWS page, download the .msi file, and follow the instructions (click, click, click), just as you would install any other software on your Windows machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The quick and easy way, through the CLI
&lt;/h2&gt;

&lt;p&gt;Make sure that you have Chocolatey installed. Click &lt;a href="https://dev.to/asksouley/install-chocolatey-on-your-window-machine-4eac"&gt;here&lt;/a&gt; to learn how to install it.&lt;br&gt;
     Launch PowerShell as administrator and run the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; choco install awscli 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you can check your AWS CLI version with the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; aws --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu0fcbe2qiq5t9gcx2n3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu0fcbe2qiq5t9gcx2n3.JPG" alt=" " width="526" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure your AWS CLI to connect to your AWS console
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your AWS account&lt;/li&gt;
&lt;li&gt;Head to IAM&lt;/li&gt;
&lt;li&gt;Head to Users&lt;/li&gt;
&lt;li&gt;Select a user&lt;/li&gt;
&lt;li&gt;Head to the Security credentials tab and create an access key. Make sure to keep it safe&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Now let's configure the CLI
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter the Access Key ID and Secret Access Key that you previously downloaded:&lt;br&gt;
        AWS Access Key ID [****************]&lt;br&gt;
        AWS Secret Access Key [****************]&lt;br&gt;
        Default region name [us-west-2]:&lt;br&gt;
        Default output format [json]: json&lt;br&gt;
 Now you are good to go. To make sure, let's run a quick check:&lt;br&gt;
      aws iam list-users&lt;/p&gt;

&lt;p&gt;If we get the same user information as shown in our AWS console, the configuration is successful and we can now interact with our AWS account from the CLI.&lt;/p&gt;
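&lt;p&gt;For context, "aws configure" simply writes two plain-text files in your home directory. Here is a sketch of what they end up containing (all values below are placeholders):&lt;/p&gt;

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxX

# ~/.aws/config
[default]
region = us-west-2
output = json
```

&lt;p&gt;Editing these files by hand has the same effect as re-running "aws configure".&lt;/p&gt;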

&lt;p&gt;Cheers!&lt;br&gt;
 Follow me on Twitter: &lt;a href="https://twitter.com/asksouley" rel="noopener noreferrer"&gt;@asksouley&lt;/a&gt;&lt;br&gt;
 Check out my website: &lt;a href="http://www.asksouley.com" rel="noopener noreferrer"&gt;www.asksouley.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cryptocurrency</category>
      <category>bitcoin</category>
      <category>discuss</category>
    </item>
    <item>
      <title>AWS Storage : Core Storage Portfolio. Part 1</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Tue, 26 Sep 2023 05:56:16 +0000</pubDate>
      <link>https://dev.to/asksouley/aws-storage-core-storage-portfolio-part-1-2gin</link>
      <guid>https://dev.to/asksouley/aws-storage-core-storage-portfolio-part-1-2gin</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2usti5i949xv5b4xpfr3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2usti5i949xv5b4xpfr3.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AWS storage lineup comprises fundamental storage services offered by Amazon Web Services. This array of AWS storage solutions empowers businesses to swiftly acquire storage capacity without the need to invest significant time, finances, or resources in purchasing costly storage hardware that could become outdated within a few years.&lt;/p&gt;

&lt;p&gt;AWS Storage Services encompass three primary storage categories:&lt;/p&gt;

&lt;p&gt;Block Storage&lt;br&gt;
File Storage&lt;br&gt;
Object Storage&lt;/p&gt;

&lt;p&gt;In the following paragraphs, I will provide simplified explanations for these three main storage types, which constitute the core storage services within AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Block Storage: (EBS)
&lt;/h2&gt;

&lt;p&gt;Much like DAS (Direct Attached Storage), Amazon's cloud-based storage, known as Elastic Block Storage (EBS), is well-suited for mission-critical applications and those that demand low-latency storage hosting. EBS, being a block storage solution, offers various classes within AWS:&lt;/p&gt;

&lt;p&gt;General Provisioned SSD: gp3 and gp2&lt;br&gt;
Provisioned IOPS SSD: Io1, Io2, and Io2 Block Express&lt;br&gt;
Throughput I-Optimized HDD: St1, Sc1&lt;br&gt;
Use Cases for EBS include:&lt;/p&gt;

&lt;p&gt;Running databases&lt;br&gt;
On-premises storage migration&lt;br&gt;
Scaling your big data engines&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F605ljotmni4ojnplz378.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F605ljotmni4ojnplz378.png" alt=" " width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  File Storage (EFS)
&lt;/h2&gt;

&lt;p&gt;Much like NAS (Network Attached Storage), Amazon's Elastic File System (EFS) serves as Amazon's solution for file storage and is particularly well-suited for data that needs to be shared. Alongside EFS, AWS offers several managed file systems in the Amazon FSx family:&lt;/p&gt;

&lt;p&gt;Amazon FSx for Lustre&lt;br&gt;
Amazon FSx for Windows File Server&lt;br&gt;
Amazon FSx for NetApp ONTAP&lt;br&gt;
Amazon FSx for OpenZFS&lt;br&gt;
Use Cases for EFS include:&lt;/p&gt;

&lt;p&gt;Streamlining DevOps processes&lt;br&gt;
Modernizing applications&lt;br&gt;
Improving content management systems&lt;br&gt;
Supporting data science initiatives&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feth4vh4zx534yyo1h65p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feth4vh4zx534yyo1h65p.png" alt=" " width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Object Storage (S3)
&lt;/h2&gt;

&lt;p&gt;Amazon's Simple Storage Service (S3) serves as Amazon's offering in the realm of object storage. Renowned for its cost-effectiveness, S3 proves to be the ideal destination for storing items such as snapshots or backups. The various tiers of AWS S3 encompass:&lt;/p&gt;

&lt;p&gt;S3 Standard&lt;br&gt;
S3 Standard-IA (Infrequent Access)&lt;br&gt;
S3 Intelligent-Tiering&lt;br&gt;
S3 One Zone-IA&lt;br&gt;
Additionally, AWS Glacier provides options for:&lt;/p&gt;

&lt;p&gt;Instant Retrieval&lt;br&gt;
Flexible Retrieval&lt;br&gt;
Deep Archive&lt;br&gt;
Use Cases for S3 and Glacier include:&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Lake
&lt;/h2&gt;

&lt;p&gt;S3 is the optimal storage choice for running big data analytics, machine learning, and high-performance computing. Learn more &lt;a href="https://aws.amazon.com/big-data/datalakes-and-analytics/datalakes/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Backup and Restore
&lt;/h2&gt;

&lt;p&gt;Ensure compliance with your Service Level Objectives (SLO) by meeting your Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO).  Learn more &lt;a href="https://aws.amazon.com/backup-restore/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Low-Cost Data Archival
&lt;/h2&gt;

&lt;p&gt;AWS's different tiers, including Glacier, enable companies to securely store their cold data at the lowest possible cost. Learn more &lt;a href="https://aws.amazon.com/s3/storage-classes/glacier/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud-Native Applications
&lt;/h2&gt;

&lt;p&gt;Expedite the development and launch of highly scalable cloud or web-based native applications. Learn more &lt;a href="https://aws.amazon.com/products/storage/object-storage-for-cloud-native-applications/?pg=ln&amp;amp;sec=uc" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqldgi3vhs7c7d9m5ibg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqldgi3vhs7c7d9m5ibg.png" alt=" " width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next post, I will dive deeper into each of the AWS core storage services.&lt;/p&gt;

&lt;p&gt;Let's link up on Twitter: &lt;a class="mentioned-user" href="https://dev.to/asksouley"&gt;@asksouley&lt;/a&gt;&lt;br&gt;
 YouTube: @BlocksandObjects&lt;/p&gt;

</description>
      <category>aws</category>
      <category>storage</category>
      <category>cloudskills</category>
      <category>s3</category>
    </item>
    <item>
      <title>Cloud Migration Strategies: The 6 "R". Part 1/6</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Fri, 17 Mar 2023 00:28:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/cloud-migration-strategies-the-6-r-part-16-3pgm</link>
      <guid>https://dev.to/aws-builders/cloud-migration-strategies-the-6-r-part-16-3pgm</guid>
      <description>&lt;p&gt;With the boom on the field of cloud computing, there is more and more need for engineers capable of helping companies make their transition to the cloud smooth. &lt;br&gt;
Being able to assist an organization  to make safely make their transition to the cloud is a valuable skillset  to have. In this post, I will  go over the 6R of Cloud Migration Strategy.&lt;/p&gt;

&lt;p&gt;It is important to mention that not all applications are optimized to run on the cloud. After careful evaluation, if you think that your application is still better off on the cloud, take time to understand which cloud migration strategy fits your case best.&lt;/p&gt;

&lt;p&gt;In most blog posts, you will see the 6 R's starting with "Refactor" and ending with "Retire". However, in this post I deliberately decided to go from the migration strategy that requires the &lt;strong&gt;least&lt;/strong&gt; effort to the one that requires the &lt;strong&gt;most&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Retire&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Retain&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Repurchase&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rehost&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Replatform&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Refactor&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Retire&lt;/strong&gt;: I will coin this as &lt;em&gt;"Drop and Go"&lt;/em&gt;. In this option, applications that have reached end of life (EOL) are simply retired or decommissioned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retain&lt;/strong&gt;: Also known as &lt;em&gt;"Re-visit"&lt;/em&gt;.&lt;br&gt;
In this case, some applications are simply excluded from the migration to be revisited at a later time. The reasons may include latency, compliance/regulatory constraints, or simply cost. In this case you will be running a hybrid environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rehost&lt;/strong&gt;: Mainly referred to as &lt;em&gt;Lift and Shift&lt;/em&gt;.&lt;br&gt;
 This is the most basic cloud migration strategy. It consists of moving your application or data without going through code-level changes. It is the preferred method for most organizations, as it carries the least amount of risk or complication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repurchase&lt;/strong&gt;: This is also known as &lt;em&gt;"Drop and Shop"&lt;/em&gt;.&lt;br&gt;
In this case, on-prem or legacy applications are simply dropped in favor of cloud-native vendor software packages. It is like moving from a proprietary application to SaaS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replatform&lt;/strong&gt;: This is also known as the &lt;em&gt;Lift, Tinker, and Shift&lt;/em&gt; method.&lt;br&gt;
 In this method, candidate applications are packaged, tweaked a little bit, and moved to the cloud via tools like AWS Database Migration Service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refactor&lt;/strong&gt;: I will coin this as &lt;em&gt;"Go from scratch"&lt;/em&gt;.&lt;br&gt;
In this case, an application is rewritten and re-architected from scratch to make it cloud native. It is an expensive and time-consuming method that requires a team of qualified cloud engineers. Most companies go that route if their existing applications are not cloud compatible.&lt;/p&gt;

&lt;p&gt;In the following series, I will have a full blog post on each of the 6 R's mentioned here.&lt;/p&gt;

&lt;p&gt;Let me know if you find this post useful.&lt;/p&gt;

&lt;p&gt;Let's link up on Twitter: &lt;a href="https://twitter.com/asksouley" rel="noopener noreferrer"&gt;@asksouley&lt;/a&gt;&lt;br&gt;
 or visit my site. &lt;a href="http://www.asksouley.com" rel="noopener noreferrer"&gt;www.asksouley.com&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>cloudskills</category>
      <category>aws</category>
      <category>cloudmigration</category>
      <category>devops</category>
    </item>
    <item>
      <title>Install Chocolatey on your Windows machine</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Fri, 03 Mar 2023 05:30:08 +0000</pubDate>
      <link>https://dev.to/asksouley/install-chocolatey-on-your-window-machine-4eac</link>
      <guid>https://dev.to/asksouley/install-chocolatey-on-your-window-machine-4eac</guid>
      <description>&lt;p&gt;"Chocolatey is a machine-level, command-line package manager and installer for software on Microsoft Windows. It uses the NuGet packaging infrastructure and Windows PowerShell to simplify the process of downloading and installing software"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://chocolatey.org/install" rel="noopener noreferrer"&gt;https://chocolatey.org/install&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, head to your Windows search bar, launch PowerShell as administrator, and then run the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Get-ExecutitionPolicy 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it returns "Restricted", then follow up with one of the two commands below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Set-ExecytionPolicy AllSigned or  Set-ExecutionPolicy Bypass -Scope Process
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you are ready to install Chocolatey by running the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, run the command below to check the Chocolatey version&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;choco 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see the Chocolatey version:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdh7bysn87oiwheyhkxn0.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdh7bysn87oiwheyhkxn0.JPG" alt=" " width="551" height="77"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Adding an EBS volume to a running AWS EC2 Instance</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Thu, 23 Feb 2023 05:34:18 +0000</pubDate>
      <link>https://dev.to/aws-builders/adding-an-ebs-volume-to-a-running-aws-ec2-instance-311l</link>
      <guid>https://dev.to/aws-builders/adding-an-ebs-volume-to-a-running-aws-ec2-instance-311l</guid>
      <description>&lt;p&gt;When creating an EC2 instance on AWS, you are given the option to select the size of your storage. However you may run into a scenario where you need more storage than what you have provisioned . In that case. you can either increase the size of the storage or attach an extra EBS volume to your instance . This short post is about showing how to add an extra volume to an EC2 instance.&lt;/p&gt;

&lt;p&gt;I assume that you already know how to create an EBS volume.&lt;/p&gt;

&lt;p&gt;Spin up your instance and attach the newly created volume by following the instructions below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the new volume&lt;/li&gt;
&lt;li&gt;Under the "Actions" menu, select "Attach volume"&lt;/li&gt;
&lt;li&gt;Select the EC2 instance you created&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under "Device name", keep /dev/sdf&lt;br&gt;
(as a best practice in Linux, /dev/sda1 is for the root volume and /dev/sd[f-p] are for data volumes)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select "Attach volume"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfkj2a7b5rdv37ppim3r.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfkj2a7b5rdv37ppim3r.JPG" alt=" " width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After attaching the volume, there are a few extra steps to make sure that the volume is accessible.&lt;/p&gt;

&lt;p&gt;SSH into your EC2 and run the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; df -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will only see the root volume, not the newly attached volume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedgvl0i3avlv82vbji78.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedgvl0i3avlv82vbji78.JPG" alt=" " width="549" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To "SEE" the newly mounted volume, run the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsblk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you will see the newly attached volume listed among the block devices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26s84l7bxc7lo0ckoa4x.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26s84l7bxc7lo0ckoa4x.JPG" alt=" " width="567" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to create a file system on the new volume and mount it. Run the commands below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sudo mkfs -t xfs /dev/xvdf
 sudo mkdir  /data 
 sudo mount /dev/xvdf /data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, when we run the "df -h" command, we should be able to see the new volume alongside the root volume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0fzppxtogwh1fdxn9zn.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0fzppxtogwh1fdxn9zn.JPG" alt=" " width="736" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope you like it.&lt;br&gt;
  Website: &lt;a href="http://www.asksouley.com" rel="noopener noreferrer"&gt;www.asksouley.com&lt;/a&gt;&lt;br&gt;
  Twitter: &lt;a class="mentioned-user" href="https://dev.to/asksouley"&gt;@asksouley&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>storage</category>
      <category>cloudskills</category>
      <category>blockstorage</category>
    </item>
    <item>
      <title>What is Ansible?</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Wed, 15 Feb 2023 05:48:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/what-is-ansible-1bbf</link>
      <guid>https://dev.to/aws-builders/what-is-ansible-1bbf</guid>
      <description>&lt;p&gt;Ansible is an opensource IT automation tool that automates provisioning, configuration management, application deployment and orchestration. It was created by RedHat and then handled to the community via opensource.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use Ansible?
&lt;/h2&gt;

&lt;p&gt;Assume that you are a system administrator for a company and you are in charge of maintaining an infrastructure consisting of a few web and database servers. You will be fine, as maintaining and updating a few servers manually is an easy task.&lt;/p&gt;

&lt;p&gt;But in a situation where your company has hundreds of servers, everything changes. Maintaining and updating hundreds of servers manually, one by one, would be a daunting task and an inefficient use of the company's time.&lt;/p&gt;

&lt;p&gt;That is when a tool like Ansible comes into play. Ansible allows you to configure and update your servers exactly how you want them to be through a few lines of YAML, an approach referred to as Infrastructure as Code (IaC).&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does Ansible Work?
&lt;/h2&gt;

&lt;p&gt;Traditionally, in order to manage servers remotely, you needed to have an agent installed, hence the master/slave concept. The only way the master could get information from the slaves was through the agents installed on the target nodes.&lt;/p&gt;

&lt;p&gt;That is called pull configuration, and it is how tools like Chef and Puppet do configuration management.&lt;br&gt;
But Ansible is different, and that is what makes it so attractive to a lot of sysadmins.&lt;/p&gt;

&lt;p&gt;Ansible is agentless. Unlike with pull configuration, there is no need to install an agent on the target nodes. Ansible uses what is called push configuration: you push the desired configuration to your target nodes and force them to be configured as you wish.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting Up Ansible
&lt;/h2&gt;

&lt;p&gt;Set up your main machine and install Ansible on it. In the next blog I will quickly show how to install Ansible. There is no need to have Ansible installed on your target machines.&lt;/p&gt;

&lt;p&gt;In order for your Ansible instance to push the desired configuration, the only requirement is an SSH connection.&lt;/p&gt;

&lt;p&gt;The configuration is pushed through modules, driven by a set of instructions called a "playbook" that lives on your Ansible control node, which, as mentioned earlier, can be a local machine, a VM, or a cloud instance like an AWS EC2. In addition, your Ansible node holds the list of target machines in what is called an "inventory".&lt;/p&gt;
&lt;h2&gt;
  
  
  Playbooks
&lt;/h2&gt;

&lt;p&gt;This is where we store the set of instructions used to configure the target nodes, and they are written in YAML.&lt;br&gt;
Below is a basic playbook to install Apache on a target server. In the next blogs I will dive deeper into the structure of an Ansible playbook by explaining it line by line. And if you are a little confused about the syntax, there are tools out there like &lt;a href="https://yamlchecker.com/" rel="noopener noreferrer"&gt;yamlchecker&lt;/a&gt; that can help check the validity of your playbook.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wxo55inwsudklvt4udc.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wxo55inwsudklvt4udc.JPG" alt=" " width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4vz6jszbl2r78nl9f1e.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4vz6jszbl2r78nl9f1e.JPG" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
  - name:  is https installed?
    hosts: all
    task:
      - name: Install apache
        apt:
            name: apache2 
            state: latest
        become: yes
      - name: index.html
        copy:
          content: "this is an ansible playbook"
          dest: /var/www/html/index.html
    become: yes
  - name: restart apache2
    service:
            name: apache2
            state: restarted
            enabled: yes
    become: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Inventory
&lt;/h2&gt;

&lt;p&gt;This is where we maintain the structure of the network environment. Below is a simple example of an inventory file for web servers. It lists the IP addresses of the target servers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"webservers: {
      "hosts:[
            " 192.168.0.1
              "192.168.0.10
]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
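&lt;p&gt;With a playbook and an inventory in place, a typical way to run them looks like this (a sketch; the file names playbook.yml and inventory are placeholders for whatever you named your files):&lt;/p&gt;

```
# Check the playbook for syntax errors without touching any host
ansible-playbook -i inventory playbook.yml --syntax-check

# Run the playbook against every host listed in the inventory
ansible-playbook -i inventory playbook.yml
```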



&lt;p&gt;In the next blog, I will show you how to spin up AWS EC2 servers, install Ansible on the main server, and deploy a basic web server.&lt;/p&gt;

&lt;p&gt;If you like my content, follow me on Twitter at &lt;a href="https://twitter.com/asksouley" rel="noopener noreferrer"&gt;@asksouley&lt;/a&gt;&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
    <item>
      <title>Solved : centOS not getting IP address</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Sun, 05 Feb 2023 08:11:31 +0000</pubDate>
      <link>https://dev.to/asksouley/solved-centos-not-getting-ip-address-3822</link>
      <guid>https://dev.to/asksouley/solved-centos-not-getting-ip-address-3822</guid>
<description>&lt;p&gt;This is for those who have recently installed CentOS on either VMware Workstation or Oracle's VirtualBox and are having connectivity issues. In this short blog post I will show you how to quickly fix that.&lt;br&gt;
Make sure that your VM is running, then head to Settings, open the Network tab, and make sure that Bridged Adapter is selected under your main network adapter, as shown in the images below &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbozp4fd8mngolge79kf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbozp4fd8mngolge79kf.png" alt=" " width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ftznuuvwgohkp7rerlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ftznuuvwgohkp7rerlc.png" alt=" " width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open nmtui with the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nmtui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will get a text-based interface; from there, just proceed and activate the connection. Use the Tab key to navigate to "Activate a connection"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gysr1vmh5nlozw3kto5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gysr1vmh5nlozw3kto5.png" alt=" " width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn8uxawek0pder4ebfrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn8uxawek0pder4ebfrg.png" alt=" " width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94xrxg0gbum3vu0zclu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94xrxg0gbum3vu0zclu4.png" alt=" " width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;
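&lt;p&gt;Once the connection is activated, you can confirm that the VM received an IP address with the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip addr show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;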

</description>
      <category>centos</category>
      <category>redhat</category>
      <category>virtualmachines</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS Storage : What Happened in 2022</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Sat, 31 Dec 2022 19:46:19 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-storage-what-happened-in-2022-1ip9</link>
      <guid>https://dev.to/aws-builders/aws-storage-what-happened-in-2022-1ip9</guid>
<description>&lt;p&gt;Here in this post, I will be sharing with you all the new features that the AWS Storage team shipped in 2022. Some of them were based on customer feedback, while others were simply part of AWS's consistent push for performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon EFS Elastic Throughput
&lt;/h2&gt;

&lt;p&gt;AWS EFS got even better with a new throughput option. On top of the existing throughput modes (Bursting Throughput and Provisioned Throughput) that we are already familiar with, we now have Elastic Throughput, which is suitable for spiky or unpredictable workloads whose performance requirements are tricky to predict, or for applications that drive throughput at 5% or less of the peak throughput on average (the average-to-peak ratio). Elastic Throughput can drive up to 3 GiBps for read operations and 1 GiBps for write operations per file system, in all AWS Regions.&lt;/p&gt;
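&lt;p&gt;As a rough sketch, switching an existing file system over should be a one-liner from the CLI (the file system ID below is a placeholder)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws efs update-file-system --file-system-id fs-0123456789abcdef0 --throughput-mode elastic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;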

&lt;h2&gt;
  
  
  Lower latency for EFS.
&lt;/h2&gt;

&lt;p&gt;EFS can now deliver up to 60% lower read latency when working with frequently accessed data and metadata, and up to 40% lower write latency when working with small files (&amp;lt;64 KB).&lt;/p&gt;

&lt;p&gt;For example, in a Region like N. Virginia, read latencies are as low as 0.25 milliseconds for frequently accessed data, and write latencies are as low as 1.6 milliseconds for EFS One Zone (and 2.7 milliseconds for EFS Standard).&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon File Cache now generally available.
&lt;/h2&gt;

&lt;p&gt;A fully managed, scalable, high-speed cache, Amazon File Cache allows for processing file data stored in disparate locations, including on premises. File Cache accelerates and simplifies cloud bursting and hybrid workflows, including media and entertainment, financial services, health and life sciences, microprocessor design, manufacturing, weather forecasting, and energy. This allows companies running hybrid infrastructure to be fully efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon S3 Glacier retrieval time.
&lt;/h2&gt;

&lt;p&gt;Restore throughput on Amazon S3 Glacier has now been improved by up to 10x when retrieving large volumes of archived data.&lt;br&gt;
This improvement allows your applications to initiate restore requests from S3 Glacier at a much faster rate, significantly reducing the restore completion time for datasets composed of small objects. In addition, with S3 Batch Operations, you can now automatically initiate requests at a faster rate, allowing you to restore billions of objects containing petabytes of data with just a few clicks in the S3 console, or with a single API request.&lt;br&gt;
The retrieval performance benefit scales with the number of restored objects and reduces data retrieval completion times by up to 90%.&lt;br&gt;
Now companies can save money on storage costs by utilizing cold storage without taking a hit on retrieval speed.&lt;/p&gt;
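&lt;p&gt;For reference, a single-object restore can be initiated from the CLI roughly like this (bucket name, key, days and tier are placeholders)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api restore-object --bucket my-archive-bucket --key my-object \
    --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;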

&lt;h2&gt;
  
  
  S3 Access Points now support cross-account access.
&lt;/h2&gt;

&lt;p&gt;Amazon S3 Access Points simplify data access for any AWS service or customer application that stores data in S3 buckets. With S3 Access Points, you create unique access control policies for each access point to more easily control access to shared datasets. Now, bucket owners are able to authorize access via access points created in other accounts. In doing so, bucket owners always retain ultimate control over data access, but can delegate responsibility for more specific IAM-based access control decisions to the access point owner. This allows you to securely and easily share datasets with thousands of applications and users, and at no additional cost.&lt;/p&gt;
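&lt;p&gt;As a sketch of the cross-account flow, the access point owner would create an access point against a bucket owned by another account roughly like this (the account IDs and names are placeholders, and the --bucket-account-id parameter is taken from my reading of the docs)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3control create-access-point --account-id 111122223333 \
    --name shared-data-ap --bucket amzn-example-bucket \
    --bucket-account-id 444455556666
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;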

&lt;h2&gt;
  
  
  S3 Storage Lens now has 34 new metrics.
&lt;/h2&gt;

&lt;p&gt;Amazon S3 Storage Lens is a cloud storage analytics feature that delivers organization-wide visibility into object storage usage and activity.  Now 34 additional metrics have been added to uncover deeper cost optimization opportunities, identify data protection best practices, and improve the performance of application workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon MSK Tiered Storage.
&lt;/h2&gt;

&lt;p&gt;Amazon Managed Streaming for Apache Kafka (MSK) now offers Tiered Storage, which brings a virtually unlimited and low-cost storage tier. Tiered Storage lets you store and process data using the same Kafka APIs and clients, while cutting your storage costs by 50% or more over existing MSK storage options.&lt;br&gt;
Tiered Storage makes it easy and cost-effective to keep a longer safety buffer for handling unexpected processing delays or building new stream processing applications, and it makes it possible to scale your compute and storage independently, simplifying operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Security Lake
&lt;/h2&gt;

&lt;p&gt;AWS announced the preview release of Amazon Security Lake, a purpose-built service that automatically centralizes an organization’s security data from cloud and on-premises sources into a purpose-built data lake stored in your account.&lt;br&gt;
Amazon Security Lake automates the central management of security data, normalizing data from integrated AWS services and third-party services, managing the lifecycle of data with customizable retention, and automating storage tiering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Region Access Point failover controls
&lt;/h2&gt;

&lt;p&gt;Amazon S3 Multi-Region Access Points failover controls let you shift S3 data access request traffic routed through an Amazon S3 Multi-Region Access Point to an alternate AWS Region within minutes to test and build highly available applications. &lt;/p&gt;

&lt;p&gt;With S3 Multi-Region Access Points failover controls, you can operate S3 Multi-Region Access Points in an active-passive configuration, where you designate an active AWS Region to serve all S3 requests and a passive AWS Region that is only routed to when it is made active during a planned or unplanned failover. This makes it easy to shift S3 data access request traffic from an active AWS Region to a passive AWS Region, typically within 2 minutes, to test application resiliency and perform disaster recovery simulations. &lt;/p&gt;

&lt;h2&gt;
  
  
  EBS Rule Lock for Recycle Bin
&lt;/h2&gt;

&lt;p&gt;Now with EBS, you can set up a Rule Lock for Recycle Bin, letting customers lock their Region-level retention rules to prevent them from being unintentionally modified or deleted. This new setting adds an additional layer of protection, helping customers recover their EBS Snapshots and EC2 AMIs in case of inadvertent or malicious deletions.&lt;/p&gt;
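&lt;p&gt;If I read the docs right, locking an existing retention rule looks roughly like this from the CLI (the rule identifier and unlock delay are placeholders)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws rbin lock-rule --identifier abcdef01example \
    --lock-configuration 'UnlockDelay={UnlockDelayValue=7,UnlockDelayUnit=DAYS}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;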

&lt;p&gt;I will be diving deeper into each of these releases next year. Until then, Happy New Year, and stay safe.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>frontend</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Installing Ansible on AWS EC2 Instance</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Sun, 25 Dec 2022 22:05:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/installing-ansible-on-aws-ec2-instance-fj9</link>
      <guid>https://dev.to/aws-builders/installing-ansible-on-aws-ec2-instance-fj9</guid>
      <description>&lt;p&gt;Here in this blog post I will show you how to install Ansible on an AWS EC2 instance.&lt;/p&gt;

&lt;p&gt;Below are the prerequisites&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Have an AWS account&lt;/li&gt;
&lt;li&gt;Spin up an EC2 instance and connect to it via SSH or directly from your AWS console.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For the purpose of this post, I created an AWS Ubuntu instance.&lt;/p&gt;

&lt;p&gt;After connecting to your instance, run the commands below. This will make sure that your instance is up to date and that you have access to the Ansible repos for the install.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update 
sudo apt-add-repository -y ppa:ansible/ansible

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the steps above are done, you can get started with the real deal by installing Ansible with the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get -y install ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, Ansible will be installed on your instance. You can now check the version with the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will get the version of ansible installed along with the path to the configuration file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvlzo810iw1dytaa3yro.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvlzo810iw1dytaa3yro.JPG" alt="asksouleyAnsibleVerion" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the main goal of using Ansible is to automate our resource provisioning process, we will need to have boto3 installed. To learn more about boto3, check my other blog post here&lt;/p&gt;

&lt;p&gt;To install boto3, just make sure that your instance is up to date one more time, then run the commands below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install python3-boto3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just to make sure, you can also check that boto3 is installed by running one of the commands below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Method 1: pip show boto3
Method 2: pip list
Method 3: pip list boto | greb boto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me know if you like it. I tried to make it as short as possible. Next will be how to automate the creation of EC2 instances using Ansible. &lt;/p&gt;

&lt;p&gt;Like it? &lt;br&gt;
Follow me on &lt;a href="https://twitter.com/asksouley" rel="noopener noreferrer"&gt;@asksouley&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ansible</category>
      <category>devops</category>
      <category>cloudskills</category>
    </item>
    <item>
      <title>Interact with AWS S3 using boto3, AWS's SDK for Python . Part 1</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Tue, 08 Nov 2022 14:24:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/interact-with-aws-s3-using-boto3-awss-sdk-for-python-part-1-5605</link>
      <guid>https://dev.to/aws-builders/interact-with-aws-s3-using-boto3-awss-sdk-for-python-part-1-5605</guid>
<description>&lt;p&gt;Boto3 is the Amazon AWS SDK (Software Development Kit) for Python. Using boto3, it is possible to create, update, and delete resources on AWS S3 and EC2 by writing the appropriate Python scripts. Here in this blog post, I will be focusing on Amazon Simple Storage Service (S3) and show how to create an S3 bucket. Boto3 is fully maintained and published by Amazon Web Services.&lt;br&gt;
To learn more about Boto3, please check the link &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Below are the prerequisites&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;   Have an AWS account&lt;/li&gt;
&lt;li&gt;   Launch an EC2 instance and SSH into it.&lt;/li&gt;
&lt;li&gt;   Install Python3&lt;/li&gt;
&lt;li&gt;   Install pip&lt;/li&gt;
&lt;li&gt;   Install Boto3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Installing Python&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo amazon-linux-extras install python3.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Installing pip&lt;/strong&gt;&lt;br&gt;
Once you SSH into your EC2 instance, run the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum -y install python-pip

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is no need to install the AWS CLI here since it already comes preinstalled on Amazon Linux 2 EC2 instances.&lt;br&gt;
You can check its version by running the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --version 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Installing Boto3&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install boto3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have boto3 installed, you can also check the version by running the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have created the appropriate role and attached it to the instance, you should be able to make calls based on the permissions that you passed to the instance via the IAM role.&lt;/p&gt;

&lt;p&gt;Create an IAM role and attach it to your EC2 instance. On that role, you specify the permissions needed to complete your tasks, which here is to interact with AWS S3. I granted full access control to that role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a new role&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your instance &lt;/li&gt;
&lt;li&gt;Click on the drop-down arrow, select Security, then Modify IAM Role, and choose the new IAM role you created&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here I have created a new role and granted it full access to S3, so now I should be able to get some responses back from my AWS resources.&lt;/p&gt;
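&lt;p&gt;If you prefer the CLI over the console for this step, attaching the AWS-managed full-access policy to a role looks like this (the role name is a placeholder)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam attach-role-policy --role-name MyS3AccessRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;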

&lt;p&gt;Run the following command to list all the buckets you have in your account&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya27fbgk3on0gii7soyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya27fbgk3on0gii7soyq.png" alt=" " width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a new bucket named volt123zzzz
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mb s3://volt123zzzz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio9bd89q3i5ogqnncmvt.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio9bd89q3i5ogqnncmvt.JPG" alt="bucketBoto3" width="800" height="422"&gt;&lt;/a&gt;&lt;br&gt;
As you can see, a new bucket named volt123zzzz was created.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remove the bucket named volt123zzzz
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 rb s3://volt123zzzz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once run, the newly created bucket (volt123zzzz) will be removed.&lt;/p&gt;
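&lt;p&gt;Since this post is about boto3, here is a minimal sketch of the same list, create and remove flow driven from Python instead of the CLI (the bucket name is illustrative and must be globally unique)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 - &lt;&lt;'EOF'
import boto3

s3 = boto3.client("s3")

# List the buckets in the account
print([b["Name"] for b in s3.list_buckets()["Buckets"]])

# Create a bucket, then remove it (delete_bucket only works on empty buckets)
s3.create_bucket(Bucket="volt123zzzz")
s3.delete_bucket(Bucket="volt123zzzz")
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;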

&lt;p&gt;In later posts, I will dive deeper and show more functionality of AWS and boto3.&lt;/p&gt;

&lt;p&gt;Let's link up on Twitter: &lt;a href="https://twitter.com/asksouley" rel="noopener noreferrer"&gt;@asksouley&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>storage</category>
      <category>boto3</category>
    </item>
    <item>
      <title>Storage Performance: What are Latency, IOPS and Throughput and why should I care ?</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Wed, 14 Sep 2022 03:48:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/storage-performance-what-are-latency-iops-and-throughput-and-why-should-i-care--4k2b</link>
      <guid>https://dev.to/aws-builders/storage-performance-what-are-latency-iops-and-throughput-and-why-should-i-care--4k2b</guid>
      <description>&lt;p&gt;When it comes to storage efficiency and performance, some of the basic key terms to understand are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Latency&lt;/li&gt;
&lt;li&gt;IOPS &lt;/li&gt;
&lt;li&gt;Throughput&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Having a full grasp of these key terms will help you pick the best storage for your application.&lt;/p&gt;

&lt;p&gt;In addition to picking the right volume type for your application, it is also important to confirm that the volume is compatible with the instance type you plan on using.&lt;/p&gt;

&lt;p&gt;I know it seems like a lot to figure out, but thankfully AWS Compute Optimizer can help alleviate that burden by identifying optimal configurations using machine learning.&lt;/p&gt;

&lt;p&gt;To learn more about  instance and volume compatibility, check the resource that AWS has put together &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html" rel="noopener noreferrer"&gt;here &lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Latency
&lt;/h2&gt;

&lt;p&gt;Latency is the true end-to-end client time of an I/O operation; in other words, it is the time elapsed between sending an I/O to EBS and receiving an acknowledgement from EBS that the I/O read or write is complete.&lt;br&gt;
It is measured in units of time, and the lower the latency, the faster the transaction.&lt;/p&gt;

&lt;p&gt;It is worth mentioning that several other factors can affect latency as well, among them block size and queue length. The queue length is the number of pending transaction requests. It can also be controlled by assigning a value to it; that value determines how many I/O operations can line up.&lt;/p&gt;

&lt;h2&gt;
  
  
  IOPS
&lt;/h2&gt;

&lt;p&gt;IOPS is Input/Output Operations Per Second; in other terms, it is the measurement of the number of read and write operations that can be carried out in one second, and it is used to assess the performance of a storage device. SSDs are the best type of storage when higher IOPS is needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Throughput
&lt;/h2&gt;

&lt;p&gt;Throughput measures the ability of a storage device to handle read and write operations on large sequential data files, and it is measured in Megabytes per Second (MB/s).&lt;/p&gt;

&lt;p&gt;If your application, for example, is serving video files online, you will want HDD-based EBS volumes since they are optimized to work very well with transactions where the dominant performance attribute is throughput.&lt;/p&gt;
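&lt;p&gt;A handy back-of-envelope relationship ties these terms together: throughput is roughly IOPS multiplied by the I/O (block) size. The figures below are made up purely for illustration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Throughput (MiB/s) = IOPS x I/O size (KiB) / 1024
# e.g. a volume driving 4,000 IOPS at 256 KiB per operation:
echo $(( 4000 * 256 / 1024 ))   # prints 1000, i.e. ~1000 MiB/s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;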

</description>
      <category>aws</category>
      <category>storage</category>
      <category>cloudskills</category>
      <category>ebs</category>
    </item>
    <item>
      <title>Amazon S3. 6 Essential CLI Commands to better manage S3 Buckets.</title>
      <dc:creator>Souleymane. Tiendrebeogo</dc:creator>
      <pubDate>Thu, 08 Sep 2022 06:47:27 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-s3-6-essential-cli-commanda-to-better-manage-s3-buckets-4c6k</link>
      <guid>https://dev.to/aws-builders/amazon-s3-6-essential-cli-commanda-to-better-manage-s3-buckets-4c6k</guid>
<description>&lt;p&gt;Amazon S3, or Simple Storage Service, is an object storage service offered by Amazon Web Services. With S3, users can store various types of data (images, text, and videos) as objects in buckets.&lt;/p&gt;

&lt;p&gt;AWS has a great web interface for creating, deleting, and managing S3 buckets, but it is nothing compared to the flexibility and speed that the CLI offers.&lt;/p&gt;

&lt;p&gt;The list of available CLI commands is extensive. To learn more and go beyond what I will be covering in this blog post, click &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/index.html" rel="noopener noreferrer"&gt;here&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Below are 6 CLI commands that any cloud engineer must know when dealing with Amazon S3.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. The “mb” command.
&lt;/h2&gt;

&lt;p&gt;This command is used to create new S3 Buckets. Make sure that the bucket name is unique.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a new bucket&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mb myBucketName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. The “ls” command.
&lt;/h2&gt;

&lt;p&gt;Used to inspect buckets by listing buckets or the contents of buckets.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;aws s3 ls&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 myUnikMajicBucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. The “mv” command.
&lt;/h2&gt;

&lt;p&gt;This is used to move things around: from local to S3, from S3 to local, as well as between S3 buckets.&lt;br&gt;
People tend to get confused between the “mv” and the “cp” (copy) commands. “mv” moves a file from location A to location B (the file is no longer in location A). When “cp” is used, the file ends up in both location A and location B.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Move files from local to S3&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mv file1_name.txt s3://bucket-name/file2_name.txt_
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Move from S3 to local&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mv s3://bucket_name/file1_name.txt file2_name.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Move files between S3 Buckets&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mv s3://bucket_name/file1_name.txt s3://bucket2_name /file2_name_2.txt  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. The “rb” command
&lt;/h2&gt;

&lt;p&gt;This is used to delete or remove S3 buckets. This only works if the bucket is empty.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3  rb myBucket_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Used to force-delete a bucket along with its contents&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 rb Bucket_name --force  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. The “rm” command.
&lt;/h2&gt;

&lt;p&gt;This is used to delete the contents of an S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 rm &amp;lt;s3url_to_the_file&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. The “sync” command.
&lt;/h2&gt;

&lt;p&gt;This is used to sync or update files from local to S3, from S3 to local, or between S3 buckets&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 sync ./local_folder s3"//mybucket_name

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
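&lt;p&gt;A handy safety net: adding the --dryrun flag shows what sync would transfer without actually doing it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 sync ./local_folder s3://mybucket_name --dryrun
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;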



</description>
      <category>aws</category>
      <category>s3</category>
      <category>storage</category>
      <category>cloudskills</category>
    </item>
  </channel>
</rss>
