<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yasitha Bogamuwa</title>
    <description>The latest articles on DEV Community by Yasitha Bogamuwa (@yasithab).</description>
    <link>https://dev.to/yasithab</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F862727%2F78e3e835-efee-4099-9c13-10d4fffb02f6.jpeg</url>
      <title>DEV Community: Yasitha Bogamuwa</title>
      <link>https://dev.to/yasithab</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yasithab"/>
    <language>en</language>
    <item>
      <title>Analyze AWS Application Load Balancer logs using Amazon Athena</title>
      <dc:creator>Yasitha Bogamuwa</dc:creator>
      <pubDate>Sat, 09 Mar 2024 21:31:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/analyze-aws-application-load-balancer-logs-using-amazon-athena-nhp</link>
      <guid>https://dev.to/aws-builders/analyze-aws-application-load-balancer-logs-using-amazon-athena-nhp</guid>
      <description>&lt;h2&gt;1. Introduction&lt;/h2&gt;

&lt;p&gt;AWS Application Load Balancer (ALB) access logs record details about each request sent to your load balancer, such as where it came from, what was requested, and whether it succeeded. These logs help with troubleshooting problems and understanding how well your system is performing.&lt;/p&gt;

&lt;p&gt;I recently wanted to analyze ALB access logs to identify suspicious activity. However, the logs are stored in gzip format in an S3 bucket with hundreds of thousands of files, making manual analysis impractical. To analyze these logs, you can use Amazon Athena, a tool that lets you query data stored in Amazon S3 using regular SQL commands.&lt;/p&gt;

&lt;h2&gt;2. What is Amazon Athena?&lt;/h2&gt;

&lt;p&gt;Amazon Athena is an interactive query service provided by AWS that allows you to analyze and query data stored in Amazon S3 using standard SQL. It enables you to run ad-hoc queries on data in S3 without the need for complex ETL processes or data movement.&lt;/p&gt;

&lt;p&gt;Since Athena is serverless, you don't need to handle any infrastructure, and you are charged solely based on the queries you execute.&lt;/p&gt;

&lt;h2&gt;3. Analyzing ALB Access Logs with Athena&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy20m086xgr0h1xnp9bgd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy20m086xgr0h1xnp9bgd.jpg" alt="Analyzing ALB Access Logs with Athena" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.1. Make sure to enable Application Load Balancer access logs as described &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/enable-access-logging.html"&gt;here&lt;/a&gt; so that the access logs can be saved to your Amazon S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fynvmqxltmd5sudeyzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fynvmqxltmd5sudeyzq.png" alt="Enable Application Load Balancer access logs" width="800" height="1276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.2. Open the Athena console and click &lt;strong&gt;Launch query editor&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb45uj84gcqp8mbra9a51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb45uj84gcqp8mbra9a51.png" alt="Athena console" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.3. Create an Athena database and table for the Application Load Balancer logs. To create the database, run the following command in the Query Editor. It's recommended to create the database in the same AWS Region as the Amazon S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;DATABASE_NAME&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneaqghgci7b3yhgcc09j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneaqghgci7b3yhgcc09j.png" alt="Create an Athena database" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.4. Then, select the database from the dropdown and create an &lt;strong&gt;alb_logs&lt;/strong&gt; table for the ALB logs. Make sure to replace the &lt;em&gt;&amp;lt;YOUR-ALB-LOGS-DIRECTORY&amp;gt;&lt;/em&gt;, &lt;em&gt;&amp;lt;ACCOUNT-ID&amp;gt;&lt;/em&gt;, and &lt;em&gt;&amp;lt;REGION&amp;gt;&lt;/em&gt; with the correct values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;EXTERNAL&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;alb_logs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nb"&gt;time&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;elb&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;client_ip&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;client_port&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;target_ip&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;target_port&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;request_processing_time&lt;/span&gt; &lt;span class="nb"&gt;double&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;target_processing_time&lt;/span&gt; &lt;span class="nb"&gt;double&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;response_processing_time&lt;/span&gt; &lt;span class="nb"&gt;double&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;elb_status_code&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;target_status_code&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;received_bytes&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;sent_bytes&lt;/span&gt; &lt;span class="nb"&gt;bigint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;request_verb&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;request_url&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;request_proto&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;user_agent&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;ssl_cipher&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;ssl_protocol&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;target_group_arn&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;trace_id&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;domain_name&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;chosen_cert_arn&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;matched_rule_priority&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;request_creation_time&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;actions_executed&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;redirect_url&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;lambda_error_reason&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;target_port_list&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;target_status_code_list&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;classification&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;classification_reason&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;ROW&lt;/span&gt; &lt;span class="n"&gt;FORMAT&lt;/span&gt; &lt;span class="n"&gt;SERDE&lt;/span&gt; &lt;span class="s1"&gt;'org.apache.hadoop.hive.serde2.RegexSerDe'&lt;/span&gt;
            &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="n"&gt;SERDEPROPERTIES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="s1"&gt;'serialization.format'&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'1'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s1"&gt;'input.regex'&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; 
        &lt;span class="s1"&gt;'([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*)[:-]([0-9]*) ([-.0-9]*) ([-.0-9]*) ([-.0-9]*) (|[-0-9]*) (-|[-0-9]*) ([-0-9]*) ([-0-9]*) &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^ ]*) (.*) (- |[^ ]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; ([A-Z0-9-_]+) ([A-Za-z0-9.-]*) ([^ ]*) &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; ([-.0-9]*) ([^ ]*) &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^ ]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span 
class="s1"&gt;([^&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="s1"&gt;]+?)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="s1"&gt;]+)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^ ]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;([^ ]*)&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;LOCATION&lt;/span&gt; &lt;span class="s1"&gt;'s3://&amp;lt;YOUR-ALB-LOGS-DIRECTORY&amp;gt;/AWSLogs/&amp;lt;ACCOUNT-ID&amp;gt;/elasticloadbalancing/&amp;lt;REGION&amp;gt;/'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihbw628uwvtqm1eu7302.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihbw628uwvtqm1eu7302.png" alt="Create Table" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.5. In the Query Editor &lt;strong&gt;settings&lt;/strong&gt;, choose an S3 bucket to store the results of your Athena queries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4uc57xop929804f1slh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4uc57xop929804f1slh6.png" alt="Athena Settings" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.6. Now you can use SQL syntax to query the access logs.&lt;/p&gt;

&lt;p&gt;The following SQL query counts the occurrences of different request verbs for requests containing '&lt;strong&gt;hs&lt;/strong&gt;' in the URL, drawn from the '&lt;strong&gt;alb_logs&lt;/strong&gt;' table. It groups the results by request verb, client IP, and request URL, and limits the output to the first 100 rows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request_verb&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt;
 &lt;span class="k"&gt;count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;request_verb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;client_ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;request_url&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;alb_logs&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;request_url&lt;/span&gt; &lt;span class="k"&gt;LIKE&lt;/span&gt; &lt;span class="s1"&gt;'%hs%'&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;request_verb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;client_ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request_url&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ixu47xk9hei8cczs1de.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ixu47xk9hei8cczs1de.png" alt="Athena query" width="800" height="754"&gt;&lt;/a&gt;&lt;/p&gt;
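&lt;p&gt;Beyond counting request verbs, the same table supports other ad-hoc hunting queries. As a sketch (the column names come from the &lt;strong&gt;alb_logs&lt;/strong&gt; table created earlier; the threshold of 100 is an arbitrary example), the following query lists the client IPs that generated the most 4xx responses, which is often a sign of scanning or brute-force activity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Client IPs with an unusually high number of 4xx responses
SELECT client_ip,
       COUNT(*) AS error_count
FROM alb_logs
WHERE elb_status_code BETWEEN 400 AND 499
GROUP BY client_ip
HAVING COUNT(*) &amp;gt; 100
ORDER BY error_count DESC
LIMIT 25;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;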

&lt;h2&gt;4. References&lt;/h2&gt;

&lt;p&gt;4.1. &lt;a href="https://docs.aws.amazon.com/athena/latest/ug/application-load-balancer-logs.html"&gt;Querying Application Load Balancer logs&lt;/a&gt;&lt;br&gt;
4.2. &lt;a href="https://github.com/aws/elastic-load-balancing-tools/blob/master/amazon-athena-for-elb/Analyzing_ALB_Access_Logs_with_Amazon_Athena.md"&gt;Analyzing ALB Access Logs with Amazon Athena&lt;/a&gt;&lt;br&gt;
4.3. &lt;a href="https://repost.aws/knowledge-center/athena-analyze-access-logs"&gt;How do I use Amazon Athena to analyze ALB access logs?&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Unveiling the Speed Mystery: Investigating Slow S3 Uploads from AWS EKS Pods</title>
      <dc:creator>Yasitha Bogamuwa</dc:creator>
      <pubDate>Sun, 30 Jul 2023 13:06:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/unveiling-the-speed-mystery-investigating-slow-s3-uploads-from-aws-eks-pods-4k</link>
      <guid>https://dev.to/aws-builders/unveiling-the-speed-mystery-investigating-slow-s3-uploads-from-aws-eks-pods-4k</guid>
      <description>&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;At my current workplace, we run AWS EKS clusters with self-managed (unmanaged) node groups. Recently, our engineering teams noticed an alarming issue: unreasonably slow S3 file uploads from their applications.&lt;/p&gt;

&lt;p&gt;To investigate further, they ran the same application on their local computers for comparison. Surprisingly, uploading a 10KB file to S3 took approximately 5 seconds from our EKS cluster, while the identical upload took approximately 1 second from a local machine. Repeated testing confirmed that cluster uploads were, on average, about five times slower than local ones.&lt;/p&gt;

&lt;p&gt;This noticeable increase in latency raised significant concerns, urging us to uncover the underlying root cause to identify possible solutions.&lt;/p&gt;

&lt;h2&gt;Investigation&lt;/h2&gt;

&lt;p&gt;To identify the issue and locate the bottleneck, we came up with the following plan:&lt;/p&gt;

&lt;p&gt;1. Create an S3 bucket in the region where our EKS cluster is running.&lt;/p&gt;

&lt;p&gt;2. Deploy a debug pod in the EKS cluster using the latest AWS CLI Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Debug pod configurations&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amazon/aws-cli&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sleep"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;infinity"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Locate the EKS node where the debug pod is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;debug &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
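&lt;p&gt;To run commands from inside the debug pod, open a shell in it first. A minimal sketch (&lt;code&gt;deployment/debug&lt;/code&gt; refers to the Deployment created above; the &lt;code&gt;amazon/aws-cli&lt;/code&gt; image is assumed to ship bash):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Open an interactive shell inside the debug pod
kubectl exec -it deployment/debug -- /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;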



&lt;p&gt;4. Create a 10KB file and upload it to the S3 bucket using AWS CLI from both the debug pod and EKS node where the debug pod is deployed. Measure the time taken for each upload.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# AWS CLI upload command&lt;/span&gt;
&lt;span class="nb"&gt;time &lt;/span&gt;aws s3 &lt;span class="nt"&gt;--debug&lt;/span&gt; &lt;span class="nb"&gt;cp &lt;/span&gt;10kb.png s3://yasithab-debug-bucket/10kb.png
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
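&lt;p&gt;The 10KB test file itself can be generated with &lt;code&gt;dd&lt;/code&gt;; a file of zeros is sufficient for timing purposes, and the name &lt;strong&gt;10kb.png&lt;/strong&gt; simply matches the upload command above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create a 10KB (10 x 1024 bytes) test file
dd if=/dev/zero of=10kb.png bs=1024 count=10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;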



&lt;h2&gt;Identifying the Issue&lt;/h2&gt;

&lt;p&gt;Based on the test conducted, it was discovered that uploading the 10KB file from the EKS node takes around 1 second, which is similar to uploads from a local computer. However, uploading the same file from the pod running on the same EKS node takes approximately 5 seconds. This indicates that there is something in between causing this issue, but the exact cause is still unknown.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# AWS CLI upload results from the EKS node where the debug pod is running&lt;/span&gt;
real    0m1.460s
user    0m0.908s
sys     0m0.085s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# AWS CLI upload results from the debug pod&lt;/span&gt;
real    0m5.612s
user    0m0.956s
sys     0m0.073s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we have AWS Enterprise Support, we reached out to them with the test results and debug logs. They recommended provisioning a new EKS cluster and running the same test to isolate any other dependencies. We decided against this approach, since setting up a new cluster solely to debug this issue seemed unnecessary, which unfortunately left us to resolve the matter on our own.&lt;/p&gt;

&lt;p&gt;With no clear direction at this stage, a possible course of action is to compare the AWS CLI debug logs for both node and pod uploads. This can be achieved by utilizing a file comparison tool. Initially, the focus was on investigating potential network delays by comparing the duration of the upload process.&lt;/p&gt;

&lt;p&gt;The upload process began when the CLI displayed the following lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;MainThread - botocore.endpoint - DEBUG - Setting s3 &lt;span class="nb"&gt;timeout &lt;/span&gt;as &lt;span class="o"&gt;(&lt;/span&gt;60, 60&lt;span class="o"&gt;)&lt;/span&gt;
MainThread - botocore.utils - DEBUG - Registering S3 region redirector handler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The upload process ended when the CLI displayed the following lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;upload: ./10kb.png to s3://yasithab-debug-bucket/10kb.png
Thread-1 - awscli.customizations.s3.results - DEBUG - Shutdown request received &lt;span class="k"&gt;in &lt;/span&gt;result processing thread, shutting down result thread.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, the duration from the start of the upload to the end of the upload was calculated.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the node upload process, the start time was recorded as 2023-07-13 11:03:14,047, and the end time was recorded as 2023-07-13 11:03:14,161. Hence, the upload process took 114 milliseconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the pod upload process, the start time was recorded as 2023-07-13 10:57:54,740, and the end time was recorded as 2023-07-13 10:57:54,828. Hence, the upload process took 88 milliseconds.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The almost identical upload times indicate that this issue is not caused by network latency.&lt;/p&gt;

&lt;p&gt;Upon reviewing the logs further, it was observed that in the pod upload debug log, most of the time was spent on establishing the connection to the Instance Metadata Service (IMDS) endpoint. This process took around 1 second and was attempted three times, totaling approximately 4 seconds. On the other hand, the initialization of the connection to the IMDS endpoint from the EKS node where the debug pod is deployed, only took 190 milliseconds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# From the EKS node where the debug pod is deployed - 190 milliseconds&lt;/span&gt;
2023-07-13 11:03:13,857 - MainThread - botocore.utils - DEBUG - IMDS ENDPOINT: http://169.254.169.254/
2023-07-13 11:03:14,047 - MainThread - botocore.endpoint - DEBUG - Setting s3 &lt;span class="nb"&gt;timeout &lt;/span&gt;as &lt;span class="o"&gt;(&lt;/span&gt;60, 60&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# From the debug pod - 4 seconds and 259 milliseconds&lt;/span&gt;
2023-07-13 10:57:50,481 - MainThread - botocore.utils - DEBUG - IMDS ENDPOINT: http://169.254.169.254/
2023-07-13 10:57:54,740 - MainThread - botocore.endpoint - DEBUG - Setting s3 &lt;span class="nb"&gt;timeout &lt;/span&gt;as &lt;span class="o"&gt;(&lt;/span&gt;60, 60&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When comparing the total time taken from the debug pod (5 seconds and 612 milliseconds) with the time taken for establishing the connection to the Instance Metadata Service (IMDS) endpoint from the debug pod (4 seconds and 259 milliseconds), a difference of 1 second and 353 milliseconds is observed. Remarkably, this difference closely matches the time taken from the EKS node upload results.&lt;/p&gt;
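&lt;p&gt;These numbers can be cross-checked directly from the log timestamps. A quick sketch using GNU &lt;code&gt;date&lt;/code&gt;, with the pod-upload timestamps taken from the logs above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Milliseconds between the IMDS endpoint lookup and the S3 timeout line
start=$(date -d "2023-07-13 10:57:50.481" +%s%3N)
end=$(date -d "2023-07-13 10:57:54.740" +%s%3N)
echo "$((end - start)) ms"    # prints: 4259 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;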

&lt;p&gt;Hooray! The Instance Metadata Service (IMDS) has been identified as the bottleneck. However, the next step is to understand why this is happening and determine how to resolve it.&lt;/p&gt;

&lt;h2&gt;What Is IMDS and What Does It Do Here?&lt;/h2&gt;

&lt;p&gt;The AWS Instance Metadata Service (IMDS) is a service that allows EC2 instances to access information about themselves. This information includes details like the instance ID, instance type, network configurations, identity credentials, security groups, and more. Applications running on EC2 instances can use IMDS to retrieve dynamic information about the instance. IMDS has two versions available.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instance Metadata Service Version 1 (IMDSv1) – a request/response method.&lt;/li&gt;
&lt;li&gt;Instance Metadata Service Version 2 (IMDSv2) – a session-oriented method.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, you have the option to use IMDSv1, IMDSv2, or both. The instance metadata service differentiates between IMDSv1 and IMDSv2 requests based on the presence of specific &lt;code&gt;PUT&lt;/code&gt; or &lt;code&gt;GET&lt;/code&gt; headers, which are exclusive to IMDSv2. If you configure the &lt;strong&gt;MetadataOptions&lt;/strong&gt; to require IMDSv2, IMDSv1 will no longer function.&lt;/p&gt;

&lt;p&gt;To retrieve the &lt;strong&gt;MetadataOptions&lt;/strong&gt; for an EKS node, use the following command. Replace &lt;code&gt;&amp;lt;EKS_NODE_INSTANCE_ID&amp;gt;&lt;/code&gt; with the specific instance ID of the EKS node you wish to retrieve the &lt;strong&gt;MetadataOptions&lt;/strong&gt; for.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 describe-instances &lt;span class="nt"&gt;--instance-ids&lt;/span&gt; &amp;lt;EKS_NODE_INSTANCE_ID&amp;gt; &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"Reservations[].Instances[].MetadataOptions"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will give an output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"State"&lt;/span&gt;: &lt;span class="s2"&gt;"applied"&lt;/span&gt;,
        &lt;span class="s2"&gt;"HttpTokens"&lt;/span&gt;: &lt;span class="s2"&gt;"optional"&lt;/span&gt;,
        &lt;span class="s2"&gt;"HttpPutResponseHopLimit"&lt;/span&gt;: 1,
        &lt;span class="s2"&gt;"HttpEndpoint"&lt;/span&gt;: &lt;span class="s2"&gt;"enabled"&lt;/span&gt;,
        &lt;span class="s2"&gt;"HttpProtocolIpv6"&lt;/span&gt;: &lt;span class="s2"&gt;"disabled"&lt;/span&gt;,
        &lt;span class="s2"&gt;"InstanceMetadataTags"&lt;/span&gt;: &lt;span class="s2"&gt;"disabled"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the output, it shows that the &lt;strong&gt;HttpTokens&lt;/strong&gt; attribute is set to &lt;code&gt;optional&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When the &lt;strong&gt;HttpTokens&lt;/strong&gt; is set to &lt;code&gt;optional&lt;/code&gt;, it sets the use of IMDSv2 to optional. You can retrieve instance metadata with or without a session token. Without a token, IMDSv1 role credentials are returned, and with a token, IMDSv2 role credentials are returned.&lt;/p&gt;

&lt;p&gt;On the other hand, when &lt;strong&gt;HttpTokens&lt;/strong&gt; is set to &lt;code&gt;required&lt;/code&gt;, IMDSv2 becomes mandatory. You must include a session token in your instance metadata retrieval requests. In this case, only IMDSv2 credentials are available; IMDSv1 credentials cannot be accessed.&lt;/p&gt;
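&lt;p&gt;The difference between the two paths is easy to see from a shell. The following is a minimal sketch, assuming you run it directly on an EC2 instance (the &lt;code&gt;169.254.169.254&lt;/code&gt; endpoint is only reachable from the instance itself):&lt;/p&gt;

```shell
# IMDSv1: a plain GET, served only while HttpTokens is "optional"
curl -s http://169.254.169.254/latest/meta-data/instance-id

# IMDSv2: first obtain a session token with a PUT request...
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# ...then present the token on every subsequent GET
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
```

&lt;p&gt;When &lt;strong&gt;HttpTokens&lt;/strong&gt; is &lt;code&gt;required&lt;/code&gt;, only the token-based form returns data; the first &lt;code&gt;curl&lt;/code&gt; receives a 401 response.&lt;/p&gt;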

&lt;p&gt;IMDSv2 is an improved version of instance metadata access that adds an extra layer of security against unauthorized access. To use IMDSv2, a PUT request is required to establish a session with the instance metadata service and obtain a token. By &lt;strong&gt;default&lt;/strong&gt;, the response to PUT requests has a response &lt;strong&gt;hop limit&lt;/strong&gt; of &lt;strong&gt;1&lt;/strong&gt; at the IP protocol level. &lt;/p&gt;

&lt;p&gt;However, &lt;strong&gt;this limit is not suitable for containerized applications&lt;/strong&gt; on Kubernetes that run in a separate network namespace from the instance, because crossing the namespace counts as an additional network hop. EKS managed node groups that are newly launched or updated get a metadata token response hop limit of 2, but for self-managed nodes like ours the default remains 1. Therefore, setting &lt;strong&gt;HttpPutResponseHopLimit&lt;/strong&gt; to &lt;strong&gt;2&lt;/strong&gt; is mandatory for our pods to retrieve instance metadata over IMDSv2.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do uploads still succeed, just slowly, with HttpPutResponseHopLimit set to 1?
&lt;/h2&gt;

&lt;p&gt;When the &lt;strong&gt;HttpTokens&lt;/strong&gt; attribute is set to &lt;code&gt;optional&lt;/code&gt;, the AWS CLI/SDK still attempts IMDSv2 calls first by default. If the initial IMDSv2 call receives no response, the CLI/SDK retries it and, if still unsuccessful, falls back to IMDSv1. This retry-then-fallback process introduces delays, especially in a container environment.&lt;/p&gt;

&lt;p&gt;In a container environment, if the hop limit is set to 1, the IMDSv2 response does not return because accessing the container is considered an additional network hop. To avoid the fallback to IMDSv1 and the resulting delay, it is recommended to set the hop limit to 2 in a container environment.&lt;/p&gt;
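&lt;p&gt;You can observe this behavior directly. The sketch below assumes it is run inside a pod on a node whose hop limit is still 1; the token request simply times out instead of returning, which is exactly what forces the SDK to retry and then fall back to IMDSv1:&lt;/p&gt;

```shell
# From inside the pod: the PUT response cannot cross the extra
# network hop, so this hangs until the 3-second timeout fires.
time curl -s --max-time 3 -X PUT \
  "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"

# Running the same command on the node itself returns a token
# immediately, confirming the hop limit is the culprit.
```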

&lt;h2&gt;
  
  
  Resolve the issue
&lt;/h2&gt;

&lt;p&gt;According to AWS documentation, containerized applications running on EKS require the EKS node &lt;strong&gt;HttpPutResponseHopLimit&lt;/strong&gt; in MetadataOptions to be set to &lt;strong&gt;2&lt;/strong&gt;. To configure this, use the following command, replacing &lt;code&gt;&amp;lt;EKS_NODE_INSTANCE_ID&amp;gt;&lt;/code&gt; with the instance ID of the EKS node you want to modify.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 modify-instance-metadata-options &lt;span class="nt"&gt;--instance-id&lt;/span&gt; &amp;lt;EKS_NODE_INSTANCE_ID&amp;gt; &lt;span class="nt"&gt;--http-endpoint&lt;/span&gt; enabled &lt;span class="nt"&gt;--http-put-response-hop-limit&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In many cases, EKS nodes are part of AWS Auto Scaling groups, so modifying instance metadata options by hand is impractical: any replacement instance the group launches will revert to the template defaults. To configure the Instance Metadata Service (IMDS) for an Auto Scaling group, you need to modify its associated launch template.&lt;/p&gt;
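&lt;p&gt;To find which launch template a node group's Auto Scaling group uses, you can query the group directly. This is a sketch; &lt;code&gt;MY_ASG_NAME&lt;/code&gt; is a placeholder for your node group's Auto Scaling group name:&lt;/p&gt;

```shell
# Show the launch template (id, name, version) behind the ASG
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "$MY_ASG_NAME" \
  --query "AutoScalingGroups[].LaunchTemplate"
```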

&lt;p&gt;To set the &lt;strong&gt;HttpPutResponseHopLimit&lt;/strong&gt; for all instances launched from the template, go to the &lt;strong&gt;Additional configuration&lt;/strong&gt; section under &lt;strong&gt;Advanced details&lt;/strong&gt; of the launch template. Set the &lt;strong&gt;Metadata response hop limit&lt;/strong&gt; to &lt;strong&gt;2&lt;/strong&gt;. If no value is specified, the default is 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60w4foj8il8lrpow1id5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60w4foj8il8lrpow1id5.png" alt="Launch Configuration Metadata"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When using Terraform to provision your infrastructure, you can use the &lt;code&gt;aws_launch_template&lt;/code&gt; resource to set the instance metadata options as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;resource &lt;span class="s2"&gt;"aws_launch_template"&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"foo"&lt;/span&gt;

  metadata_options &lt;span class="o"&gt;{&lt;/span&gt;
    http_endpoint               &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"enabled"&lt;/span&gt;
    http_tokens                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"optional"&lt;/span&gt;
    instance_metadata_tags      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"disabled"&lt;/span&gt;
    http_put_response_hop_limit &lt;span class="o"&gt;=&lt;/span&gt; 2
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
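&lt;p&gt;Note that a new launch template version only affects instances launched afterwards. To roll the change out to nodes that are already running, one option is to trigger an instance refresh on the Auto Scaling group. This is a sketch; &lt;code&gt;MY_ASG_NAME&lt;/code&gt; is a placeholder for your Auto Scaling group name:&lt;/p&gt;

```shell
# Gradually replace running instances so they pick up the new
# metadata options, keeping at least 90% of capacity in service.
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name "$MY_ASG_NAME" \
  --preferences '{"MinHealthyPercentage": 90}'
```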



&lt;p&gt;After implementing the &lt;strong&gt;HttpPutResponseHopLimit&lt;/strong&gt; modifications, we conducted the same tests as in the investigation phase. The results of both the EKS node and debug pod uploads were nearly identical.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# AWS CLI upload results from the EKS node where the debug pod is running&lt;/span&gt;
real    0m1.310s
user    0m0.912s
sys     0m0.078s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# AWS CLI upload results from the debug pod&lt;/span&gt;
real    0m1.298s
user    0m0.931s
sys     0m0.082s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, our investigation into the significantly slow S3 file uploads from applications running on AWS EKS clusters revealed that the Instance Metadata Service (IMDS) was the root cause of the issue. By comparing upload times from EKS nodes and pods, we identified a delay in establishing the connection to the IMDS endpoint from within the pods. This delay was not present when uploading directly from the EKS nodes.&lt;/p&gt;

&lt;p&gt;To resolve this issue, it is necessary to set the &lt;strong&gt;HttpPutResponseHopLimit&lt;/strong&gt; in the MetadataOptions to &lt;strong&gt;2&lt;/strong&gt; for EKS nodes. This can be achieved by modifying the associated launch configurations for auto scaling groups or using the &lt;code&gt;aws_launch_template&lt;/code&gt; resource in Terraform.&lt;/p&gt;

&lt;p&gt;By setting up the &lt;strong&gt;HttpPutResponseHopLimit&lt;/strong&gt; to &lt;strong&gt;2&lt;/strong&gt;, we can eliminate the latency and significantly improve the speed of file uploads to S3 from AWS EKS pods. This resolution will enhance the overall performance and efficiency of our applications on the EKS cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-eks-supports-ec2-instance-metadata-service-v2/" rel="noopener noreferrer"&gt;Amazon EKS now supports EC2 Instance Metadata Service v2&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html" rel="noopener noreferrer"&gt;Describe EC2 MetadataOptions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html#imds-considerations" rel="noopener noreferrer"&gt;Retrieve instance metadata&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-instance-metadata-options.html" rel="noopener noreferrer"&gt;Modify Instance Metadata&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/hashicorp/terraform-provider-aws/issues/23110" rel="noopener noreferrer"&gt;Issue with EC2 Instance Metadata running inside Container&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-config.html#launch-configurations-imds" rel="noopener noreferrer"&gt;Configure the instance metadata options for Auto Scaling Groups&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>s3</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Multi-Master Kubernetes Cluster Setup with CRI-O and vSphere Storage on Rocky Linux 8</title>
      <dc:creator>Yasitha Bogamuwa</dc:creator>
      <pubDate>Thu, 16 Jun 2022 16:15:58 +0000</pubDate>
      <link>https://dev.to/yasithab/multi-master-kubernetes-cluster-setup-with-cri-o-and-vsphere-storage-on-rocky-linux-8-130i</link>
      <guid>https://dev.to/yasithab/multi-master-kubernetes-cluster-setup-with-cri-o-and-vsphere-storage-on-rocky-linux-8-130i</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16aspsg9s9nwqvxemksf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16aspsg9s9nwqvxemksf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. System Requirements
&lt;/h2&gt;

&lt;p&gt;Before you start installing Kubernetes, you must understand your capacity requirements and provision resources accordingly. The following resource allocations are based entirely on my needs; you may need to scale resources up or down to a certain extent.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1. Master Nodes
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Number of VMs&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;2 Cores&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory&lt;/td&gt;
&lt;td&gt;8 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disk Size&lt;/td&gt;
&lt;td&gt;150 GB SSD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage Type&lt;/td&gt;
&lt;td&gt;Thin Provision&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operating System&lt;/td&gt;
&lt;td&gt;Rocky Linux 8 x64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;File System&lt;/td&gt;
&lt;td&gt;XFS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Privileges&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;ROOT&lt;/strong&gt; access preferred&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  2.2. Worker Nodes
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Number of VMs&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;4 Cores&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory&lt;/td&gt;
&lt;td&gt;16 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disk Size&lt;/td&gt;
&lt;td&gt;500 GB SSD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage Type&lt;/td&gt;
&lt;td&gt;Thin Provision&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operating System&lt;/td&gt;
&lt;td&gt;Rocky Linux 8 x64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;File System&lt;/td&gt;
&lt;td&gt;XFS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Privileges&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;ROOT&lt;/strong&gt; access preferred&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  2.3. Nginx Load Balancers
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Number of VMs&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;2 Cores&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory&lt;/td&gt;
&lt;td&gt;4 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disk Size&lt;/td&gt;
&lt;td&gt;20 GB SSD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage Type&lt;/td&gt;
&lt;td&gt;Thin Provision&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operating System&lt;/td&gt;
&lt;td&gt;Rocky Linux 8 x64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;File System&lt;/td&gt;
&lt;td&gt;XFS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Privileges&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;ROOT&lt;/strong&gt; access preferred&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  2.4. IP Allocations
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Load Balancer Virtual IP&lt;/td&gt;
&lt;td&gt;192.168.16.80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VM IPs&lt;/td&gt;
&lt;td&gt;192.168.16.100 - 192.168.16.108&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MetalLB IP Pool&lt;/td&gt;
&lt;td&gt;192.168.16.200 - 192.168.16.250&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  2.5. DNS Entries
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;IP&lt;/th&gt;
&lt;th&gt;Hostname&lt;/th&gt;
&lt;th&gt;FQDN&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.80&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;kube-api.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.100&lt;/td&gt;
&lt;td&gt;kubelb01&lt;/td&gt;
&lt;td&gt;kubelb01.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.101&lt;/td&gt;
&lt;td&gt;kubelb02&lt;/td&gt;
&lt;td&gt;kubelb02.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.102&lt;/td&gt;
&lt;td&gt;kubemaster01&lt;/td&gt;
&lt;td&gt;kubemaster01.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.103&lt;/td&gt;
&lt;td&gt;kubemaster02&lt;/td&gt;
&lt;td&gt;kubemaster02.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.104&lt;/td&gt;
&lt;td&gt;kubemaster03&lt;/td&gt;
&lt;td&gt;kubemaster03.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.105&lt;/td&gt;
&lt;td&gt;kubeworker01&lt;/td&gt;
&lt;td&gt;kubeworker01.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.106&lt;/td&gt;
&lt;td&gt;kubeworker02&lt;/td&gt;
&lt;td&gt;kubeworker02.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;192.168.16.107&lt;/td&gt;
&lt;td&gt;kubeworker03&lt;/td&gt;
&lt;td&gt;kubeworker03.example.local&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  2.6. VMware Roles and Service Accounts
&lt;/h3&gt;

&lt;p&gt;2.6.1. In order to create a VMware role, navigate to &lt;strong&gt;Menu -&amp;gt; Administration -&amp;gt; Roles&lt;/strong&gt; in the vSphere Client. The roles and permissions required for dynamic provisioning can be found &lt;a href="https://github.com/vmware-archive/vsphere-storage-for-kubernetes/blob/master/documentation/vcp-roles.md#dynamic-provisioning" rel="noopener noreferrer"&gt;here&lt;/a&gt;. For this example I will create a role called &lt;strong&gt;manage-kubernetes-node-vms-and-volumes&lt;/strong&gt; with the following permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foeate9u2ksuim7gboz0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foeate9u2ksuim7gboz0f.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.6.2. Navigate to &lt;strong&gt;Menu -&amp;gt; Administration -&amp;gt; Single Sign On -&amp;gt; Users and Groups&lt;/strong&gt; in the vSphere Client and create a new user. For this example I will create a user called &lt;code&gt;"kubernetes@vsphere.local"&lt;/code&gt; with the password &lt;code&gt;"KuB3VMwar3"&lt;/code&gt;. For now, please make sure &lt;strong&gt;NOT&lt;/strong&gt; to use special characters in the password.&lt;/p&gt;

&lt;p&gt;2.6.3. Navigate to &lt;strong&gt;Menu -&amp;gt; Administration -&amp;gt; Global Permissions&lt;/strong&gt; in the vSphere Client and click the add permission (+) icon. Then select the newly created role and map it to the kubernetes user. Make sure to select the &lt;strong&gt;Propagate to children&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4jwwq0bj6a943pqpddi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4jwwq0bj6a943pqpddi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Configure Nginx Load Balancers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Verify the &lt;strong&gt;MAC address&lt;/strong&gt; and &lt;strong&gt;product_uuid&lt;/strong&gt; are unique for every node. You can get the MAC address of the network interfaces using the below command.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

ip &lt;span class="nb"&gt;link&lt;/span&gt; | &lt;span class="nb"&gt;grep link&lt;/span&gt;/ether


&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;product_uuid&lt;/strong&gt; can be checked by using the following command.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; /sys/class/dmi/id/product_uuid


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;3.1. Set server hostname.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Example:&lt;/span&gt;
&lt;span class="c"&gt;# hostnamectl set-hostname kubelb01&lt;/span&gt;

hostnamectl set-hostname &amp;lt;&lt;span class="nb"&gt;hostname&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.2. Install prerequisites.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Clean YUM repository cache&lt;/span&gt;
dnf clean all

&lt;span class="c"&gt;# Update packages&lt;/span&gt;
dnf update &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# Install prerequisites&lt;/span&gt;
dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; vim net-tools chrony ntpstat keepalived nginx policycoreutils-python-utils


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.3. Synchronize server time with &lt;strong&gt;Google NTP&lt;/strong&gt; server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Add Google NTP Server&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/^pool/c\pool time.google.com iburst'&lt;/span&gt; /etc/chrony.conf

&lt;span class="c"&gt;# Set timezone to Asia/Colombo&lt;/span&gt;
timedatectl set-timezone Asia/Colombo

&lt;span class="c"&gt;# Enable NTP time synchronization&lt;/span&gt;
timedatectl set-ntp &lt;span class="nb"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.4. Start and enable &lt;em&gt;chronyd&lt;/em&gt; service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Start and enable chronyd service&lt;/span&gt;
systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; chronyd

&lt;span class="c"&gt;# Check if the chronyd service is running&lt;/span&gt;
systemctl status chronyd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.5. Display time synchronization status.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Verify synchronisation state&lt;/span&gt;
ntpstat

&lt;span class="c"&gt;# Check Chrony Source Statistics&lt;/span&gt;
chronyc sourcestats &lt;span class="nt"&gt;-v&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.6. Permanently disable SELinux.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Permanently disable SELinux&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/^SELINUX=enforcing$/SELINUX=disabled/'&lt;/span&gt; /etc/selinux/config


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.7. Disable IPv6 on network interface.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Disable IPv6 on ens192 interface&lt;/span&gt;
nmcli connection modify ens192 ipv6.method ignore


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.8. Execute the following commands to turn off all swap devices and files.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Permanently disable swapping&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'/swap/ s/^#*/#/g'&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/fstab

&lt;span class="c"&gt;# Disable all existing swaps from /proc/swaps&lt;/span&gt;
swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.9. Disable &lt;strong&gt;File Access Time Logging&lt;/strong&gt; and enable &lt;strong&gt;Combat Fragmentation&lt;/strong&gt; to enhance XFS file system performance. Add &lt;code&gt;noatime,nodiratime,allocsize=64m&lt;/code&gt; to all XFS volumes under &lt;code&gt;/etc/fstab&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Edit /etc/fstab&lt;/span&gt;
vim /etc/fstab

&lt;span class="c"&gt;# Modify XFS volume entries as follows&lt;/span&gt;
&lt;span class="c"&gt;# Example:&lt;/span&gt;
&lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"03c97344-9b3d-45e2-9140-cbbd57b6f085"&lt;/span&gt;  /  xfs  defaults,noatime,nodiratime,allocsize&lt;span class="o"&gt;=&lt;/span&gt;64m  0 0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.10. Tweak the system for high concurrency and security.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/sysctl.d/00-sysctl.conf &amp;gt; /dev/null
################################################################################################
# Tweak virtual memory
################################################################################################

# Default: 30
# 0 - Never swap under any circumstances.
# 1 - Do not swap unless there is an out-of-memory (OOM) condition.
vm.swappiness = 1

# vm.dirty_background_ratio is used to adjust how the kernel handles dirty pages that must be flushed to disk.
# Default value is 10.
# The value is a percentage of the total amount of system memory, and setting this value to 5 is appropriate in many situations.
# This setting should not be set to zero.
vm.dirty_background_ratio = 5

# The total number of dirty pages that are allowed before the kernel forces synchronous operations to flush them to disk
# can also be increased by changing the value of vm.dirty_ratio, increasing it to above the default of 30 (also a percentage of total system memory)
# vm.dirty_ratio value in-between 60 and 80 is a reasonable number.
vm.dirty_ratio = 60

# vm.max_map_count will calculate the current number of memory mapped files.
# The minimum value for mmap limit (vm.max_map_count) is the number of open files ulimit (cat /proc/sys/fs/file-max).
# map_count should be around 1 per 128 KB of system memory. Therefore, max_map_count will be 262144 on a 32 GB system.
# Default: 65530
vm.max_map_count = 2097152

################################################################################################
# Tweak file handles
################################################################################################

# Increases the size of file handles and inode cache and restricts core dumps.
fs.file-max = 2097152
fs.suid_dumpable = 0

################################################################################################
# Tweak network settings
################################################################################################

# Default amount of memory allocated for the send and receive buffers for each socket.
# This will significantly increase performance for large transfers.
net.core.wmem_default = 25165824
net.core.rmem_default = 25165824

# Maximum amount of memory allocated for the send and receive buffers for each socket.
# This will significantly increase performance for large transfers.
net.core.wmem_max = 25165824
net.core.rmem_max = 25165824

# In addition to the socket settings, the send and receive buffer sizes for
# TCP sockets must be set separately using the net.ipv4.tcp_wmem and net.ipv4.tcp_rmem parameters.
# These are set using three space-separated integers that specify the minimum, default, and maximum sizes, respectively.
# The maximum size cannot be larger than the values specified for all sockets using net.core.wmem_max and net.core.rmem_max.
# A reasonable setting is a 4 KiB minimum, 64 KiB default, and 2 MiB maximum buffer.
net.ipv4.tcp_wmem = 20480 12582912 25165824
net.ipv4.tcp_rmem = 20480 12582912 25165824

# Increase the maximum total buffer-space allocatable.
# This is measured in units of pages (4096 bytes).
# The three values are min, pressure, and max and must be ascending.
net.ipv4.tcp_mem = 65536 262144 25165824
net.ipv4.udp_mem = 65536 262144 25165824

# Minimum amount of memory allocated for the send and receive buffers for each socket.
net.ipv4.udp_wmem_min = 16384
net.ipv4.udp_rmem_min = 16384

# Enabling TCP window scaling by setting net.ipv4.tcp_window_scaling to 1 will allow
# clients to transfer data more efficiently, and allow that data to be buffered on the broker side.
net.ipv4.tcp_window_scaling = 1

# Increasing the value of net.ipv4.tcp_max_syn_backlog above the default of 1024 will allow
# a greater number of simultaneous connections to be accepted.
net.ipv4.tcp_max_syn_backlog = 10240

# Increasing the value of net.core.netdev_max_backlog to greater than the default of 1000
# can assist with bursts of network traffic, specifically when using multigigabit network connection speeds,
# by allowing more packets to be queued for the kernel to process them.
net.core.netdev_max_backlog = 65536

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824

# Number of times SYNACKs for passive TCP connection.
net.ipv4.tcp_synack_retries = 2

# Allowed local port range.
net.ipv4.ip_local_port_range = 2048 65535

# Protect Against TCP Time-Wait
# Default: net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rfc1337 = 1

# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

# The maximum number of backlogged sockets.
# Default is 128.
net.core.somaxconn = 4096

# Turn on syncookies for SYN flood attack protection.
net.ipv4.tcp_syncookies = 1

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1

# Tells the kernel how many TCP sockets that are not attached to any
# user file handle to maintain. In case this number is exceeded,
# orphaned connections are immediately reset and a warning is printed.
# Default: net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_max_orphans = 65536

# Do not cache metrics on closing connections
net.ipv4.tcp_no_metrics_save = 1

# Enable timestamps as defined in RFC1323:
# Default: net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_timestamps = 1

# Enable select acknowledgments.
# Default: net.ipv4.tcp_sack = 1
net.ipv4.tcp_sack = 1

# Increase the tcp-time-wait buckets pool size to prevent simple DOS attacks.
# net.ipv4.tcp_tw_recycle has been removed from Linux 4.12. Use net.ipv4.tcp_tw_reuse instead.
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1

# The accept_source_route option causes network interfaces to accept packets with the Strict Source Route (SSR) or Loose Source Routing (LSR) option set. 
# The following setting will drop packets with the SSR or LSR option set.
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Disable ICMP redirect acceptance
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0

# Disables sending of all IPv4 ICMP redirected packets.
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Disable IP forwarding.
# IP forwarding is the ability of an operating system to accept an incoming network packet on one interface,
# recognize that it is not meant for the system itself, and pass it on to another network accordingly.
net.ipv4.ip_forward = 0

# Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

################################################################################################
# Tweak kernel parameters
################################################################################################

# Address Space Layout Randomization (ASLR) is a memory-protection process for operating systems that guards against buffer-overflow attacks.
# It helps to ensure that the memory addresses associated with running processes on systems are not predictable,
# thus flaws or vulnerabilities associated with these processes will be more difficult to exploit.
# Accepted values: 0 = Disabled, 1 = Conservative Randomization, 2 = Full Randomization
kernel.randomize_va_space = 2

# Allow for more PIDs (to reduce rollover problems)
kernel.pid_max = 65536
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.11. Reload all &lt;strong&gt;sysctl&lt;/strong&gt; variables without rebooting the server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

sysctl &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/sysctl.d/00-sysctl.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.12. Configure firewall for &lt;strong&gt;Nginx&lt;/strong&gt; and &lt;strong&gt;Keepalived&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Enable ans start firewalld.service&lt;/span&gt;
systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; firewalld

&lt;span class="c"&gt;# You must allow VRRP traffic to pass between the keepalived nodes&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-rich-rule&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'rule protocol value="vrrp" accept'&lt;/span&gt;

&lt;span class="c"&gt;# Enable Kubernetes API&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6443/tcp

&lt;span class="c"&gt;# Reload firewall rules&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.13. Create Local DNS records.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/hosts &amp;gt; /dev/null
# localhost
127.0.0.1     localhost        localhost.localdomain

# When DNS records are updated in the DNS server, remove these entries.
192.168.16.80  kube-api.example.local
192.168.16.102 kubemaster01  kubemaster01.example.local
192.168.16.103 kubemaster02  kubemaster02.example.local
192.168.16.104 kubemaster03  kubemaster03.example.local
192.168.16.105 kubeworker01  kubeworker01.example.local
192.168.16.106 kubeworker02  kubeworker02.example.local
192.168.16.107 kubeworker03  kubeworker03.example.local
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
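&lt;p&gt;To confirm the entries above are in place, a small lookup helper can be used. The &lt;code&gt;hosts_ip&lt;/code&gt; function below is our own illustration (not part of the original setup); it only parses a hosts-format file:&lt;/p&gt;

```shell
# hosts_ip FILE NAME: print the IP that FILE maps NAME to (first match).
# Helper name is ours; the expected entries come from the /etc/hosts content above.
hosts_ip() {
  awk -v h="$2" '$0 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$1"
}

# Usage on a node:
#   hosts_ip /etc/hosts kube-api.example.local    # expect 192.168.16.80
```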

&lt;p&gt;3.14. Configure &lt;em&gt;keepalived&lt;/em&gt; failover on &lt;strong&gt;kubelb01&lt;/strong&gt; and &lt;strong&gt;kubelb02&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Don't forget to change &lt;em&gt;&lt;strong&gt;auth_pass&lt;/strong&gt;&lt;/em&gt; to something more secure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change interface &lt;em&gt;&lt;strong&gt;ens192&lt;/strong&gt;&lt;/em&gt; to match your interface name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change &lt;em&gt;&lt;strong&gt;virtual_ipaddress&lt;/strong&gt;&lt;/em&gt; from 192.168.16.80 to a valid IP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;em&gt;&lt;strong&gt;priority&lt;/strong&gt;&lt;/em&gt; specifies the order in which the assigned interface takes over in a failover; the higher the number, the higher the priority.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;3.14.1. Please execute the following command on &lt;strong&gt;kubelb01&lt;/strong&gt; Server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/keepalived/keepalived.conf &amp;gt; /dev/null
# Global definitions configuration block
global_defs {

    router_id LVS_LB

}

vrrp_instance VI_1 {

    # The state MASTER designates the active server, the state BACKUP designates the backup server.
    state MASTER

    virtual_router_id 100

    # The interface parameter assigns the physical interface name 
    # to this particular virtual IP instance.
    interface ens192

    # The priority specifies the order in which the assigned interface
    # takes over in a failover; the higher the number, the higher the priority.
    # This priority value must be within the range of 0 to 255, and the Load Balancing 
    # server configured as state MASTER should have a priority value set to a higher number 
    # than the priority value of the server configured as state BACKUP.
    priority 150

    advert_int 1

    authentication {

        auth_type PASS

        # Don't forget to change auth_pass to something more secure.
        # auth_pass value MUST be same in both nodes.
        auth_pass Bx3ae3Gr

    }

    virtual_ipaddress {

        192.168.16.80

    }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.14.2. Please execute the following command on &lt;strong&gt;kubelb02&lt;/strong&gt; Server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/keepalived/keepalived.conf &amp;gt; /dev/null
# Global definitions configuration block
global_defs {

    router_id LVS_LB

}

vrrp_instance VI_1 {

    # The state MASTER designates the active server, the state BACKUP designates the backup server.
    state BACKUP

    virtual_router_id 100

    # The interface parameter assigns the physical interface name 
    # to this particular virtual IP instance.
    interface ens192

    # The priority specifies the order in which the assigned interface
    # takes over in a failover; the higher the number, the higher the priority.
    # This priority value must be within the range of 0 to 255, and the Load Balancing 
    # server configured as state MASTER should have a priority value set to a higher number 
    # than the priority value of the server configured as state BACKUP.
    priority 100

    advert_int 1

    authentication {

        auth_type PASS

        # Don't forget to change auth_pass to something more secure.
        # auth_pass value MUST be same in both nodes.
        auth_pass Bx3ae3Gr

    }

    virtual_ipaddress {

        192.168.16.80

    }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.15. Start and enable &lt;em&gt;keepalived&lt;/em&gt; service on &lt;strong&gt;both&lt;/strong&gt; load balancer nodes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Start and enable keepalived service&lt;/span&gt;
systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; keepalived

&lt;span class="c"&gt;# Check if the keepalived service is running&lt;/span&gt;
systemctl status keepalived


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.16. To determine whether a server is acting as the master, you can use the following command to see whether the virtual address is active.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

ip addr show ens192


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
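&lt;p&gt;The same check can be scripted. The &lt;code&gt;vip_role&lt;/code&gt; helper below is our own sketch; it assumes the virtual IP 192.168.16.80 from the keepalived configuration above:&lt;/p&gt;

```shell
# vip_role: read `ip addr` output on stdin and report whether this node
# currently holds the virtual IP (MASTER) or not (BACKUP).
# Helper name is ours; 192.168.16.80 is the VIP configured earlier.
vip_role() {
  if grep -q '192\.168\.16\.80'; then
    echo "MASTER"
  else
    echo "BACKUP"
  fi
}

# Usage on a load balancer node:
#   ip -4 addr show ens192 | vip_role
```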

&lt;p&gt;3.17. Configure nginx on &lt;strong&gt;both&lt;/strong&gt; load balancer nodes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/nginx/nginx.conf &amp;gt; /dev/null
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {

    worker_connections 2048;

}

stream {

    upstream stream_backend {

        # Load balance algorithm
        least_conn;

        # kubemaster01
        server kubemaster01.example.local:6443;

        # kubemaster02
        server kubemaster02.example.local:6443;

        # kubemaster03
        server kubemaster03.example.local:6443;

    }

    server {

        listen                  6443;
        proxy_pass              stream_backend;

        proxy_timeout           300s;
        proxy_connect_timeout   60s;

    }

}
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.18. Start and enable &lt;em&gt;nginx&lt;/em&gt; service on &lt;strong&gt;both&lt;/strong&gt; load balancer nodes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Start and enable nginx service&lt;/span&gt;
systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; nginx

&lt;span class="c"&gt;# Check if the nginx service is running&lt;/span&gt;
systemctl status nginx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.19. The servers need to be restarted before continuing.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

reboot


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3.20. Verify the load balancer.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl &lt;span class="nt"&gt;-k&lt;/span&gt; https://kube-api.example.local:6443


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;If the load balancers are working, you should get the following output.&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl: &lt;span class="o"&gt;(&lt;/span&gt;35&lt;span class="o"&gt;)&lt;/span&gt; OpenSSL SSL_connect: SSL_ERROR_SYSCALL &lt;span class="k"&gt;in &lt;/span&gt;connection to &lt;span class="o"&gt;[&lt;/span&gt;https://kube-api.example.local:6443]&lt;span class="o"&gt;(&lt;/span&gt;https://kube-api.example.local:6443&lt;span class="o"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt; &lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  4. Install and Configure Kubernetes
&lt;/h2&gt;
&lt;h3&gt;
  
  
  4.1. Install prerequisites on &lt;strong&gt;BOTH&lt;/strong&gt; Master and Worker nodes
&lt;/h3&gt;
&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Verify the &lt;strong&gt;MAC address&lt;/strong&gt; and &lt;strong&gt;product_uuid&lt;/strong&gt; are unique for every node. You can get the MAC address of the network interfaces using the following command.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

ip &lt;span class="nb"&gt;link&lt;/span&gt; | &lt;span class="nb"&gt;grep link&lt;/span&gt;/ether


&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;product_uuid&lt;/strong&gt; can be checked using the below command.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; /sys/class/dmi/id/product_uuid


&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Verify the &lt;strong&gt;Linux Kernel&lt;/strong&gt; version is greater than &lt;strong&gt;4.5.0&lt;/strong&gt;. It can be checked using the following command.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker, Rocky Linux 8 and the XFS filesystem can be a troublesome combination if you do not meet all the requirements of the overlay/overlay2 storage driver.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The overlay storage driver relies on a filesystem feature called "directory entry type" (&lt;em&gt;d_type&lt;/em&gt;), which describes the type of each directory entry on the filesystem. Make sure your filesystem has &lt;em&gt;d_type&lt;/em&gt; support enabled.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

xfs_info / | &lt;span class="nb"&gt;grep &lt;/span&gt;ftype


&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;ftype&lt;/strong&gt; value must be set to &lt;strong&gt;1&lt;/strong&gt;. If it is not, do not continue any further.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;disk.EnableUUID&lt;/strong&gt; parameter must be set to &lt;strong&gt;TRUE&lt;/strong&gt; on all VMware VMs. To do this, power off the VMs and go to &lt;strong&gt;Edit Settings&lt;/strong&gt;. Then, navigate to &lt;strong&gt;VM Options -&amp;gt; Advanced -&amp;gt; Edit Configuration&lt;/strong&gt;. Go through the existing configuration parameters and verify that the disk.EnableUUID property exists and its value is already set to TRUE. If the disk.EnableUUID property does not exist, add it using &lt;strong&gt;ADD CONFIGURATION PARAMS&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1ovpgg5mnya8azd5hof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1ovpgg5mnya8azd5hof.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
 &lt;/p&gt;
&lt;/blockquote&gt;
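&lt;p&gt;The &lt;strong&gt;ftype&lt;/strong&gt; check above can be wrapped in a small guard. The &lt;code&gt;ftype_ok&lt;/code&gt; helper below is our own sketch; it only parses &lt;code&gt;xfs_info&lt;/code&gt; output:&lt;/p&gt;

```shell
# ftype_ok: succeed only when the xfs_info output on stdin reports ftype=1
# (d_type support enabled). Helper name is ours.
ftype_ok() {
  grep -q 'ftype=1'
}

# Usage on a node (stop here if it prints the warning):
#   xfs_info / | ftype_ok || echo "WARNING: ftype is not 1 - do not continue"
```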

&lt;p&gt;4.1.1. Set server hostname.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Example:&lt;/span&gt;
&lt;span class="c"&gt;# hostnamectl set-hostname kubelb01&lt;/span&gt;

hostnamectl set-hostname &amp;lt;&lt;span class="nb"&gt;hostname&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.2. Install prerequisites.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Clean YUM repository cache&lt;/span&gt;
dnf clean all

&lt;span class="c"&gt;# Update packages&lt;/span&gt;
dnf update &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# Install prerequisites&lt;/span&gt;
dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; vim net-tools chrony ntpstat


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.3. Synchronize server time with &lt;strong&gt;Google NTP&lt;/strong&gt; server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Add Google NTP Server&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/^pool/c\pool time.google.com iburst'&lt;/span&gt; /etc/chrony.conf

&lt;span class="c"&gt;# Set timezone to Asia/Colombo&lt;/span&gt;
timedatectl set-timezone Asia/Colombo

&lt;span class="c"&gt;# Enable NTP time synchronization&lt;/span&gt;
timedatectl set-ntp &lt;span class="nb"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
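&lt;p&gt;The &lt;code&gt;sed&lt;/code&gt; expression above replaces any line beginning with &lt;code&gt;pool&lt;/code&gt; wholesale. A quick demonstration on a throwaway copy (the demo path and sample pool line are ours):&lt;/p&gt;

```shell
# Demonstrate the substitution on a scratch file instead of /etc/chrony.conf.
demo=/tmp/chrony.conf.demo
printf 'pool 2.pool.ntp.org iburst\n' > "$demo"

# Same expression as above: the c\ command replaces the matching line entirely.
sed -i '/^pool/c\pool time.google.com iburst' "$demo"

cat "$demo"   # -> pool time.google.com iburst
```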

&lt;p&gt;4.1.4. Start and enable &lt;em&gt;chronyd&lt;/em&gt; service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Start and enable chronyd service&lt;/span&gt;
systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; chronyd

&lt;span class="c"&gt;# Check if chronyd service is running&lt;/span&gt;
systemctl status chronyd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.5. Display time synchronization status.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Verify synchronisation state&lt;/span&gt;
ntpstat

&lt;span class="c"&gt;# Check Chrony Source Statistics&lt;/span&gt;
chronyc sourcestats &lt;span class="nt"&gt;-v&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.6. Permanently disable SELinux.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Permanently disable SELinux&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/^SELINUX=enforcing$/SELINUX=disabled/'&lt;/span&gt; /etc/selinux/config


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.7. Enable IP masquerade at the Linux firewall.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Enable IP masquerade at the firewall&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-masquerade&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.8. Disable IPv6 on network interface.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Disable IPv6 on ens192 interface&lt;/span&gt;
nmcli connection modify ens192 ipv6.method ignore


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.9. Execute the following commands to turn off all swap devices and files.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Permanently disable swapping&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/ swap / s/^/#/'&lt;/span&gt; /etc/fstab

&lt;span class="c"&gt;#d Disable all existing swaps from /proc/swaps&lt;/span&gt;
swapoff &lt;span class="nt"&gt;-a&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.10. Enable auto-loading of required kernel modules.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Enable auto-loading of required kernel modules&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/modules-load.d/crio.conf &amp;gt; /dev/null
overlay
br_netfilter
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Add overlay and br_netfilter kernel modules to the Linux kernel&lt;/span&gt;
&lt;span class="c"&gt;# The br_netfilter kernel modules will enable transparent masquerading and facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster&lt;/span&gt;
modprobe overlay
modprobe br_netfilter


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.11. Disable &lt;strong&gt;file access time logging&lt;/strong&gt; and &lt;strong&gt;combat fragmentation&lt;/strong&gt; to enhance XFS file system performance. Add &lt;code&gt;noatime,nodiratime,allocsize=64m&lt;/code&gt; to all XFS volumes under &lt;code&gt;/etc/fstab&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Edit /etc/fstab&lt;/span&gt;
vim /etc/fstab

&lt;span class="c"&gt;# Modify XFS volume entries as follows&lt;/span&gt;
&lt;span class="c"&gt;# Example:&lt;/span&gt;
&lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"03c97344-9b3d-45e2-9140-cbbd57b6f085"&lt;/span&gt;  /  xfs  defaults,noatime,nodiratime,allocsize&lt;span class="o"&gt;=&lt;/span&gt;64m  0 0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.12. Tweak the system for high concurrency and security.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/sysctl.d/00-sysctl.conf &amp;gt; /dev/null
#############################################################################################
# Tweak virtual memory
#############################################################################################

# Default: 30
# 0 - Never swap under any circumstances.
# 1 - Do not swap unless there is an out-of-memory (OOM) condition.
vm.swappiness = 1

# vm.dirty_background_ratio is used to adjust how the kernel handles dirty pages that must be flushed to disk.
# Default value is 10.
# The value is a percentage of the total amount of system memory, and setting this value to 5 is appropriate in many situations.
# This setting should not be set to zero.
vm.dirty_background_ratio = 5

# The total number of dirty pages that are allowed before the kernel forces synchronous operations to flush them to disk
# can also be increased by changing the value of vm.dirty_ratio, increasing it to above the default of 30 (also a percentage of total system memory)
# vm.dirty_ratio value in-between 60 and 80 is a reasonable number.
vm.dirty_ratio = 60

# vm.max_map_count limits the maximum number of memory map areas a process may have.
# As a rule of thumb, allow roughly one map per 128 KB of system memory,
# which works out to 262144 on a 32 GB system.
# Default: 65530
vm.max_map_count = 2097152

#############################################################################################
# Tweak file handles
#############################################################################################

# Increases the size of file handles and inode cache and restricts core dumps.
fs.file-max = 2097152
fs.suid_dumpable = 0

#############################################################################################
# Tweak network settings
#############################################################################################

# Default amount of memory allocated for the send and receive buffers for each socket.
# This will significantly increase performance for large transfers.
net.core.wmem_default = 25165824
net.core.rmem_default = 25165824

# Maximum amount of memory allocated for the send and receive buffers for each socket.
# This will significantly increase performance for large transfers.
net.core.wmem_max = 25165824
net.core.rmem_max = 25165824

# In addition to the socket settings, the send and receive buffer sizes for
# TCP sockets must be set separately using the net.ipv4.tcp_wmem and net.ipv4.tcp_rmem parameters.
# These are set using three space-separated integers that specify the minimum, default, and maximum sizes, respectively.
# The maximum size cannot be larger than the values specified for all sockets using net.core.wmem_max and net.core.rmem_max.
# A reasonable setting is a 4 KiB minimum, 64 KiB default, and 2 MiB maximum buffer.
net.ipv4.tcp_wmem = 20480 12582912 25165824
net.ipv4.tcp_rmem = 20480 12582912 25165824

# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 65536 262144 25165824
net.ipv4.udp_mem = 65536 262144 25165824

# Minimum amount of memory allocated for the send and receive buffers for each socket.
net.ipv4.udp_wmem_min = 16384
net.ipv4.udp_rmem_min = 16384

# Enabling TCP window scaling by setting net.ipv4.tcp_window_scaling to 1 allows
# clients to transfer data more efficiently, and allows that data to be buffered on the server side.
net.ipv4.tcp_window_scaling = 1

# Increasing the value of net.ipv4.tcp_max_syn_backlog above the default of 1024 will allow
# a greater number of simultaneous connections to be accepted.
net.ipv4.tcp_max_syn_backlog = 10240

# Increasing the value of net.core.netdev_max_backlog to greater than the default of 1000
# can assist with bursts of network traffic, specifically when using multigigabit network connection speeds,
# by allowing more packets to be queued for the kernel to process them.
net.core.netdev_max_backlog = 65536

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824

# Number of SYN+ACK retransmissions for passive TCP connections.
net.ipv4.tcp_synack_retries = 2

# Allowed local port range.
net.ipv4.ip_local_port_range = 2048 65535

# Protect Against TCP Time-Wait
# Default: net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rfc1337 = 1

# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

# The maximum number of backlogged sockets.
# Default is 128.
net.core.somaxconn = 4096

# Turn on syncookies for SYN flood attack protection.
net.ipv4.tcp_syncookies = 1

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Enable automatic window scaling.
# This will allow the TCP buffer to grow beyond its usual maximum of 64K if the latency justifies it.
net.ipv4.tcp_window_scaling = 1

# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1

# Tells the kernel how many TCP sockets that are not attached to any
# user file handle to maintain. In case this number is exceeded,
# orphaned connections are immediately reset and a warning is printed.
# Default: net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_max_orphans = 65536

# Do not cache metrics on closing connections
net.ipv4.tcp_no_metrics_save = 1

# Enable timestamps as defined in RFC1323:
# Default: net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_timestamps = 1

# Enable selective acknowledgments (SACK).
# Default: net.ipv4.tcp_sack = 1
net.ipv4.tcp_sack = 1

# Increase the tcp-time-wait buckets pool size to prevent simple DOS attacks.
# net.ipv4.tcp_tw_recycle has been removed from Linux 4.12. Use net.ipv4.tcp_tw_reuse instead.
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1

# The accept_source_route option causes network interfaces to accept packets with the Strict Source Routing (SSR) or Loose Source Routing (LSR) option set.
# The following setting will drop packets with the SSR or LSR option set.
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Disable ICMP redirect acceptance
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0

# Disables sending of all IPv4 ICMP redirected packets.
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

#############################################################################################
# Kubernetes related settings
#############################################################################################

# Enable IP forwarding.
# IP forwarding is the ability of an operating system to accept an incoming network packet on one interface,
# recognize that it is not meant for the system itself, and pass it on to another network accordingly.
net.ipv4.ip_forward = 1

# These settings control whether packets traversing a network bridge are processed by iptables rules on the host system.
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# To prevent the Linux conntrack table from running out of space, increase the conntrack table size.
# This setting is for Calico networking.
net.netfilter.nf_conntrack_max = 1000000

#############################################################################################
# Tweak kernel parameters
#############################################################################################

# Address Space Layout Randomization (ASLR) is a memory-protection process for operating systems that guards against buffer-overflow attacks.
# It helps to ensure that the memory addresses associated with running processes on systems are not predictable,
# thus flaws or vulnerabilities associated with these processes will be more difficult to exploit.
# Accepted values: 0 = Disabled, 1 = Conservative Randomization, 2 = Full Randomization
kernel.randomize_va_space = 2

# Allow for more PIDs (to reduce rollover problems)
kernel.pid_max = 65536
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.13. Reload all &lt;em&gt;sysctl&lt;/em&gt; variables without rebooting the server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

sysctl &lt;span class="nt"&gt;--system&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.14. Create Local DNS records.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/hosts &amp;gt; /dev/null
# localhost
127.0.0.1     localhost        localhost.localdomain

# When DNS records are updated in the DNS server, remove these entries.
192.168.16.80  kube-api.example.local
192.168.16.102 kubemaster01  kubemaster01.example.local
192.168.16.103 kubemaster02  kubemaster02.example.local
192.168.16.104 kubemaster03  kubemaster03.example.local
192.168.16.105 kubeworker01  kubeworker01.example.local
192.168.16.106 kubeworker02  kubeworker02.example.local
192.168.16.107 kubeworker03  kubeworker03.example.local
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.15. Configure &lt;em&gt;NetworkManager&lt;/em&gt; before attempting to use Calico networking.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create the following configuration file to prevent NetworkManager from interfering with the interfaces&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/NetworkManager/conf.d/calico.conf &amp;gt; /dev/null
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.16. The servers must be restarted before continuing.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

reboot


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.17. Configure the &lt;em&gt;CRI-O&lt;/em&gt; container runtime repositories.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;The CRI-O major and minor versions must match the Kubernetes major and minor versions. For more information, see the &lt;a href="https://github.com/cri-o/cri-o" rel="noopener noreferrer"&gt;CRI-O compatibility matrix&lt;/a&gt;.&lt;br&gt;
 &lt;/p&gt;
&lt;/blockquote&gt;
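&lt;p&gt;As a quick sanity check, the two minor versions can be compared with plain shell string handling. This is a minimal sketch: the version strings below are the values this guide targets, and on a live host you would substitute the output of &lt;code&gt;kubeadm version -o short&lt;/code&gt; and &lt;code&gt;crio --version&lt;/code&gt; once both packages are installed.&lt;/p&gt;

```shell
# Example version strings; on a real host, substitute the output of
# 'kubeadm version -o short' and 'crio --version' once both are installed.
K8S_VERSION="v1.19.4"
CRIO_VERSION="1.19.0"

# Keep only the major.minor part of each version for comparison.
k8s_mm=$(echo "${K8S_VERSION#v}" | cut -d. -f1,2)
crio_mm=$(echo "$CRIO_VERSION" | cut -d. -f1,2)

if [ "$k8s_mm" = "$crio_mm" ]; then
  echo "OK: CRI-O $crio_mm matches Kubernetes $k8s_mm"
else
  echo "MISMATCH: CRI-O $crio_mm vs Kubernetes $k8s_mm"
fi
```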

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Set environment variables according to the operating system and Kubernetes version&lt;/span&gt;
&lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;CentOS_8
&lt;span class="nv"&gt;VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.19

&lt;span class="c"&gt;# Configure YUM repositories&lt;/span&gt;
curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/&lt;span class="nv"&gt;$OS&lt;/span&gt;/devel:kubic:libcontainers:stable.repo
curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:&lt;span class="nv"&gt;$VERSION&lt;/span&gt;.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:&lt;span class="nv"&gt;$VERSION&lt;/span&gt;/&lt;span class="nv"&gt;$OS&lt;/span&gt;/devel:kubic:libcontainers:stable:cri-o:&lt;span class="nv"&gt;$VERSION&lt;/span&gt;.repo


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.18. Install the &lt;em&gt;CRI-O&lt;/em&gt; package.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Install cri-o package&lt;/span&gt;
dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; cri-o


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.19. Start and enable the CRI-O service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Start and enable crio service&lt;/span&gt;
systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; crio

&lt;span class="c"&gt;# Check if the crio service is running&lt;/span&gt;
systemctl status crio


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.20. Add Kubernetes repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/yum.repos.d/kubernetes.repo &amp;gt; /dev/null
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.21. Install &lt;em&gt;kubeadm&lt;/em&gt;, &lt;em&gt;kubelet&lt;/em&gt; and &lt;em&gt;kubectl&lt;/em&gt; packages.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

dnf &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--disableexcludes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubernetes kubelet-1.19&lt;span class="k"&gt;*&lt;/span&gt; kubeadm-1.19&lt;span class="k"&gt;*&lt;/span&gt; kubectl-1.19&lt;span class="k"&gt;*&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.1.22. Pull the latest container images used by kubeadm.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubeadm config images pull


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  4.2. Configure &lt;strong&gt;MASTER&lt;/strong&gt; nodes
&lt;/h3&gt;
&lt;h4&gt;
  
  
  4.2.1. Prepare Master Nodes
&lt;/h4&gt;

&lt;p&gt;4.2.1.1. Open necessary firewall ports used by Kubernetes.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Open necessary firewall ports&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;public &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;={&lt;/span&gt;6443,2379,2380,10250,10251,10252&lt;span class="o"&gt;}&lt;/span&gt;/tcp

&lt;span class="c"&gt;# Allow docker access from another node&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;public &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-rich-rule&lt;/span&gt; &lt;span class="s1"&gt;'rule family=ipv4 source address=192.168.16.0/24 accept'&lt;/span&gt;

&lt;span class="c"&gt;# Apply firewall changes&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.1.2. Configure runtime cgroups used by &lt;em&gt;kubelet&lt;/em&gt; service.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;For Kubernetes versions 1.9.x and above, the &lt;em&gt;vsphere.conf&lt;/em&gt; file should be placed only on the Kubernetes &lt;strong&gt;MASTER&lt;/strong&gt; nodes.&lt;br&gt;
 &lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Configure runtime cgroups used by kubelet on ALL master nodes&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/sysconfig/kubelet &amp;gt; /dev/null
KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --cloud-provider=vsphere --cloud-config=/etc/kubernetes/vsphere.conf"
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.1.3. Enable &lt;em&gt;kubelet&lt;/em&gt; service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;kubelet


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.1.4. Create vSphere config file.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Please make sure to change the vSphere config values as appropriate. For details about how this file is constructed, refer to the &lt;a href="https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html" rel="noopener noreferrer"&gt;VMware Documentation&lt;/a&gt;.&lt;br&gt;
 &lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/kubernetes/vsphere.conf &amp;gt; /dev/null
[Global]
secret-name = "vmware"
secret-namespace = "kube-system"
port = "443"
insecure-flag = "1"

[VirtualCenter "vcenter.example.local"]
datacenters = "MAIN"

[Workspace]
server = "vcenter.example.local"
datacenter = "MAIN"
default-datastore = "MAIN-DS"
resourcepool-path = "MAIN/Resources"
folder = "Kubernetes"

[Disk]
scsicontrollertype = pvscsi
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  4.2.2. Configure the First Master Node (&lt;strong&gt;kubemaster01&lt;/strong&gt;)
&lt;/h4&gt;

&lt;p&gt;4.2.2.1. Create the &lt;strong&gt;kubeadm&lt;/strong&gt; config file.&lt;/p&gt;
&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Please make sure to change &lt;strong&gt;controlPlaneEndpoint&lt;/strong&gt; value as appropriate.&lt;br&gt;
 &lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/kubernetes/kubeadm.conf &amp;gt; /dev/null
---
apiServer:
  extraArgs:
    cloud-config: /etc/kubernetes/vsphere.conf
    cloud-provider: vsphere
    endpoint-reconciler-type: lease
  extraVolumes:
  - hostPath: /etc/kubernetes/vsphere.conf
    mountPath: /etc/kubernetes/vsphere.conf
    name: cloud
  timeoutForControlPlane: 4m0s
---
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: kube-api.example.local:6443
controllerManager:
  extraArgs:
    cloud-config: /etc/kubernetes/vsphere.conf
    cloud-provider: vsphere
  extraVolumes:
  - hostPath: /etc/kubernetes/vsphere.conf
    mountPath: /etc/kubernetes/vsphere.conf
    name: cloud
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
networking:
  dnsDomain: example.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.2.2. Initialize the first control plane.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubeadm init &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--config&lt;/span&gt; /etc/kubernetes/kubeadm.conf &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--upload-certs&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;h4&gt;
  
  
  🟢 You will get an output like this. Please make sure to record the &lt;strong&gt;MASTER&lt;/strong&gt; and &lt;strong&gt;WORKER&lt;/strong&gt; join commands.
&lt;/h4&gt;

&lt;p&gt;Your Kubernetes control-plane has initialized successfully!&lt;/p&gt;

&lt;p&gt;To start using your cluster, you need to run the following as a regular user:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/addons/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/concepts/cluster-administration/addons/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now join any number of the control-plane node running the following command on each as root:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;kube-api.example.local:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; ti2ho7.t146llqa4sn8y229 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:9e73a021b8b26c8a2fc04939729acc7670769f15469887162cdbae923df906f9 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--control-plane&lt;/span&gt; &lt;span class="nt"&gt;--certificate-key&lt;/span&gt; d9d631a0aef1a5a474faa6787b54814040adf1012c6c1922e8fe096094547b65 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.&lt;/p&gt;

&lt;p&gt;Then you can join any number of worker nodes by running the following on each as root:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;kube-api.example.local:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; ti2ho7.t146llqa4sn8y229 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:9e73a021b8b26c8a2fc04939729acc7670769f15469887162cdbae923df906f9 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;4.2.2.3. To start using kubectl, run the following commands.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.2.4. Create a Kubernetes secret with the base64 encoded vCenter credentials.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In order to create a Kubernetes secret, we must generate base64-encoded credentials. Please make sure &lt;strong&gt;NOT&lt;/strong&gt; to use special characters in the vCenter password.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let's assume your vCenter username is &lt;strong&gt;&lt;a href="mailto:kubernetes@vsphere.local"&gt;kubernetes@vsphere.local&lt;/a&gt;&lt;/strong&gt; and the password is &lt;strong&gt;KuB3VMwar3&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate base64 encoded username using &lt;code&gt;echo -n 'kubernetes@vsphere.local' | base64&lt;/code&gt;. Please make sure to record the output (Example output: a3ViZXJuZXRlc0B2c3BoZXJlLmxvY2Fs).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate base64 encoded password using &lt;code&gt;echo -n 'KuB3VMwar3' | base64&lt;/code&gt;. Please make sure to record the output (Example output: S3VCM1ZNd2FyMw==).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can refer to the &lt;a href="https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/k8s-secret.html" rel="noopener noreferrer"&gt;Securing vSphere Credentials&lt;/a&gt; section for more details.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
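&lt;p&gt;The encoding steps above can be sketched end to end as follows, using the example credentials from this guide (not real credentials). Note that a trailing newline must not be encoded into the value, or the secret will fail to authenticate; &lt;code&gt;printf '%s'&lt;/code&gt; below plays the same role as &lt;code&gt;echo -n&lt;/code&gt;.&lt;/p&gt;

```shell
# Example vCenter credentials from this guide; replace with your own.
VC_USER='kubernetes@vsphere.local'
VC_PASS='KuB3VMwar3'

# printf '%s' (like echo -n) avoids encoding a trailing newline into the value.
user_b64=$(printf '%s' "$VC_USER" | base64)
pass_b64=$(printf '%s' "$VC_PASS" | base64)

echo "$user_b64"   # a3ViZXJuZXRlc0B2c3BoZXJlLmxvY2Fs
echo "$pass_b64"   # S3VCM1ZNd2FyMw==
```

These two strings are exactly the values that go into the `vcenter.example.local.username` and `vcenter.example.local.password` fields of the secret created in the next step.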

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
 name: vmware
 namespace: kube-system
type: Opaque
data:
   vcenter.example.local.username: a3ViZXJuZXRlc0B2c3BoZXJlLmxvY2Fs
   vcenter.example.local.password: S3VCM1ZNd2FyMw==
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.2.5. Install calico CNI-plugin.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Install calico CNI-plugin&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://docs.projectcalico.org/manifests/calico.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.2.6. Check the &lt;em&gt;NetworkReady&lt;/em&gt; status. It must be &lt;strong&gt;TRUE&lt;/strong&gt;; if not, wait a few moments and check again.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Check NetworkReady status&lt;/span&gt;
watch crictl info


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.2.7. Watch the Pods created in the &lt;em&gt;kube-system&lt;/em&gt; namespace and make sure all are running.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Watch the Pods created in the kube-system namespace&lt;/span&gt;
watch kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt; kube-system


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.2.8. Check master node status.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Check master node status&lt;/span&gt;
kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.2.9. Verify that a ProviderID has been added to the nodes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# You should get an output like below&lt;/span&gt;
&lt;span class="c"&gt;# ProviderID: vsphere://4204a018-f286-cf3c-7f2d-c512d9f7d90d&lt;/span&gt;
kubectl describe nodes | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"ProviderID"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  4.2.3. Configure other master nodes (&lt;strong&gt;kubemaster02&lt;/strong&gt; and &lt;strong&gt;kubemaster03&lt;/strong&gt;).
&lt;/h4&gt;
&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Make sure to join the other master nodes &lt;strong&gt;ONE BY ONE&lt;/strong&gt;, once &lt;strong&gt;kubemaster01&lt;/strong&gt; is &lt;strong&gt;READY&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Before executing the &lt;em&gt;kubeadm join&lt;/em&gt; command, verify that all pods are up and running using the command below.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get po,svc &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;--v=5&lt;/code&gt; argument with &lt;em&gt;kubeadm join&lt;/em&gt; to get verbose output.
 &lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;4.2.3.1. Execute the &lt;strong&gt;control-plane join&lt;/strong&gt; command recorded in step &lt;em&gt;&lt;strong&gt;4.2.2.2&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Control plane join command example:&lt;/span&gt;
kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;kube-api.example.local:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; ti2ho7.t146llqa4sn8y229 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:9e73a021b8b26c8a2fc04939729acc7670769f15469887162cdbae923df906f9 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--control-plane&lt;/span&gt; &lt;span class="nt"&gt;--certificate-key&lt;/span&gt; d9d631a0aef1a5a474faa6787b54814040adf1012c6c1922e8fe096094547b65 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.3.2. To start using kubectl, run the following commands.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2.3.3. Check master node status.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Check master node status&lt;/span&gt;
kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  4.3. Configure &lt;strong&gt;WORKER&lt;/strong&gt; nodes
&lt;/h3&gt;
&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Make sure to join the worker nodes &lt;strong&gt;ONE BY ONE&lt;/strong&gt;, once the &lt;strong&gt;MASTER&lt;/strong&gt; nodes are &lt;strong&gt;READY&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Before executing the &lt;em&gt;kubeadm join&lt;/em&gt; command on the worker nodes, verify that all pods are up and running on the master nodes using the command below.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get po,svc &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;--v=5&lt;/code&gt; argument with &lt;em&gt;kubeadm join&lt;/em&gt; to get verbose output.
 &lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;4.3.1. Open necessary firewall ports used by Kubernetes.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Open necessary firewall ports&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;public &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;={&lt;/span&gt;10250,30000-32767&lt;span class="o"&gt;}&lt;/span&gt;/tcp

&lt;span class="c"&gt;# Apply firewall changes&lt;/span&gt;
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.3.2. Configure runtime cgroups used by &lt;em&gt;kubelet&lt;/em&gt; service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Configure runtime cgroups used by kubelet&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/sysconfig/kubelet &amp;gt; /dev/null
KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --cloud-provider=vsphere"
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.3.3. Enable &lt;em&gt;kubelet&lt;/em&gt; service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;kubelet


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.3.4. Execute the &lt;strong&gt;worker node join&lt;/strong&gt; command recorded in step &lt;em&gt;&lt;strong&gt;4.2.2.2&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Worker node join command example:&lt;/span&gt;
kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;kube-api.example.local:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; ti2ho7.t146llqa4sn8y229 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:9e73a021b8b26c8a2fc04939729acc7670769f15469887162cdbae923df906f9 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  4.4. Configure &lt;strong&gt;MetalLB Load Balancer&lt;/strong&gt;
&lt;/h3&gt;
&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You &lt;strong&gt;MUST&lt;/strong&gt; execute these commands on a &lt;strong&gt;MASTER&lt;/strong&gt; node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure to follow these steps only when both the &lt;strong&gt;MASTER&lt;/strong&gt; and &lt;strong&gt;WORKER&lt;/strong&gt; nodes are &lt;strong&gt;READY&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure to execute &lt;code&gt;kubectl get po,svc --all-namespaces&lt;/code&gt; on a master node and verify all pods are up and running.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;4.4.1. Install MetalLB Load Balancer.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Install MetalLB Load Balancer&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.4.2. Create MetalLB ConfigMap.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create MetalLB ConfigMap&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2

      # MetalLB IP Pool
      addresses:
      - 192.168.16.200-192.168.16.250
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.4.3. Watch the Pods created in the &lt;em&gt;metallb-system&lt;/em&gt; namespace and make sure all are running.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Watch the Pods created in the metallb-system namespace&lt;/span&gt;
watch kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt; metallb-system


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  🟣 NOTE
&lt;/h3&gt;

&lt;p&gt;If you want to change the MetalLB IP Pool, please follow these steps.&lt;/p&gt;

&lt;p&gt;1. Note the old IPs allocated to services.&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get svc &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;2. Delete the old ConfigMap.&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system delete cm config


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;3. Apply the new ConfigMap&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
 config: |
    address-pools:
    - name: default
      protocol: layer2

      # MetalLB IP Pool
      addresses:
      - 192.168.16.150-192.168.16.175
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;4. Delete the existing MetalLB pods.&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system delete pod &lt;span class="nt"&gt;--all&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;5. New MetalLB pods will be created automatically. Please make sure the pods are running.&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; metallb-system get pods


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;6. Inspect new IPs of services.&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl get svc &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  4.5. Configure &lt;strong&gt;Kubernetes Dashboard&lt;/strong&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You &lt;strong&gt;MUST&lt;/strong&gt; execute these commands on a &lt;strong&gt;MASTER&lt;/strong&gt; node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure to follow these steps only when both the &lt;strong&gt;MASTER&lt;/strong&gt; and &lt;strong&gt;WORKER&lt;/strong&gt; nodes are &lt;strong&gt;READY&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure to execute &lt;code&gt;kubectl get po,svc --all-namespaces&lt;/code&gt; on a master node and verify all pods are up and running.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;4.5.1. Install Kubernetes Dashboard.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Install Kubernetes Dashboard&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.2. Create the Dashboard service account.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create the Dashboard service account&lt;/span&gt;
&lt;span class="c"&gt;# This will create a service account named dashboard-admin in the default namespace&lt;/span&gt;
kubectl create serviceaccount dashboard-admin &lt;span class="nt"&gt;--namespace&lt;/span&gt; kubernetes-dashboard


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.3. Bind the dashboard-admin service account to the cluster-admin role.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Bind the dashboard-admin service account to the cluster-admin role&lt;/span&gt;
kubectl create clusterrolebinding dashboard-admin &lt;span class="nt"&gt;--clusterrole&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster-admin &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--serviceaccount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubernetes-dashboard:dashboard-admin


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.4. When we created the &lt;strong&gt;dashboard-admin&lt;/strong&gt; service account, Kubernetes also created a secret for it. List secrets using the following command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# When we created the dashboard-admin service account Kubernetes also created a secret for it.&lt;/span&gt;
&lt;span class="c"&gt;# List secrets using:&lt;/span&gt;
kubectl get secrets &lt;span class="nt"&gt;--namespace&lt;/span&gt; kubernetes-dashboard


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.5. Get &lt;strong&gt;Dashboard Access Token&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# We can see the dashboard-admin-sa service account secret in the above command output.&lt;/span&gt;
&lt;span class="c"&gt;# Use kubectl describe to get the access token:&lt;/span&gt;
kubectl describe &lt;span class="nt"&gt;--namespace&lt;/span&gt; kubernetes-dashboard secret dashboard-admin-token


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.6. Watch pods and services under the kubernetes-dashboard namespace.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Watch Pods and Service accounts under kubernetes-dashboard&lt;/span&gt;
watch kubectl get po,svc &lt;span class="nt"&gt;--namespace&lt;/span&gt; kubernetes-dashboard


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.7. Get logs of kubernetes-dashboard.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Get logs of kubernetes-dashboard&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;--follow&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; kubernetes-dashboard deployment/kubernetes-dashboard


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.8. Create kubernetes-dashboard load balancer.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create kubernetes-dashboard load balancer&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-dashboard
  name: dashboard-load-balancer
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.9. Check the kubernetes-dashboard logs again after creating the load balancer.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Get logs of kubernetes-dashboard&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;--follow&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; kubernetes-dashboard deployment/kubernetes-dashboard


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.5.10. Get kubernetes-dashboard &lt;strong&gt;External IP&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Get kubernetes-dashboard external IP&lt;/span&gt;
kubectl get po,svc &lt;span class="nt"&gt;--namespace&lt;/span&gt; kubernetes-dashboard | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; service/dashboard-load-balancer


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  4.6. Create vSphere Storage Class
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create vSphere storage class&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;' | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/vsphere-volume
parameters:
    datastore: KUBE-DS
    diskformat: thin
    fstype: xfs
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  4.7. Deploy a &lt;strong&gt;Sample WordPress Blog&lt;/strong&gt;
&lt;/h3&gt;
&lt;h3&gt;
  
  
  🔵 Important
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You &lt;strong&gt;MUST&lt;/strong&gt; execute these commands on a &lt;strong&gt;MASTER&lt;/strong&gt; node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow these steps only after both the &lt;strong&gt;MASTER&lt;/strong&gt; and &lt;strong&gt;WORKER&lt;/strong&gt; nodes report &lt;strong&gt;READY&lt;/strong&gt; status.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure to execute &lt;code&gt;kubectl get po,svc --all-namespaces&lt;/code&gt; on a master node and verify all pods are up and running.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;4.7.1. Deploy a sample WordPress application using vSphere persistent volume claims.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create a MySQL container&lt;/span&gt;
kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://notebook.yasithab.com/gist/vsphere-mysql.yaml

&lt;span class="c"&gt;# Create an Apache WordPress container&lt;/span&gt;
kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://notebook.yasithab.com/gist/vsphere-wordpress.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.7.2. Get detailed information about the Persistent Volumes (PV) used by the application.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Describe PersistentVolume&lt;/span&gt;
kubectl describe pv


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.7.3. Describe the Persistent Volume Claims (PVC) used by the application.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Describe mysql persistance volume claim&lt;/span&gt;
kubectl describe pvc mysql-pv-claim

&lt;span class="c"&gt;# Describe wordpress persistance volume claim&lt;/span&gt;
kubectl describe pvc wp-pv-claim


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  4.8. Clean up Kubernetes
&lt;/h3&gt;
&lt;h3&gt;
  
  
  🔴 Caution
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;The following commands are used to &lt;strong&gt;RESET&lt;/strong&gt; your nodes and &lt;strong&gt;WIPE OUT&lt;/strong&gt; all components installed.&lt;/li&gt;
&lt;li&gt;You &lt;strong&gt;MUST&lt;/strong&gt; run this on &lt;strong&gt;ALL&lt;/strong&gt; &lt;strong&gt;MASTER&lt;/strong&gt; and &lt;strong&gt;WORKER&lt;/strong&gt; nodes.
 &lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;4.8.1. Remove Kubernetes Components from Nodes&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d&lt;/span&gt;
&lt;span class="c"&gt;# The reset process does not reset or clean up iptables rules or IPVS tables.&lt;/span&gt;
&lt;span class="c"&gt;# If you wish to reset iptables, you must do so manually by using the "iptables" command.&lt;/span&gt;
&lt;span class="c"&gt;# If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.&lt;/span&gt;

&lt;span class="c"&gt;# Remove Kubernetes Components from Nodes&lt;/span&gt;
kubeadm reset &lt;span class="nt"&gt;--force&lt;/span&gt;

&lt;span class="c"&gt;# The reset process does not clean your kubeconfig files and you must remove them manually&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;If you liked the post, then you may purchase my first cup of coffee ever, thanks in advance :)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/yasithab" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.buymeacoffee.com%2Fbuttons%2Fdefault-orange.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="http://dockerlabs.collabnix.com/kubernetes/beginners/Install-and-configure-a-multi-master-Kubernetes-cluster-with-kubeadm.html" rel="noopener noreferrer"&gt;Install and configure a multi-master Kubernetes cluster with kubeadm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kubeclusters.com/docs/How-to-Deploy-a-Highly-Available-kubernetes-Cluster-with-Kubeadm-on-CentOS7" rel="noopener noreferrer"&gt;How to Deploy a HA Kubernetes Cluster with kubeadm on CentOS7&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/velotio-perspectives/demystifying-high-availability-in-kubernetes-using-kubeadm-3d83ed8c458b" rel="noopener noreferrer"&gt;Demystifying High Availability in Kubernetes Using Kubeadm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://octetz.com/docs/2019/2019-03-26-ha-control-plane-kubeadm/" rel="noopener noreferrer"&gt;Highly Available Control Plane with kubeadm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.inkubate.io/install-and-configure-a-multi-master-kubernetes-cluster-with-kubeadm/" rel="noopener noreferrer"&gt;Install and configure a multi-master Kubernetes cluster with kubeadm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://labs.consol.de/kubernetes/2018/05/25/kubeadm-backup.html" rel="noopener noreferrer"&gt;HA Cluster vs. Backup/Restore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.jordyverbeek.nl/nieuws/kubernetes-ha-cluster-installation-guide" rel="noopener noreferrer"&gt;Kubernetes HA Cluster installation guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability" rel="noopener noreferrer"&gt;Creating Highly Available clusters with kubeadm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/" rel="noopener noreferrer"&gt;Deploy Kubernetes on vSphere&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html" rel="noopener noreferrer"&gt;vSphere Cloud Provider Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://rook.io/docs/rook/v0.6/kubernetes.html" rel="noopener noreferrer"&gt;Rook on Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.definit.co.uk/2019/06/lab-guide-kubernetes-and-storage-with-the-vsphere-cloud-provider-step-by-step/" rel="noopener noreferrer"&gt;Lab Guide - Kubernetes and Storage With the Vsphere Cloud Provider - Step by Step&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.inkubate.io/use-vsphere-storage-as-kubernetes-persistant-volumes/" rel="noopener noreferrer"&gt;Use vSphere Storage as Kubernetes persistent volumes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html" rel="noopener noreferrer"&gt;Dynamic Provisioning and StorageClass API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://rook.io/docs/rook/v1.0/ceph-teardown.html" rel="noopener noreferrer"&gt;ROOK - Teardown Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.objectif-libre.com/en/blog/2019/06/11/metallb/" rel="noopener noreferrer"&gt;What You Need to Know About MetalLB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://metallb.universe.tf/configuration/" rel="noopener noreferrer"&gt;MetalLB Layer 2 Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="noopener noreferrer"&gt;Bare-metal considerations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d" rel="noopener noreferrer"&gt;Kubernetes Ingress 101: NodePort, Load Balancers, and Ingress Controllers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="noopener noreferrer"&gt;Ingress Controllers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cormachogan.com/2019/06/18/kubernetes-storage-on-vsphere-101-failure-scenarios/" rel="noopener noreferrer"&gt;Kubernetes Storage on vSphere 101 – Failure Scenarios&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cormachogan.com/2019/10/10/moving-a-stateful-app-from-vcp-to-csi-based-kubernetes-cluster-using-velero/" rel="noopener noreferrer"&gt;Moving a Stateful App from VCP to CSI based Kubernetes cluster using Velero&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ibm.com/support/knowledgecenter/en/SSYGQH_6.0.0/admin/install/cp_prereq_kubernetes_dns.html" rel="noopener noreferrer"&gt;Verifying that DNS is working correctly within your Kubernetes platform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noopener noreferrer"&gt;Debugging DNS Resolution&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cri-o.io" rel="noopener noreferrer"&gt;CRI-O&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes" rel="noopener noreferrer"&gt;Container Runtimes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://prog.world/cri-o-as-a-replacement-for-docker-as-the-runtime-for-kubernetes-setting-up-on-centos-8" rel="noopener noreferrer"&gt;CRI-O as a replacement for Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://upcloud.com/community/tutorials/install-kubernetes-cluster-centos-8" rel="noopener noreferrer"&gt;How to install Kubernetes cluster on CentOS 8&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>vmware</category>
      <category>linux</category>
    </item>
    <item>
      <title>Understanding Infrastructure as Code (IaC)</title>
      <dc:creator>Yasitha Bogamuwa</dc:creator>
      <pubDate>Wed, 15 Jun 2022 13:29:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/understanding-infrastructure-as-code-iac-1bhg</link>
      <guid>https://dev.to/aws-builders/understanding-infrastructure-as-code-iac-1bhg</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;This article discusses the benefits of implementing infrastructure as code (IaC), a methodology that enables organizations to manage and provision their IT infrastructure through code. IaC provides a number of advantages over traditional methods, such as increased speed and accuracy, improved consistency, and easier scalability. Additionally, IaC can reduce the manual effort required to maintain infrastructure, leading to cost savings and increased efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Infrastructure as Code?
&lt;/h2&gt;

&lt;p&gt;When most people think of infrastructure, they think of the physical things that make up a company’s or organization’s IT environment. Servers, storage, networking gear, and so on. However, the definition of infrastructure has been changing over the past few years to include not just the physical components but also the software that controls them. This new category of software is often referred to as Infrastructure as Code (IaC).&lt;/p&gt;

&lt;p&gt;Before the era of infrastructure as code, changes to an organization’s IT infrastructure were made manually. This process was often time-consuming, error-prone, and difficult to track. In many cases, it also resulted in duplicate work and inconsistency across environments.&lt;/p&gt;

&lt;p&gt;In recent years, the use of Infrastructure as Code (IaC) has gained popularity in the DevOps community. IaC is the process of managing and provisioning infrastructure using code, typically in a declarative language such as YAML or JSON. IaC enables users to version-control their infrastructure, treat it like software, and deploy it in the same way as they would their application code. This allows for more repeatable, predictable, and scalable infrastructure changes, and improves the consistency and reliability of the system. Let’s explore some of the benefits of using IaC, and how you can get started using it in your own projects. &lt;/p&gt;
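&lt;p&gt;As a concrete illustration, a declarative IaC definition in YAML can be as small as the following AWS CloudFormation template, which describes a single S3 bucket (the bucket name here is a placeholder chosen for this example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
# A minimal declarative IaC definition (AWS CloudFormation, YAML).
# "example-iac-demo-bucket" is a placeholder bucket name.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-iac-demo-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You declare only the desired resource; the tooling works out the API calls needed to create, update, or delete it.&lt;/p&gt;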

&lt;h2&gt;
  
  
  Types of IaC
&lt;/h2&gt;

&lt;p&gt;There are many different types of IaC, each with its own advantages and disadvantages. IaC tools like Terraform and Pulumi allow DevOps teams to write code that describes the desired state of their infrastructure. This code can be checked in to source control, allowing for collaboration and versioning. The IaC tool then automatically configures the resources to match the desired state. This results in more reliable and repeatable deployments, and allows for faster iteration on infrastructure changes. &lt;/p&gt;

&lt;p&gt;Let’s explore two of the most common open-source IaC tools: Terraform and Pulumi. Both allow you to declaratively define your infrastructure, but there are some key differences. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;br&gt;
Terraform is HashiCorp's infrastructure as code tool. It can be used to manage both public and private cloud providers, as well as traditional data center infrastructure. Compared with Pulumi, Terraform is more mature, has a larger user base, and offers better support for provider integrations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pulumi&lt;/strong&gt;&lt;br&gt;
Pulumi is a newer tool with a broader language surface: it allows you to use familiar general-purpose languages like Python, TypeScript, JavaScript, Go, .NET, and Java, as well as markup languages like YAML, which can be useful for certain use cases. &lt;/p&gt;

&lt;p&gt;For more information, please refer to Pulumi docs &lt;a href="https://www.pulumi.com/docs/intro/vs/terraform/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of IaC
&lt;/h2&gt;

&lt;p&gt;There are many benefits to using Infrastructure as Code (IaC) for businesses. Some of the key benefits include improving efficiency and agility, reducing costs, eliminating configuration drift, and improving security. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It helps businesses improve efficiency and agility by allowing them to quickly and easily deploy new infrastructure components or make changes to existing infrastructure. This can help businesses keep up with changing business requirements and keep their systems running smoothly. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It also helps businesses reduce costs by automating the deployment process. This can save time and money by eliminating the need for manual labor to set up new servers or make changes to existing infrastructure. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It improves security by ensuring that all systems are configured in a consistent manner. This can help prevent unauthorized access to systems and data, and minimize the risk of system failures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It eliminates configuration drift. Configuration drift is a common challenge that organizations face when it comes to managing their infrastructure. As new servers and applications are added, or as existing ones are updated, the configuration of the systems can change in ways that are not always tracked or accounted for. This can lead to an inconsistent system state, with different servers running different versions of software, or with incorrect settings that can undermine performance or security. With IaC, all aspects of the infrastructure are defined and controlled in code, rather than being configured manually. This enables automated testing and verification of the infrastructure, as well as reproducibility and predictability. By eliminating the possibility of configuration drift, IaC can help organizations maintain a stable and consistent system state.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Declarative vs Imperative
&lt;/h2&gt;

&lt;p&gt;The terms imperative and declarative come up often in IaC discussions. Both terms refer to how the user provides instructions to the automation platform. The imperative approach specifies the exact commands required to achieve the desired configuration, and those commands then need to be executed in the correct order. The declarative approach defines the desired state of the system, and the IaC tool determines how to achieve that state.&lt;/p&gt;

&lt;p&gt;The debate between declarative and imperative approaches to IaC can be contentious. Proponents of the declarative approach argue that it is more maintainable and scalable. Imperative proponents counter that the declarative approach can be verbose and less flexible.&lt;/p&gt;

&lt;p&gt;Many IaC tools follow the declarative approach and will automatically provision the desired infrastructure for you. If you alter the desired state, a declarative IaC tool will apply any changes for you. An imperative tool requires you to work out those changes and apply them yourself.&lt;/p&gt;
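&lt;p&gt;To make the contrast concrete, here is a small sketch (a generic Kubernetes example, not taken from any particular tool's documentation): the comments show the imperative route of explicit commands run in order, while the YAML declares the same desired state and leaves the steps to the tool.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
# Imperative: run explicit commands yourself, in the correct order, e.g.
#   kubectl create deployment nginx --image=nginx
#   kubectl scale deployment nginx --replicas=3
#
# Declarative: state the desired end result and let the tool converge to it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you later change &lt;code&gt;replicas&lt;/code&gt; to 5 and re-apply the file, a declarative tool computes and applies only the difference for you.&lt;/p&gt;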

&lt;p&gt;Ultimately, the best approach depends on your specific needs. In my experience, the best approach to IaC is to use declarative definition files where possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  So far so good. But every coin has two sides!
&lt;/h2&gt;

&lt;p&gt;We have spent much of this article underscoring the benefits of infrastructure as code, and they are real. Treating infrastructure as code adds a lot of value: it encourages consistent, predictable, and repeatable results in delivering application software throughout infrastructure provisioning and deployment. But that does not mean it has no downside. Adopting a programmatic approach means you reap both the advantages and the drawbacks that come along with it.&lt;/p&gt;

&lt;p&gt;On average, developers find and fix about &lt;strong&gt;100 bugs per 1000 lines of code&lt;/strong&gt;. Discovering one of these bugs takes 30 times longer than writing a single line. Developers typically spend about &lt;strong&gt;75% of their time debugging code&lt;/strong&gt; (about 1,500 hours per year). That’s where code review comes in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IdIq2xCd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jdznpjato7y6kxxjx8qu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IdIq2xCd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jdznpjato7y6kxxjx8qu.png" alt="Image description" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A code review is the process of examining software code with the intent of finding defects, or potential problems with the design. The code review is often done as a pair programming exercise, where two developers work on the same code at the same time. &lt;/p&gt;

&lt;p&gt;I won’t lie, code review can be frustrating at the start, but as soon as someone points an error that could have caused a serious security breach or an undesirable service outage, the initial pain is thoroughly compensated for. In the long run, code reviews are useful for minimizing disruptive errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of a Code Review
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It helps to find defects in the code. By having another set of eyes look at the code, you are more likely to find problems that you may have missed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It helps to improve the overall quality of the code. By finding and fixing defects early in the development process, you prevent them from becoming bigger problems later on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reviewing code also helps to ensure that the code is consistent and meets the standards of the organization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It helps to improve communication between team members. By working together on code, team members learn how to communicate better and work more effectively together. Also, it provides an opportunity for developers to learn from each other and improve their skills.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Learn some Best Practices
&lt;/h2&gt;

&lt;p&gt;As companies move to the cloud and adopt Infrastructure as Code (IaC) practices, they need to consider a number of critical success factors. The first step is understanding the IaC process and how it can be used to your advantage. Configuration management is key to managing your infrastructure in a reproducible way, while automation helps reduce the time and effort needed to manage your systems. Additionally, you’ll need to make sure your team has the proper training and tools in order to take advantage of IaC. By following these best practices, you can ensure that your company is getting the most out of its IaC investment.&lt;/p&gt;

&lt;p&gt;Determined by IaC best practices, let me offer you a few tips for creating an effective IaC strategy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Make sure to code everything in your infrastructure setup, and keep that file as your single source of truth. Whenever you need to confirm a setting, explore the layouts, or create any other configuration change, use the configuration files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Always put your configuration files under version control; this should be self-evident. Keep external documentation for infrastructure settings to a minimum (or skip it altogether). Since your configuration files are your main source of truth, there's no need to maintain additional documentation in external files, which can easily drift out of sync with the real configuration; the config files themselves never can.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test your configuration. IaC is code and, like other forms of code, it can be tested. By utilizing testing and scanning tools for IaC such as Checkov and TFSec, you can catch errors, vulnerabilities, and inconsistencies in your code before deploying it to production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Take your time and review the code!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
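&lt;p&gt;For the testing tip above, here is a minimal sketch of wiring a scanner into CI, shown as a hypothetical GitHub Actions job (the workflow name and trigger are assumptions for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
# Hypothetical CI job that scans IaC definitions with Checkov on every push.
name: iac-scan
on: [push]
jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: pip install checkov
      - run: checkov -d .   # fail the build if any check fails
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running the same scan locally (&lt;code&gt;checkov -d .&lt;/code&gt;) before committing gives you the feedback even earlier.&lt;/p&gt;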

&lt;p&gt;In conclusion, infrastructure as code offers many benefits to businesses. It can help improve efficiency, save money, and reduce the risk of human error. By using infrastructure as code, businesses can automate the process of creating and managing their infrastructure. This can help them improve their bottom line and become more competitive in the market.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you liked the post, then you may purchase my first cup of coffee ever, thanks in advance :)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/yasithab"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Oibfu3K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" width="434" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;1. &lt;a href="https://stackify.com/what-is-infrastructure-as-code-how-it-works-best-practices-tutorials/"&gt;What Is Infrastructure as Code? How It Works, Best Practices, Tutorials&lt;/a&gt;&lt;br&gt;
2. &lt;a href="https://docs.microsoft.com/en-us/devops/deliver/what-is-infrastructure-as-code"&gt;What is Infrastructure as Code?&lt;/a&gt;&lt;br&gt;
3. &lt;a href="https://www.linode.com/blog/devops/declarative-vs-imperative-in-iac/"&gt;Declarative vs. Imperative in IaC&lt;/a&gt;&lt;br&gt;
4. &lt;a href="https://medium.com/swlh/4-rules-of-thumb-for-providing-effective-code-review-feedback-bb188864f50d"&gt;Four Rules of Thumb for Providing Effective Code Review Feedback&lt;/a&gt;&lt;br&gt;
5. &lt;a href="https://devops.com/dark-side-infrastructure-code/"&gt;The Dark Side of Infrastructure as Code&lt;/a&gt;&lt;/p&gt;

</description>
      <category>iac</category>
      <category>terraform</category>
      <category>pulumi</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Create an Amazon Machine Image (AMI) from a FreePBX Virtual Machine (VM)</title>
      <dc:creator>Yasitha Bogamuwa</dc:creator>
      <pubDate>Tue, 14 Jun 2022 09:14:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-an-amazon-machine-image-ami-from-a-freepbx-virtual-machine-vm-3ok3</link>
      <guid>https://dev.to/aws-builders/create-an-amazon-machine-image-ami-from-a-freepbx-virtual-machine-vm-3ok3</guid>
      <description>&lt;p&gt;Creating an Amazon Machine Image (AMI) from a FreePBX Virtual Machine (VM) is a quick and easy way to create a ready-to-use virtual machine that, once configured, can be used to run your own PBX. This guide will show you how to create an AMI from a VM running the FreePBX open source software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Make sure to install and configure &lt;strong&gt;AWS Command Line Interface&lt;/strong&gt; in your host computer. You can find the instructions &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Please use an &lt;strong&gt;IAM user with administrator privileges&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This has been tested on &lt;strong&gt;VMware Workstation 15 Professional&lt;/strong&gt; edition.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;p&gt;1. Download the latest FreePBX Distro from &lt;a href="https://www.freepbx.org/downloads/"&gt;here&lt;/a&gt; and install it on VMware Workstation.&lt;/p&gt;

&lt;p&gt;2. SSH into the virtual machine and install the following packages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yum &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; cloud-init cloud-utils-growpart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Modify the &lt;code&gt;/etc/cloud/cloud.cfg&lt;/code&gt; file as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;system_info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;asterisk&lt;/span&gt;
    &lt;span class="na"&gt;lock_passwd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;gecos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Asterisk User&lt;/span&gt;
    &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;wheel&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;adm&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;systemd-journal&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;sudo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ALL=(ALL)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;NOPASSWD:ALL"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
  &lt;span class="na"&gt;distro&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rhel&lt;/span&gt;
  &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cloud_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/cloud&lt;/span&gt;
    &lt;span class="na"&gt;templates_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/cloud/templates&lt;/span&gt;
  &lt;span class="na"&gt;ssh_svcname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sshd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. Change the &lt;em&gt;/etc/ssh/sshd_config&lt;/em&gt; file as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;PasswordAuthentication no
PermitRootLogin no
UseDNS no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5. Shut down the VM and export it as an &lt;strong&gt;OVA&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0OHryB2S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://res.cloudinary.com/yasithab/image/upload/v1655203091/blog/freepbx_vba2ai.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0OHryB2S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://res.cloudinary.com/yasithab/image/upload/v1655203091/blog/freepbx_vba2ai.gif" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6. Create an S3 bucket and upload the exported OVA file using either the AWS CLI or an S3 client. I used the &lt;strong&gt;Cyberduck S3 Client&lt;/strong&gt;, which is freely available &lt;a href="https://cyberduck.io/download/"&gt;here&lt;/a&gt;.&lt;/p&gt;
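If you prefer the AWS CLI for the upload, a minimal sketch using the article's bucket and file names (the `mb`/`cp` calls are commented out so the snippet is safe to run as-is):

```shell
BUCKET="ami-storage"
OVA="FreePBX.ova"
printf 's3://%s/%s\n' "$BUCKET" "$OVA"   # destination URI: s3://ami-storage/FreePBX.ova

# aws s3 mb "s3://$BUCKET"               # create the bucket (once)
# aws s3 cp "$OVA" "s3://$BUCKET/$OVA"   # upload the exported OVA
```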

&lt;p&gt;7. You'll need to create the following policy documents. Make sure to change the &lt;strong&gt;S3 bucket&lt;/strong&gt; and &lt;strong&gt;OVA file&lt;/strong&gt; names to match your configuration. In this example, the S3 bucket and the OVA file are named &lt;strong&gt;ami-storage&lt;/strong&gt; and &lt;strong&gt;FreePBX.ova&lt;/strong&gt;, respectively.&lt;/p&gt;

&lt;p&gt;  7.A. Create &lt;strong&gt;trust-policy.json&lt;/strong&gt;. This will be used to create the &lt;strong&gt;vmimport&lt;/strong&gt; IAM role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{&lt;/span&gt;
   &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Version"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2012-10-17"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
   &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Statement"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
      &lt;span class="pi"&gt;{&lt;/span&gt;
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Effect"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Allow"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Principal"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Service"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vmie.amazonaws.com"&lt;/span&gt; &lt;span class="pi"&gt;},&lt;/span&gt;
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Action"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sts:AssumeRole"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Condition"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;StringEquals"&lt;/span&gt;&lt;span class="pi"&gt;:{&lt;/span&gt;
               &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sts:Externalid"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vmimport"&lt;/span&gt;
            &lt;span class="pi"&gt;}&lt;/span&gt;
         &lt;span class="pi"&gt;}&lt;/span&gt;
      &lt;span class="pi"&gt;}&lt;/span&gt;
   &lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create vmimport IAM role&lt;/span&gt;
aws iam create-role &lt;span class="nt"&gt;--role-name&lt;/span&gt; vmimport &lt;span class="nt"&gt;--assume-role-policy-document&lt;/span&gt; file://trust-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;  7.B. Create &lt;strong&gt;role-policy.json&lt;/strong&gt;. This will be used to assign necessary IAM policies to the vmimport role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{&lt;/span&gt; 
   &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Version"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2012-10-17"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
   &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Statement"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; 
      &lt;span class="pi"&gt;{&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Effect"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Allow"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Action"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3:ListBucket"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3:GetBucketLocation"&lt;/span&gt; 
         &lt;span class="pi"&gt;],&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Resource"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:aws:s3:::ami-storage"&lt;/span&gt; 
         &lt;span class="pi"&gt;]&lt;/span&gt; 
      &lt;span class="pi"&gt;},&lt;/span&gt; 
      &lt;span class="pi"&gt;{&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Effect"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Allow"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Action"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3:GetObject"&lt;/span&gt; 
         &lt;span class="pi"&gt;],&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Resource"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:aws:s3:::ami-storage/*"&lt;/span&gt; 
         &lt;span class="pi"&gt;]&lt;/span&gt; 
      &lt;span class="pi"&gt;},&lt;/span&gt; 
      &lt;span class="pi"&gt;{&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Effect"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Allow"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Action"&lt;/span&gt;&lt;span class="pi"&gt;:[&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ec2:ModifySnapshotAttribute"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ec2:CopySnapshot"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ec2:RegisterImage"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ec2:Describe*"&lt;/span&gt; 
         &lt;span class="pi"&gt;],&lt;/span&gt; 
         &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Resource"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt; 
      &lt;span class="pi"&gt;}&lt;/span&gt; 
   &lt;span class="pi"&gt;]&lt;/span&gt; 
&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create and assign necessary IAM policies to the vmimport role&lt;/span&gt;
aws iam put-role-policy &lt;span class="nt"&gt;--role-name&lt;/span&gt; vmimport &lt;span class="nt"&gt;--policy-name&lt;/span&gt; vmimport &lt;span class="nt"&gt;--policy-document&lt;/span&gt; file://role-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;  7.C. Create &lt;strong&gt;containers.json&lt;/strong&gt;. This will be used to generate an AMI from the uploaded OVA.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;[&lt;/span&gt; 
  &lt;span class="pi"&gt;{&lt;/span&gt; 
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Description"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FreePBX"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Format"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ova"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;UserBucket"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; 
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;S3Bucket"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ami-storage"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; 
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;S3Key"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FreePBX.ova"&lt;/span&gt; 
    &lt;span class="pi"&gt;}&lt;/span&gt; 
  &lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate an AMI from the uploaded OVA&lt;/span&gt;
aws ec2 import-image &lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"FreePBX"&lt;/span&gt; &lt;span class="nt"&gt;--license-type&lt;/span&gt; BYOL &lt;span class="nt"&gt;--disk-containers&lt;/span&gt; file://containers.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;  7.D. The import task typically takes 15 to 60 minutes to complete. You can check its progress with the following command, replacing the &lt;strong&gt;ImportTaskId&lt;/strong&gt; with the one returned by the above command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 describe-import-image-tasks &lt;span class="nt"&gt;--import-task-ids&lt;/span&gt; import-ami-0b900a870c359a58f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;  7.E. The task will remain active with &lt;code&gt;"StatusMessage": "pending"&lt;/code&gt; until it completes. The &lt;code&gt;"Progress"&lt;/code&gt; attribute indicates the percentage of work completed so far. Once the state switches to &lt;code&gt;"completed"&lt;/code&gt;, the previous command returns additional information about the conversion of the image to AMI format, and a new AMI will be available in the same region where you created the S3 bucket. It can be used to provision a FreePBX EC2 instance.&lt;/p&gt;
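The wait can also be scripted. A minimal sketch, where `is_done` is a hypothetical helper and the task ID is the example from step 7.D (the polling loop is commented out so the snippet is safe to run as-is):

```shell
# Hypothetical helper: succeeds once an import-task status string
# indicates the task has finished.
is_done() {
  [ "$1" = "completed" ] || [ "$1" = "deleted" ]
}

# Poll every 60 seconds; substitute the task ID returned by your
# import-image call.
# while ! is_done "$(aws ec2 describe-import-image-tasks \
#     --import-task-ids import-ami-0b900a870c359a58f \
#     --query 'ImportImageTasks[0].Status' --output text)"; do
#   sleep 60
# done
```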

&lt;p&gt;&lt;strong&gt;If you liked the post, then you may purchase my first cup of coffee ever, thanks in advance :)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/yasithab"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Oibfu3K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" width="434" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;1. &lt;a href="https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html"&gt;Importing a VM as an Image Using VM Import/Export&lt;/a&gt;&lt;br&gt;
2. &lt;a href="http://www.daniloaz.com/en/how-to-create-a-sentilo-aws-ec2-instance-from-an-ova-file/"&gt;How to create a Sentilo AWS EC2 instance from an OVA file&lt;/a&gt;&lt;br&gt;
3. &lt;a href="https://dribbble.com/shots/11875910-FreePBX-Tango-Scenes-Titanic/attachments/3501168?mode=media"&gt;FreePBX Tango Scenes by Crisy Meschieri&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>freepbx</category>
      <category>devops</category>
      <category>vmware</category>
    </item>
    <item>
      <title>Introduction to DevOps</title>
      <dc:creator>Yasitha Bogamuwa</dc:creator>
      <pubDate>Tue, 14 Jun 2022 04:38:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/introduction-to-devops-3m5g</link>
      <guid>https://dev.to/aws-builders/introduction-to-devops-3m5g</guid>
      <description>&lt;h2&gt;
  
  
  What does DevOps mean to You?
&lt;/h2&gt;

&lt;p&gt;DevOps is a term that has been around for a few years now, and its exact definition can be a little fuzzy. But at its core, DevOps is about breaking down the barriers between development and operations teams in order to improve collaboration and productivity. The goal is to create a more agile and responsive organization that can quickly adapt to changing business needs.&lt;/p&gt;

&lt;p&gt;The DevOps movement has gained a lot of traction in recent years, and many organizations are looking for the right tools to help them implement DevOps practices.&lt;/p&gt;

&lt;p&gt;There are many different models for implementing DevOps in your organization. But the most common approach is to use a combination of automation tools, process improvement techniques, and culture change strategies. Automation tools like Jenkins or TeamCity can help you streamline your build and deployment process, while process improvement techniques like Kanban or Scrum can help you optimize your workflow. And culture change strategies like training and education can help you create a more collaborative and productive work environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Adopt a DevOps Model
&lt;/h2&gt;

&lt;p&gt;When an organization begins to adopt a DevOps model, it must first determine what specific tools and applications will be used.  Many organizations find success by using open-source DevOps tools such as Git, Jenkins, SonarQube, Ansible, and Terraform.  In order to use these tools effectively, the organization must also make changes to their development and operations processes.&lt;/p&gt;

&lt;p&gt;One of the most important changes is the introduction of automated testing into the software development process. This can help to ensure that new code is error-free and can be safely deployed into production. Automated testing can also help to speed up the development process by identifying problems early on in the cycle.&lt;/p&gt;

&lt;p&gt;Another key change is the adoption of continuous integration (CI) and continuous delivery (CD).&lt;/p&gt;

&lt;p&gt;CI helps to ensure that changes made to code are properly integrated into the rest of the codebase. It also allows for earlier detection of errors and issues.&lt;/p&gt;

&lt;p&gt;CD helps to improve the quality of the software being released, to reduce the amount of time between when a change is made to the code and when that change is available to users, and to allow for more frequent feedback from users.&lt;/p&gt;

&lt;h2&gt;
  
  
  How can DevOps help your Business?
&lt;/h2&gt;

&lt;p&gt;The goal of DevOps is to make sure that the system works as efficiently as possible while still being able to meet customer demands.&lt;/p&gt;

&lt;p&gt;In addition to the traditional roles of developers and operations professionals, DevOps brings together people from other parts of the organization, such as Quality Assurance (QA) and Business Analysts (BA). The goal is to break down silos within the organization and to improve communication and collaboration between different groups.&lt;/p&gt;

&lt;p&gt;Some benefits of using DevOps include shorter wait times for new features, increased reliability, a shorter feedback loop, faster software delivery, and improved communication. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shorter Release Cycles&lt;/strong&gt;&lt;br&gt;
Shorter release cycles are possible because DevOps encourages breaking down the barriers between development and operations teams. This collaboration results in developers being able to quickly get feedback from operations on how their code is actually working in production and make necessary changes. As a result, there is less waste of time and effort due to misunderstandings or integration issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased Quality&lt;/strong&gt;&lt;br&gt;
Increased quality is another key benefit of DevOps. With tight collaboration between developers and operators, issues are caught and fixed much earlier in the process which leads to higher-quality products. In addition, automated testing can be used more extensively with DevOps to further improve product quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased Security&lt;/strong&gt;&lt;br&gt;
Increased security is another advantage of DevOps. By integrating security into the development process from the outset, vulnerabilities can be identified and addressed before they become a problem. And by automating many of the manual tasks involved in deploying software, DevOps makes it easier to enforce security policies and track compliance.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevOps Practices Explained!
&lt;/h2&gt;

&lt;p&gt;DevOps practices are designed to increase the efficiency and effectiveness of software development by integrating the functions of development, operations, and reliability engineering. &lt;/p&gt;

&lt;p&gt;There are six key tenets of DevOps: Communication and Collaboration, Continuous Integration, Continuous Delivery, Configuration Management, Infrastructure as Code, and Monitoring and Logging. These principles help to improve coordination between teams, speed up feedback loops, and reduce risk.&lt;/p&gt;

&lt;p&gt;The benefits of DevOps practices are clear: faster time to market, improved quality and reliability, and reduced costs. Let's look at this in more detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  Communication and Collaboration
&lt;/h3&gt;

&lt;p&gt;Employees who feel heard and connected to their team are more productive. In order for businesses to thrive in the current economy, communication and collaboration are essential. By creating an open dialogue, employees feel valued and are able to contribute their best ideas. When team members collaborate, they can produce better results by sharing knowledge and pooling resources.&lt;/p&gt;

&lt;p&gt;Businesses can benefit greatly from communication and collaboration. First, communication allows for a clear understanding of the goals and objectives of the project. With everyone on the same page, there is less chance for confusion and miscommunication. Second, collaboration enables team members to work together to come up with creative solutions to problems. This can lead to a more efficient and successful project. Finally, communication and collaboration help build trust among team members. This trust is essential for a successful DevOps initiative. When team members trust one another, they are more likely to be open and honest with each other, which leads to better results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Integration (CI)
&lt;/h3&gt;

&lt;p&gt;Continuous integration, or CI, is a software development practice that enables developers to integrate their code into a shared repository several times a day. This helps to ensure that the code is always in a working state and reduces the chances of introducing errors into the codebase.&lt;/p&gt;

&lt;p&gt;There are several business benefits of using continuous integration: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fewer bugs in the codebase&lt;/strong&gt;&lt;br&gt;
By integrating code multiple times a day, developers are able to catch and fix errors earlier in the development process. This leads to fewer defects in the final product.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shorter turnaround times&lt;/strong&gt;&lt;br&gt;
With a properly implemented CI system, developers can submit their changes for testing and approval quickly and easily. This leads to faster turnaround times for new features and bug fixes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Delivery (CD)
&lt;/h3&gt;

&lt;p&gt;In today's business world, it's essential to be able to quickly and efficiently release new features and updates to your product. This is where continuous delivery comes in. CD is a process that allows you to release software updates in a more automated way, which can lead to faster turnaround times and increased efficiency.&lt;/p&gt;

&lt;p&gt;There are many benefits of using CD for your business. One of the biggest advantages is that it enables you to push out changes more quickly and easily. This means you can get new features and updates into the hands of your customers faster, which can give you a competitive edge. Additionally, CD can help improve the quality of your product by catching errors earlier in the development cycle. &lt;/p&gt;

&lt;p&gt;Another benefit of CD is that it helps reduce downtime and increase reliability. When releases are done in a more automated way, there is less chance for something to go wrong.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration Management (CM)
&lt;/h3&gt;

&lt;p&gt;Configuration management (CM) is a term for the broad category of processes and technologies that help organizations manage and maintain the state of their IT infrastructure. CM tools allow organizations to define, track, and change the configurations of their systems in a controlled manner.&lt;/p&gt;

&lt;p&gt;There are many different CM tools on the market, but some of the most popular ones are Ansible, Puppet, and Chef. Each tool has its own strengths and weaknesses, but they all share one common goal: to help organizations manage their IT infrastructure more effectively.&lt;/p&gt;

&lt;p&gt;There are many business benefits to using a CM tool. Perhaps the most obvious benefit is that it can help organizations save time and money by automating tasks that would otherwise have to be done manually. CM tools can also help organizations better adhere to compliance regulations, improve system reliability and uptime, and reduce the risk of human error.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure as Code (IaC)
&lt;/h3&gt;

&lt;p&gt;In the world of business, time is money. Every minute wasted on manual processes means less time to focus on the company's core competencies and goals. This is where Infrastructure as Code (IaC) comes in, offering a way to manage IT infrastructure through code-based automation. &lt;/p&gt;

&lt;p&gt;IaC tools such as Terraform allow businesses to treat their infrastructure as a product that can be version controlled, tested, and deployed like any other application. Additionally, IaC tools can be used in conjunction with other tools such as Puppet or Chef, allowing you to automate even more of your infrastructure management tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Logging and Monitoring
&lt;/h3&gt;

&lt;p&gt;Businesses today realize the importance of monitoring and logging their systems. By doing so, they can detect issues early and prevent them from becoming bigger problems. Additionally, businesses can use the data collected by monitoring and logging to improve their systems.&lt;/p&gt;

&lt;p&gt;One benefit of monitoring is that businesses can detect issues early. If a problem is detected early, it can often be fixed before it causes any damage. This saves time and money, as well as reduces the risk of data loss or system failure.&lt;/p&gt;

&lt;p&gt;Another benefit of monitoring is that businesses can use the data collected to improve their systems. For example, if a business notices that a particular process is causing errors, they can investigate why that is happening and make changes to fix the issue. By doing this, businesses can make their systems more efficient and reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a DevOps Pipeline?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q_MNo89h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dy0yvpob8m2ajfcusp3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q_MNo89h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dy0yvpob8m2ajfcusp3x.png" alt="Image description" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A DevOps pipeline is a sequence of automated tasks that helps to simplify and speed up the process of software development. The pipeline begins with the collection of code from a repository and ends with the deployment of the software. In between, various tasks are performed, such as compiling code, running tests, and packaging the software.&lt;/p&gt;

&lt;p&gt;The advantages of using a DevOps pipeline are many. First, it helps to ensure that all changes are made in a consistent manner. Second, it allows for quick and easy feedback on whether changes have caused any problems. Third, it enables automation of many tasks that would otherwise have to be done manually. And fourth, it promotes collaboration between developers and operations staff.&lt;/p&gt;
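The stage sequence described above can be sketched as a plain shell script (the stage names follow the paragraph; the commands behind each stage are placeholders, not from any particular CI tool):

```shell
set -e
stage() { echo "== $1 =="; }

stage "Checkout"   # collect code from the repository
stage "Build"      # compile the code, e.g. mvn package
stage "Test"       # run the automated tests
stage "Package"    # package the software, e.g. build a container image
stage "Deploy"     # deploy the software
echo "pipeline finished"
```

In a real pipeline each stage would be a job in a tool like Jenkins or GitLab CI, with a failure in any stage stopping the run (which is what `set -e` mimics here).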

&lt;h2&gt;
  
  
  What are the most popular tools in the DevOps domain?
&lt;/h2&gt;

&lt;p&gt;There are a variety of tools that are used in the DevOps domain. Some of the most popular tools include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Version Control Tools such as GitLab, GitHub and Bitbucket.&lt;/li&gt;
&lt;li&gt;Build Tools like Maven.&lt;/li&gt;
&lt;li&gt;Continuous Integration Tools like Jenkins and TeamCity.&lt;/li&gt;
&lt;li&gt;Continuous Delivery Tools including Argo CD and GoCD.&lt;/li&gt;
&lt;li&gt;Infrastructure as Code Tools like Terraform, Pulumi, and CloudFormation.&lt;/li&gt;
&lt;li&gt;Code Quality and Code Security Inspection Tools like SonarQube.&lt;/li&gt;
&lt;li&gt;Configuration Management Tools such as Puppet, Chef, and Ansible.&lt;/li&gt;
&lt;li&gt;Container Platforms like Docker and Kubernetes.&lt;/li&gt;
&lt;li&gt;Vulnerability Scanners including Aqua Security Trivy and Clair.&lt;/li&gt;
&lt;li&gt;Communication and Collaboration Tools like Slack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools allow for automation of tasks and help to improve collaboration between developers and IT ops professionals.&lt;/p&gt;

&lt;p&gt;In conclusion, DevOps is a beneficial process for businesses. By implementing DevOps, businesses can improve their communication, collaboration, and productivity. Additionally, DevOps can help businesses become more agile and responsive to changes in the market. As a result, businesses that implement DevOps are more likely to be successful in the long run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you liked the post, then you may purchase my first cup of coffee ever, thanks in advance :)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/yasithab"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Oibfu3K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" width="434" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/devops/what-is-devops/"&gt;What is DevOps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hackernoon.com/what-does-devops-mean-to-you-2d38f4af3ded"&gt;What does DevOps mean to you?&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Convert Your AWS EC2 Linux PV Instance into an HVM Instance</title>
      <dc:creator>Yasitha Bogamuwa</dc:creator>
      <pubDate>Sun, 15 May 2022 13:01:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/convert-your-aws-ec2-linux-pv-instance-into-hvm-instance-3dic</link>
      <guid>https://dev.to/aws-builders/convert-your-aws-ec2-linux-pv-instance-into-hvm-instance-3dic</guid>
      <description>&lt;p&gt;Linux Amazon Machine Images use one of two types of virtualization: &lt;strong&gt;paravirtual (PV)&lt;/strong&gt; or &lt;strong&gt;hardware virtual machine (HVM)&lt;/strong&gt;. The main differences between PV and HVM AMIs are the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.&lt;/p&gt;

&lt;p&gt;This post will discuss the steps for converting an EC2 Linux instance from PV to HVM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A PV-based instance, an HVM instance (to be created), and a conversion instance (to be created).&lt;/li&gt;
&lt;li&gt;Snapshots of the EBS volume(s) of the instance to be converted.&lt;/li&gt;
&lt;li&gt;You'll need to &lt;strong&gt;snapshot the volume&lt;/strong&gt; of the existing PV instance. Ideally the snapshot is taken with the instance in a stopped state.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;If the instance has an instance store volume, any data on it will be lost when the instance is stopped.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;p&gt;1. Create an &lt;strong&gt;HVM instance&lt;/strong&gt; using an OS &lt;strong&gt;as close as possible to your current one&lt;/strong&gt; (i.e., make sure the kernel versions match).&lt;/p&gt;

&lt;p&gt;    1.A. Create the HVM instance in the same Availability Zone (AZ) as your PV instance (eu-west-1b in this example) and with an EBS volume of the same size (15 GB).&lt;/p&gt;

&lt;p&gt;    1.B. Once it has started and is running, you can stop it. Tag the volume with something like &lt;strong&gt;OS-HVM&lt;/strong&gt; and detach it from the HVM instance.&lt;/p&gt;
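
&lt;p&gt;If you prefer the AWS CLI to the console, step 1 can be sketched roughly as follows. The AMI, instance, and volume IDs below are hypothetical placeholders; substitute your own values.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Launch the HVM instance in the same AZ as the PV instance
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t3.micro \
    --placement AvailabilityZone=eu-west-1b

# Once it is running, stop it, tag its root volume, and detach the volume
aws ec2 stop-instances --instance-ids i-xxxxxxxx
aws ec2 create-tags --resources vol-xxxxxxxx --tags Key=Name,Value=OS-HVM
aws ec2 detach-volume --volume-id vol-xxxxxxxx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;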

&lt;p&gt;2. Start a &lt;strong&gt;conversion&lt;/strong&gt; instance (a temporary helper instance used to copy data between the volumes) and wait for it to fully start. (It is important that the instance is fully booted before attaching the other volumes.)&lt;/p&gt;

&lt;p&gt;    2.A. SSH into the conversion instance.&lt;/p&gt;

&lt;p&gt;    2.B. Attach the HVM volume to the conversion instance as &lt;em&gt;/dev/xvdf&lt;/em&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cd&lt;/span&gt; /    
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; /hvm
&lt;span class="nb"&gt;sudo &lt;/span&gt;mount /dev/xvdf1 /hvm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Steps 2.C, 2.D, and 2.E may not be needed on all Linux versions; however, performing them is recommended.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;    2.C. Preserve the HVM boot directory by moving it to the /tmp directory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo mv&lt;/span&gt; /hvm/boot  /tmp/boot.hvm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;    2.D. Empty out the rest of the drive with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-Rf&lt;/span&gt; /hvm/&lt;span class="k"&gt;*&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;    2.E. Verify the drive is now empty with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo ls&lt;/span&gt; &lt;span class="nt"&gt;-al&lt;/span&gt; /hvm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3. Using the snapshot of the volume from your PV instance, choose &lt;strong&gt;Create Volume&lt;/strong&gt; in the EC2 console. Keep the original volume size and create the volume in the same AZ (eu-west-1b). Find the new EBS volume and tag it &lt;strong&gt;OS-PV&lt;/strong&gt;.&lt;/p&gt;
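
&lt;p&gt;The same volume creation and tagging can also be done with the AWS CLI. The snapshot and volume IDs below are hypothetical placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Create a volume from the PV snapshot in the same AZ, then tag it
aws ec2 create-volume --snapshot-id snap-xxxxxxxx --availability-zone eu-west-1b
aws ec2 create-tags --resources vol-xxxxxxxx --tags Key=Name,Value=OS-PV


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;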

&lt;p&gt;    3.A. Attach it to the &lt;strong&gt;conversion&lt;/strong&gt; instance as &lt;em&gt;/dev/xvdg&lt;/em&gt; and mount it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; /pv
&lt;span class="nb"&gt;sudo &lt;/span&gt;mount /dev/xvdg /pv


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;    3.B. Verify that it is mounted and has the correct file structure.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# You should see the boot and other root directories.&lt;/span&gt;
&lt;span class="nb"&gt;sudo ls&lt;/span&gt; /pv


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;    3.C. Then copy the contents of /pv to /hvm:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; /pv/&lt;span class="k"&gt;*&lt;/span&gt; /hvm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4. Change the boot directory to be HVM based.&lt;/p&gt;

&lt;p&gt;    4.A. Remove the PV boot directory and replace it with the HVM boot directory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; /hvm/boot


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;    4.B. Move the saved HVM boot directory back into place.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo mv&lt;/span&gt; /tmp/boot.hvm  /hvm/boot


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;5. Verify that all the needed directories and files are now on the HVM volume:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo ls&lt;/span&gt; &lt;span class="nt"&gt;-al&lt;/span&gt; /hvm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;6. You can now unmount the HVM volume:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo &lt;/span&gt;umount /hvm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;7. Find the &lt;strong&gt;OS-HVM&lt;/strong&gt; volume and detach it from the conversion instance.&lt;/p&gt;

&lt;p&gt;    7.A. Attach it back to the HVM instance as device &lt;em&gt;/dev/xvda&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;    7.B. Start the HVM instance.&lt;/p&gt;
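
&lt;p&gt;The detach, attach, and start operations in step 7 can likewise be sketched with the AWS CLI. The volume and instance IDs below are hypothetical placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

# Detach the OS-HVM volume from the conversion instance
aws ec2 detach-volume --volume-id vol-xxxxxxxx

# Attach it to the HVM instance as the root device, then start the instance
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/xvda
aws ec2 start-instances --instance-ids i-xxxxxxxx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;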

&lt;p&gt;8. Clean up the conversion instance and any unwanted or left-over volumes and snapshots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you liked the post, then you may purchase my first cup of coffee ever, thanks in advance :)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/yasithab" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.buymeacoffee.com%2Fbuttons%2Fdefault-orange.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html" rel="noopener noreferrer"&gt;AWS Linux AMI Virtualization Types&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>linux</category>
      <category>hvm</category>
    </item>
  </channel>
</rss>
