<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: bharatrajtj</title>
    <description>The latest articles on DEV Community by bharatrajtj (@bharatrajtj).</description>
    <link>https://dev.to/bharatrajtj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F845431%2F71c81b03-4c07-448b-bfb1-2ad7fa981962.jpg</url>
      <title>DEV Community: bharatrajtj</title>
      <link>https://dev.to/bharatrajtj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bharatrajtj"/>
    <language>en</language>
    <item>
      <title>Python Basics: Functions</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Sat, 11 May 2024 15:17:19 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/python-basics-functions-4833</link>
      <guid>https://dev.to/bharatrajtj/python-basics-functions-4833</guid>
      <description>&lt;p&gt;Functions in Python serve the dual purpose of enhancing code readability and promoting reusability. Each function encapsulates specific logic, facilitating easier debugging processes.&lt;/p&gt;

&lt;p&gt;Below is a basic Python program demonstrating the use of functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Declaring Global Variables
a = 50
b = 20

def add():               # Defining a function named 'add'
    add_result = a + b   # Defining logic for addition
    print(add_result)    # Printing the result of addition

def sub():                # Defining a function named 'sub'
    sub_result = a - b    # Defining logic for subtraction
    print(sub_result)     # Printing the result of subtraction

def mul():                # Defining a function named 'mul'
    mul_result = a * b    # Defining logic for multiplication
    print(mul_result)     # Printing the result of multiplication

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we execute the above code, nothing is printed: the functions are defined but never called.&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqm1qw9w47al0g6qp124.png" alt="Image description" width="800" height="1244"&gt;&lt;/p&gt;

&lt;p&gt;In this Python script, to execute the defined functions, we need to call them explicitly.&lt;/p&gt;

&lt;h3&gt;
  Calling the Functions
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Declaring Global Variables
a = 50
b = 20

def add():               # Define function named 'add'
    result = a + b      # Define logic
    print(result)       # Output the result

def sub():               # Define function named 'sub'
    result = a - b       # Define logic
    print(result)        # Output the result

def mul():               # Define function named 'mul'
    result = a * b       # Define logic
    print(result)        # Output the result

add()  # Call add function
sub()  # Call sub function
mul()  # Call mul function

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we run this script, it calls each function (add(), sub(), mul()) in sequence, printing 70, 30, and 1000 respectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnnm0ywdv7dnah31s3s8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnnm0ywdv7dnah31s3s8.png" alt="Image description" width="800" height="1305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we want to assign specific values to each function instead of relying on global variables, we can define parameters for each function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def add(a, b):               # Define function named 'add' with parameters 'a' and 'b'
    result = a + b           # Define logic
    return result            # Return the output

def sub(a, b):               # Define function named 'sub' with parameters 'a' and 'b'
    result = a - b           # Define logic
    return result            # Return the output

def mul(a, b):               # Define function named 'mul' with parameters 'a' and 'b'
    result = a * b           # Define logic
    return result            # Return the output

print(add(5, 6))              # Call add function with values 5 and 6 and print the result
print(sub(18, 7))             # Call sub function with values 18 and 7 and print the result
print(mul(5, 8))              # Call mul function with values 5 and 8 and print the result

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this script, each function (add(), sub(), mul()) accepts two parameters (a and b) representing the values to be operated on. Calling them with specific values returns 11, 11, and 40 respectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxg87ou9piw1mj343g6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxg87ou9piw1mj343g6.png" alt="Image description" width="800" height="1304"&gt;&lt;/a&gt;&lt;/p&gt;
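&lt;p&gt;One detail worth noting in the parameterized version: the functions now return their results instead of printing them, which lets the caller store, combine, or reuse the values. A minimal sketch:&lt;/p&gt;

```python
def add(a, b):
    return a + b  # hand the value back to the caller instead of printing it

total = add(5, 6)      # the returned value can be stored in a variable...
print(add(total, 10))  # ...or fed straight into another call (prints 21)
```

&lt;p&gt;A function that only prints its result implicitly returns None, so its output cannot be used in further expressions.&lt;/p&gt;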

</description>
      <category>python</category>
      <category>devops</category>
      <category>basic</category>
    </item>
    <item>
      <title>Jenkins CICD</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Mon, 18 Mar 2024 00:33:05 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/jenkins-cicd-3f9l</link>
      <guid>https://dev.to/bharatrajtj/jenkins-cicd-3f9l</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Provision an EC2 instance on AWS, choosing the t2.large instance type (2 vCPUs, 8 GB RAM) so there is enough compute and memory for the Jenkins project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknf2dcah1hi3ml7zszi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknf2dcah1hi3ml7zszi4.png" alt="Image description" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;
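&lt;p&gt;For reference, the console step above has a CLI equivalent along these lines; the AMI, key pair, and security group IDs are placeholders for your own values:&lt;/p&gt;

```shell
# Sketch only: provision the t2.large instance via the AWS CLI.
# Replace the placeholder AMI, key pair, and security group IDs.
aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxx \
    --instance-type t2.large \
    --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx \
    --count 1
```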

&lt;ul&gt;
&lt;li&gt; Clone the repository from its Git source located at &lt;a href="https://github.com/bharatrajtj/jenkins"&gt;https://github.com/bharatrajtj/jenkins&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5roykeelvu245nd3jtk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5roykeelvu245nd3jtk.png" alt="Image description" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Java, a prerequisite for Jenkins' operation. This ensures the availability of the Java Runtime Environment (JRE), essential for executing Jenkins and its associated processes smoothly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqrznh504lemdrb4se3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqrznh504lemdrb4se3o.png" alt="Image description" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;
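&lt;p&gt;The Java installation shown above can be sketched as follows; the exact package name may differ by Ubuntu release, so check the supported Java versions in the Jenkins documentation:&lt;/p&gt;

```shell
sudo apt update
sudo apt install -y openjdk-17-jre   # Jenkins needs a JRE; verify the supported version first
java -version                        # confirm the installation succeeded
```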

&lt;ul&gt;
&lt;li&gt;Execute the Jenkins installation script to commence the setup process. This script initiates the installation procedure, configuring Jenkins' core components and preparing the environment for subsequent configuration steps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihc31lxgadgcgvrcie3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihc31lxgadgcgvrcie3t.png" alt="Image description" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;
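&lt;p&gt;At the time of writing, the Debian/Ubuntu installation steps look roughly like this; always copy the current commands (the signing-key URL in particular changes) from the official Jenkins documentation:&lt;/p&gt;

```shell
# Add the Jenkins apt repository and its signing key, then install.
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key \
  | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" \
  | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
sudo apt install -y jenkins
sudo systemctl status jenkins   # verify the service is running
```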

&lt;ul&gt;
&lt;li&gt;By default, Jenkins serves its web interface over HTTP on port 8080.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw2tcx7d9pa65dj4f5y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw2tcx7d9pa65dj4f5y1.png" alt="Image description" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Establish inbound traffic rules within the AWS security group associated with your EC2 instance to facilitate communication on port 8080. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl62rzvq8yh4161igdnc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl62rzvq8yh4161igdnc0.png" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;
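&lt;p&gt;The same inbound rule can be added from the CLI; sg-xxxxxxxx is a placeholder for your instance's security group ID, and 0.0.0.0/0 opens the port to the whole internet, so restrict the CIDR for anything beyond a demo:&lt;/p&gt;

```shell
# Allow inbound TCP traffic on Jenkins' default port 8080.
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 8080 \
    --cidr 0.0.0.0/0
```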

&lt;ul&gt;
&lt;li&gt;Access the initial administrative password for Jenkins by executing the command cat /var/lib/jenkins/secrets/initialAdminPassword within your EC2 terminal. This step retrieves the required credential, enabling you to proceed with the initial setup and configuration of your Jenkins instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21jy0yy1qn1zpuujxgu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21jy0yy1qn1zpuujxgu2.png" alt="Image description" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxwm20vhv5bz25ef41p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxwm20vhv5bz25ef41p6.png" alt="Image description" width="800" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upon completing the installation of the required plugins and finalizing the setup process, you will be directed to the Jenkins Dashboard. This central hub serves as the control panel for managing your Jenkins environment, offering access to various features and functionalities essential for orchestrating automation tasks and pipelines effectively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzrtswe0ck3gt2gqdhyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzrtswe0ck3gt2gqdhyd.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the "New Item" option within the Jenkins interface to initiate the creation of a new Jenkins Pipeline. Opt for the Pipeline project type to configure and manage your pipeline's workflow, facilitating streamlined automation and continuous integration within your development environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76kql0bkmm5s2kstd5k7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76kql0bkmm5s2kstd5k7.png" alt="Image description" width="800" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose "Pipeline script from SCM" as the configuration option for your Jenkins Pipeline. Given that Git serves as our Source Code Manager (SCM) for this project, specify the repository URL as &lt;a href="https://github.com/bharatrajtj/jenkins"&gt;https://github.com/bharatrajtj/jenkins&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcuzpcsymws99ogoetjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcuzpcsymws99ogoetjm.png" alt="Image description" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specify the "Script Path" as the directory path within the repository where the Jenkinsfile is located.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyk9f318f7crfn00eh05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyk9f318f7crfn00eh05.png" alt="Image description" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to "Manage Jenkins" &amp;gt; "Manage Plugins" &amp;gt; "Available" tab, and search for the "Docker Pipeline" plugin. Proceed to install this plugin to enable the utilization of Docker containers as agents within your Jenkins environment. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F912ntj181ytmyy7iau4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F912ntj181ytmyy7iau4k.png" alt="Image description" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgogpmr13keqndbl1d5vw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgogpmr13keqndbl1d5vw.png" alt="Image description" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to "Manage Jenkins" &amp;gt; "Manage Plugins" &amp;gt; "Available" tab, and search for the "SonarQube Scanner" plugin. Install this plugin to enable seamless integration of SonarQube for code quality inspection within your Jenkins environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc5o2kru7xinu3c23on5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc5o2kru7xinu3c23on5.png" alt="Image description" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a new user named sonarqube on the EC2 instance and switch to it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi4rh56iv9crj1wauq0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi4rh56iv9crj1wauq0e.png" alt="Image description" width="742" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the SonarQube zip file using wget:
wget &lt;a href="https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.4.0.54424.zip"&gt;https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.4.0.54424.zip&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fouz7jcqqwqwhzy3eanbz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fouz7jcqqwqwhzy3eanbz.png" alt="Image description" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To unzip the downloaded SonarQube zip file, you'll need to have the unzip utility installed on your system. You can install it using the apt package manager. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccqjuhhwo7p4n3vl1p88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccqjuhhwo7p4n3vl1p88.png" alt="Image description" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switch to the sonarqube user and run unzip * to extract the archive.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnenl5d5k9iqrzui7qsze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnenl5d5k9iqrzui7qsze.png" alt="Image description" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkziv6fwrap5sfuq1rpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkziv6fwrap5sfuq1rpt.png" alt="Image description" width="600" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grant the sonarqube user the necessary permissions on the extracted folders.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbjpo1j87x5qiwvyna0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbjpo1j87x5qiwvyna0h.png" alt="Image description" width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the bin directory that matches your instance's architecture (for example, linux-x86-64).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkazznfshi1kpe621gtm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkazznfshi1kpe621gtm5.png" alt="Image description" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run ./sonar.sh start to start the SonarQube server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip6beqr5o2m0ss2oag7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip6beqr5o2m0ss2oag7k.png" alt="Image description" width="775" height="107"&gt;&lt;/a&gt;&lt;/p&gt;
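&lt;p&gt;Collected in one place, the SonarQube steps above look roughly like this interactive transcript; paths match the 9.4.0.54424 zip used here, and the exact permission command may differ from the screenshots:&lt;/p&gt;

```shell
sudo adduser sonarqube                       # create a dedicated user for SonarQube
sudo apt install -y unzip                    # needed to extract the archive
sudo su - sonarqube                          # switch to the sonarqube user
wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.4.0.54424.zip
unzip sonarqube-9.4.0.54424.zip
chmod -R 755 sonarqube-9.4.0.54424           # grant permissions on the extracted folders
cd sonarqube-9.4.0.54424/bin/linux-x86-64    # choose the directory matching your architecture
./sonar.sh start                             # starts SonarQube on its default port 9000
```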

&lt;ul&gt;
&lt;li&gt;Configure inbound rules within your EC2 instance's security group to allow traffic from port 9000, the default port for SonarQube. This step ensures that incoming traffic on port 9000 is permitted, enabling access to the SonarQube application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvtr6091eb41dwo8qc5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvtr6091eb41dwo8qc5r.png" alt="Image description" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access the SonarQube server page by entering your EC2 instance's IP address followed by ":9000" in your web browser's address bar.
Upon reaching the SonarQube server page, use the default credentials to log in:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Username: admin&lt;br&gt;
Password: admin&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue7s0edw5okbq9c2yhum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue7s0edw5okbq9c2yhum.png" alt="Image description" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmmgl95q9hh9mjkm7vb9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmmgl95q9hh9mjkm7vb9.png" alt="Image description" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the SonarQube web interface, navigate to "My Account" and then to "Security" settings. Locate the option to generate an access token, found within the user profile or security settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This token will be used to configure the integration between SonarQube and Jenkins. Ensure you securely store the token as it grants access to SonarQube resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6j44row55qgpba631hi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6j44row55qgpba631hi.png" alt="Image description" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Jenkins, go to "Manage Jenkins" &amp;gt; "Credentials" &amp;gt; "System" &amp;gt; "Global credentials" &amp;gt; "Add Credentials" and enter the SonarQube token.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8ltfx6jggbvmsk2t3yx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8ltfx6jggbvmsk2t3yx.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdpy9dszzbfdrxd41a2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdpy9dszzbfdrxd41a2c.png" alt="Image description" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Docker as the root user in your EC2 terminal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ewl5c6gk5blv4zrc3aa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ewl5c6gk5blv4zrc3aa.png" alt="Image description" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grant the jenkins and ubuntu users permission to access the Docker daemon, then restart Docker.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nbzozsetccnzxbohs4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nbzozsetccnzxbohs4z.png" alt="Image description" width="684" height="85"&gt;&lt;/a&gt;&lt;/p&gt;
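&lt;p&gt;The Docker steps above can be sketched as follows (using Ubuntu's docker.io package; group membership normally applies on next login, but restarting Docker as in the walkthrough does no harm):&lt;/p&gt;

```shell
sudo apt install -y docker.io       # install Docker from Ubuntu's repositories
sudo usermod -aG docker jenkins     # let the jenkins user reach the Docker daemon
sudo usermod -aG docker ubuntu      # ...and the default ubuntu user as well
sudo systemctl restart docker       # restart Docker after the group changes
```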

&lt;ul&gt;
&lt;li&gt;Restart Jenkins so the newly installed plugins take effect: append /restart to the Jenkins URL after the port number (for example, http://EC2-IP:8080/restart).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuswrj1oqxm3510e07gqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuswrj1oqxm3510e07gqf.png" alt="Image description" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Minikube on your local machine using Docker as the driver.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncax6ttadp2i78tdde42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncax6ttadp2i78tdde42.png" alt="Image description" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;
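&lt;p&gt;Assuming Minikube and kubectl are already installed locally, the step above amounts to:&lt;/p&gt;

```shell
minikube start --driver=docker   # run the Minikube cluster inside a Docker container
kubectl get nodes                # confirm the node reports Ready
```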

&lt;ul&gt;
&lt;li&gt;Install the ArgoCD operator to streamline the lifecycle management of the Kubernetes controller, ArgoCD, within your Minikube environment. Utilize the resources available at &lt;a href="https://operatorhub.io/operator/argocd-operator"&gt;https://operatorhub.io/operator/argocd-operator&lt;/a&gt; to access and deploy the operator. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d3y8ogzge8mq9qsdjjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d3y8ogzge8mq9qsdjjf.png" alt="Image description" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;
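&lt;p&gt;OperatorHub's install page for the ArgoCD operator suggested roughly the following at the time of writing; it assumes the Operator Lifecycle Manager (OLM) is already installed on the cluster, so check operatorhub.io for the current commands before running:&lt;/p&gt;

```shell
# Create the ArgoCD operator subscription (requires OLM on the cluster).
kubectl create -f https://operatorhub.io/install/argocd-operator.yaml
kubectl get csv -n operators     # watch until the operator reaches the Succeeded phase
```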

&lt;ul&gt;
&lt;li&gt;Configure DockerHub and GitHub credentials within Jenkins to facilitate seamless integration with these platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwzcwq2r4ks0oe7tuql8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwzcwq2r4ks0oe7tuql8.png" alt="Image description" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For GitHub, navigate to "Settings," then "Personal access tokens," and proceed to "Generate New token (classic)" to create a new token. This token will serve as the authentication mechanism for Jenkins to interact with your GitHub repositories securely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9j25gdpjbais5b8bvbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9j25gdpjbais5b8bvbh.png" alt="Image description" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Within Jenkins, use the "Secret text" credential type to configure GitHub credentials securely. After creating the credentials, restart Jenkins to apply the changes and enable seamless authentication for GitHub interactions within your Jenkins environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxswcq8thkq3q7g141cs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxswcq8thkq3q7g141cs.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybbhkqy1udh1l1xnfls7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybbhkqy1udh1l1xnfls7.png" alt="Image description" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build the Jenkins Pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71dt0arscmg1xdfqsln5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71dt0arscmg1xdfqsln5.png" alt="Image description" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67vcddk7audae7e631mk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67vcddk7audae7e631mk.png" alt="Image description" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SonarQube report for code analysis &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdikaqdlqdth8vls7duov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdikaqdlqdth8vls7duov.png" alt="Image description" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker registry where the generated image has been pushed&lt;/li&gt;
&lt;/ul&gt;
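
&lt;p&gt;The push stage of the pipeline is roughly equivalent to the following commands (the image and account names here are placeholders, not the ones used in this project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build the image from the project's Dockerfile and tag it for DockerHub
docker build -t your-dockerhub-user/spring-boot-app:v1 .

# Authenticate against DockerHub, then push the tagged image
docker login -u your-dockerhub-user
docker push your-dockerhub-user/spring-boot-app:v1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;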

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6qui6y0p00yjlmercue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6qui6y0p00yjlmercue.png" alt="Image description" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkkflt8il3a9clt97qyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkkflt8il3a9clt97qyk.png" alt="Image description" width="800" height="62"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the Docker image name in the deployment YAML&lt;/li&gt;
&lt;/ul&gt;
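
&lt;p&gt;One common way to automate this update inside the pipeline is a sed substitution on the manifest (the file name, image name, and use of the Jenkins BUILD_NUMBER variable are illustrative assumptions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Replace the image line in the manifest with the freshly built tag
sed -i "s|image: .*|image: your-dockerhub-user/spring-boot-app:${BUILD_NUMBER}|" deployment.yml

# Inspect the result before committing it back to the Git repository
grep 'image:' deployment.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;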

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs5v3wk2ma60ium8aywx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs5v3wk2ma60ium8aywx.png" alt="Image description" width="800" height="794"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a YAML file on your local machine with this content and apply it to install the ArgoCD controller&lt;/li&gt;
&lt;/ul&gt;
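
&lt;p&gt;A minimal ArgoCD custom resource can also be applied straight from the command line; this sketch assumes an &lt;code&gt;argocd&lt;/code&gt; namespace and an instance named &lt;code&gt;example-argocd&lt;/code&gt;, and the apiVersion may differ depending on your operator release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply -n argocd -f - &lt;&lt;EOF
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example-argocd
spec: {}
EOF

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;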

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpfav4g861ceaisggti6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpfav4g861ceaisggti6.png" alt="Image description" width="517" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ub2xth3n1rl1m2m47eb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ub2xth3n1rl1m2m47eb.png" alt="Image description" width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure the ArgoCD operator pods are created in Minikube&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filnb9y2bsoylziogi7ww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filnb9y2bsoylziogi7ww.png" alt="Image description" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edit the service type from ClusterIP to NodePort for example-argocd-server&lt;/li&gt;
&lt;/ul&gt;
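
&lt;p&gt;Instead of editing the service interactively, the same change can be made with a one-line patch (assuming the service lives in the &lt;code&gt;argocd&lt;/code&gt; namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Switch the ArgoCD server service from ClusterIP to NodePort
kubectl -n argocd patch svc example-argocd-server -p '{"spec": {"type": "NodePort"}}'

# Confirm the new service type and allocated node port
kubectl -n argocd get svc example-argocd-server

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;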

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49cushosh85wj3dxnvjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49cushosh85wj3dxnvjt.png" alt="Image description" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj3m3i6zwqleym5mdm8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj3m3i6zwqleym5mdm8v.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomeq70inmzyfofgkhrzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomeq70inmzyfofgkhrzo.png" alt="Image description" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get the URL to access the service through the web&lt;/li&gt;
&lt;/ul&gt;
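
&lt;p&gt;With Minikube, the reachable URL for a NodePort service can be printed directly (again assuming the &lt;code&gt;argocd&lt;/code&gt; namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the node IP and port where the ArgoCD UI is exposed
minikube service example-argocd-server -n argocd --url

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;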

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw4ug1sd71fgrk80b19l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw4ug1sd71fgrk80b19l.png" alt="Image description" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get the password for the ArgoCD admin account from the argocd-cluster secret&lt;/li&gt;
&lt;/ul&gt;
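
&lt;p&gt;The operator stores the admin password in a secret named after the ArgoCD instance; assuming the instance is called &lt;code&gt;example-argocd&lt;/code&gt;, the encoded value can be pulled out with a jsonpath query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Extract the base64-encoded admin password (the dot in the key is escaped)
kubectl -n argocd get secret example-argocd-cluster -o jsonpath='{.data.admin\.password}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;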

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnip8j75skub9krm3ahv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnip8j75skub9krm3ahv.png" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decode the base64-encoded secret&lt;/li&gt;
&lt;/ul&gt;
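
&lt;p&gt;Any value copied out of a Kubernetes secret can be decoded with the standard base64 tool; the string below is just a demonstration value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Decode a base64-encoded secret value
echo 'cGFzc3dvcmQ=' | base64 -d
# prints: password

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;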

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60vr4uawryd4pknj2ay1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60vr4uawryd4pknj2ay1.png" alt="Image description" width="782" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the credentials in ArgoCD web interface &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vbqby3fzlxa0wa60c8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vbqby3fzlxa0wa60c8m.png" alt="Image description" width="460" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn198psnlxl49lwu1xeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn198psnlxl49lwu1xeg.png" alt="Image description" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an application in ArgoCD and configure the required information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp71qcehgxqkpag58lw8u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp71qcehgxqkpag58lw8u.png" alt="Image description" width="566" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjibpiqiple35fgy2r46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjibpiqiple35fgy2r46.png" alt="Image description" width="800" height="782"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02o9c3j4ajtec6nsrrkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02o9c3j4ajtec6nsrrkf.png" alt="Image description" width="800" height="731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyvcyk34k4f2qadjkf7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyvcyk34k4f2qadjkf7a.png" alt="Image description" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Spring Boot application has been deployed to our Minikube cluster through ArgoCD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39uu3zd9utnoyvmjup8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39uu3zd9utnoyvmjup8m.png" alt="Image description" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edit the deployment image to nginx in Minikube&lt;/li&gt;
&lt;/ul&gt;
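
&lt;p&gt;This drift can be introduced with a single command; the deployment and container names below are placeholders for whatever your manifest actually uses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Swap the running image to nginx, diverging from the Git manifest
kubectl set image deployment/spring-boot-app spring-boot-app=nginx

# Watch the pods: if auto-sync with self-heal is enabled, ArgoCD reverts the change
kubectl get pods -w

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;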

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd62w7s087z36sf9aby1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffd62w7s087z36sf9aby1.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frr1ogeis5v532t0wdhb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frr1ogeis5v532t0wdhb3.png" alt="Image description" width="467" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ArgoCD recognizes that the image in the manifest YAML differs from the image running on the Minikube cluster, and it starts rolling back to the image specified in the manifest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xa6d96e2t80881wvyt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xa6d96e2t80881wvyt1.png" alt="Image description" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza94e2bhzyff5w4tplsr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza94e2bhzyff5w4tplsr.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>docker</category>
      <category>maven</category>
      <category>sonarqube</category>
    </item>
    <item>
      <title>Connecting to and Configuring AWS EC2 Instances Using PowerShell and AWS CLI</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Sun, 14 Jan 2024 18:10:00 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/connecting-to-and-configuring-aws-ec2-instances-using-powershell-and-aws-cli-3leg</link>
      <guid>https://dev.to/bharatrajtj/connecting-to-and-configuring-aws-ec2-instances-using-powershell-and-aws-cli-3leg</guid>
      <description>&lt;p&gt;Connecting to an Amazon EC2 instance and configuring it with your AWS account is a fundamental step in managing cloud resources. In this article, we'll explore the process using PowerShell and AWS CLI, catering to both Linux and Windows users.&lt;br&gt;
&lt;strong&gt;Connecting to EC2 Instances Using PowerShell:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are a Windows user, PowerShell provides a convenient way to connect to your EC2 instance. Open a PowerShell session and use the following command:&lt;br&gt;
&lt;code&gt;ssh -i C:\Path\To\Your\KeyPair.pem ec2-user@your-instance-public-ip&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Replace "C:\Path\To\Your\KeyPair.pem" with the path to your private key file and "your-instance-public-ip" with your EC2 instance's public IP address. Note that the username "ec2-user" is commonly used for Amazon Linux distributions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Securing Your Key Pair:&lt;/strong&gt;&lt;br&gt;
To enhance security, restrict access to your key pair by setting appropriate permissions. On Linux or macOS, execute the following command (chmod is not available on Windows; there, restrict access through the file's Security properties or the icacls tool instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 600 /path/to/your/KeyPair.pem

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that only the owner has read/write permissions, preventing unauthorized access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykfbfojp16jfoizyntcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykfbfojp16jfoizyntcz.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring EC2 Instances with AWS Account:&lt;/strong&gt;&lt;br&gt;
Once connected to your EC2 instance, configure it with your AWS account using AWS CLI. Follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generate Access Keys:&lt;/strong&gt;&lt;br&gt;
In the AWS console, navigate to "Security Credentials."&lt;br&gt;
Generate an access key and secret access key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure AWS CLI:&lt;/strong&gt;&lt;br&gt;
In your EC2 instance terminal, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Provide your access key, secret access key, default region, and set the output format to JSON.&lt;/p&gt;
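
&lt;p&gt;If you prefer a non-interactive setup (useful in scripts), the same values can be written with &lt;code&gt;aws configure set&lt;/code&gt;; the key, region, and output values below are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Each call writes one setting into ~/.aws/credentials or ~/.aws/config
aws configure set aws_access_key_id YOUR_ACCESS_KEY_ID
aws configure set aws_secret_access_key YOUR_SECRET_ACCESS_KEY
aws configure set region us-east-1
aws configure set output json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;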

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42xfsvw3nyvo9fvyv977.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42xfsvw3nyvo9fvyv977.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y5ctkpp8mrwe6h50bty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y5ctkpp8mrwe6h50bty.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Confirm AWS CLI installation with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpelrmbshpfjvh0hr1gtt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpelrmbshpfjvh0hr1gtt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your EC2 instance is now configured to interact with AWS services through AWS CLI.&lt;br&gt;
&lt;strong&gt;Using AWS CLI to Access AWS Services:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List S3 buckets in the configured region:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 ls

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh8xwiiyu8q8got5au09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh8xwiiyu8q8got5au09.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa09ar1ya46prll0tk8lb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa09ar1ya46prll0tk8lb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create an S3 bucket in the configured region:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mb s3://bucketname

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8elzohqh8aqc4ads0pm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8elzohqh8aqc4ads0pm2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedozgn5cpyeh8guti0wh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedozgn5cpyeh8guti0wh.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create an EC2 instance using AWS CLI:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 run-instances --image-id ami-xxxxxxxxxxxxxxxxx --instance-type t2.micro

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace "ami-xxxxxxxxxxxxxxxxx" with the desired AMI ID and adjust the instance type accordingly.&lt;/p&gt;
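
&lt;p&gt;To confirm the instance actually launched, you can query its ID and state afterwards (the --query expression here is one possible shape for the output):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List instance IDs with their current state (pending, running, ...)
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output table

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;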

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuvfm9g61x7wbg3ezhrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuvfm9g61x7wbg3ezhrk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23mvgbl706r3s8lq4br4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23mvgbl706r3s8lq4br4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
By combining PowerShell and AWS CLI, you can seamlessly connect to and configure your EC2 instances, providing a powerful and efficient workflow for managing your AWS resources.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>awscli</category>
    </item>
    <item>
      <title>Kubernetes ConfigMaps and Secrets: A Developer's Guide</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Sun, 19 Nov 2023 18:39:49 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/kubernetes-configmaps-and-secrets-a-developers-guide-15nm</link>
      <guid>https://dev.to/bharatrajtj/kubernetes-configmaps-and-secrets-a-developers-guide-15nm</guid>
      <description>&lt;p&gt;Kubernetes, with its powerful orchestration capabilities, relies on resources like ConfigMaps and Secrets to streamline the management of configuration data within a cluster. Let's delve into these essential components and explore their nuances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ConfigMaps: Unlocking Configurations&lt;/strong&gt;&lt;br&gt;
ConfigMaps serve as the go-to solution for storing configuration details, be it key-value pairs or entire files. Once deployed in a Kubernetes cluster, ConfigMaps can be effortlessly mounted into any pod, allowing pods to retrieve crucial information stored as environment variables or files. This flexibility empowers developers to access and utilize configuration data as needed, without the hassle of pod restarts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secrets: Safeguarding Sensitive Data&lt;/strong&gt;&lt;br&gt;
In the realm of sensitive information, Secrets take the spotlight. The impetus behind Secrets is the realization that data saved in ConfigMaps is stored in etcd as a plain object, which raises security concerns if an attacker ever gains access to etcd.&lt;/p&gt;

&lt;p&gt;To fortify against such threats, Kubernetes can encrypt Secret data at rest. Note that by default Secrets are only base64-encoded in etcd; encryption at rest must be enabled by configuring an EncryptionConfiguration on the API server, after which Secret data is encrypted before it is written to etcd. Kubernetes also allows plugging in external KMS providers for custom encryption. With encryption at rest enabled, even if an attacker breaches etcd, reading Secrets becomes a formidable challenge without the requisite decryption keys.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices for Secrets Implementation&lt;/strong&gt;&lt;br&gt;
Implementing Secrets necessitates a thoughtful approach to security. Strong Role-Based Access Control (RBAC) should be a cornerstone of your strategy. Not every user should be granted access to Secrets resources, ensuring that sensitive information remains tightly guarded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overcoming ConfigMap Limitations&lt;/strong&gt;&lt;br&gt;
One notable limitation of ConfigMaps is the inability to update or change environment variables once they are loaded into a container. To circumvent this constraint, volume mounts come to the rescue. By saving ConfigMap data as files instead of environment variables, developers gain the flexibility to modify configurations dynamically, enhancing the adaptability of Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hands-on&lt;/strong&gt;&lt;br&gt;
This is a simple ConfigMap file with db-port as the key and 8000 as the value.&lt;/p&gt;
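
&lt;p&gt;An equivalent ConfigMap can be created straight from the command line with a heredoc; the name &lt;code&gt;test-cm&lt;/code&gt; here is just an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &lt;&lt;EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
data:
  db-port: "8000"
EOF

# Verify the stored data
kubectl get configmap test-cm -o yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;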

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--84mjoJlo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vsqwllb7b2s5vs4spcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--84mjoJlo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vsqwllb7b2s5vs4spcs.png" alt="Image description" width="351" height="339"&gt;&lt;/a&gt;&lt;br&gt;
Apply the ConfigMap and check whether it is deployed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9C5-yGNj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lscpfr35u9kzrfsglb1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9C5-yGNj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lscpfr35u9kzrfsglb1u.png" alt="Image description" width="778" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update the deployment file to add ENV&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IfRF8tDK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0f64v2ppjg0o253pjcm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IfRF8tDK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0f64v2ppjg0o253pjcm.png" alt="Image description" width="609" height="776"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get the pod name, exec into one of the pods, and search for the environment variable&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DzeV9EYw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7y59adepxlzph3k633wm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DzeV9EYw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7y59adepxlzph3k633wm.png" alt="Image description" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;
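
&lt;p&gt;Assuming the variable is named DB_PORT, the check can be sketched as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
kubectl exec -it &amp;lt;pod-name&amp;gt; -- env | grep DB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
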

&lt;p&gt;Now, change the data value in the ConfigMap&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P-TGpYX9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gydgizdapca37vvfjgca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P-TGpYX9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gydgizdapca37vvfjgca.png" alt="Image description" width="301" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This change will not be reflected inside the pods, because environment variables are not updated once a container is running. So, we create a volume from the ConfigMap and mount it into the pod&lt;/p&gt;
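
&lt;p&gt;The pod template can be extended roughly as follows (the volume name and mount path are assumptions). Each ConfigMap key becomes a file under the mount path, and kubelet refreshes those files after the ConfigMap changes, with a short sync delay:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    spec:
      containers:
      - name: application
        image: example/app:v1          # image assumed
        volumeMounts:
        - name: db-connection
          mountPath: /opt              # db-port appears as /opt/db-port
      volumes:
      - name: db-connection
        configMap:
          name: test-cm                # ConfigMap name assumed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
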

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uRYID3fJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmq6e1r8avqxgaocp4pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uRYID3fJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmq6e1r8avqxgaocp4pl.png" alt="Image description" width="572" height="847"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yq8jnGJk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5br4604mmogx84fypzma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yq8jnGJk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5br4604mmogx84fypzma.png" alt="Image description" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>orchestration</category>
      <category>configmap</category>
      <category>pods</category>
    </item>
    <item>
      <title>Navigating Kubernetes: A Guide to Services</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Mon, 13 Nov 2023 12:09:39 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/navigating-kubernetes-a-guide-to-services-1a0</link>
      <guid>https://dev.to/bharatrajtj/navigating-kubernetes-a-guide-to-services-1a0</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
In the dynamic world of Kubernetes, managing pod IPs can be a tricky business. When a pod goes down and is replaced, the new pod often comes with a new IP address, leaving users in the dark about the updated address. Enter Kubernetes Services, the unsung heroes of seamless communication and accessibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge&lt;/strong&gt;&lt;br&gt;
Picture this: a pod in your deployment crashes, and a fresh replacement takes its place. But wait, the new pod has a different IP address. How do you ensure users seamlessly transition to the replacement without any hiccups?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Kubernetes Service Solution&lt;/strong&gt;&lt;br&gt;
Kubernetes addresses this challenge with the introduction of services. These are like traffic managers, ensuring that users don't need to keep track of ever-changing pod IPs. Instead, users interact with the service's IP address, and the service efficiently forwards their requests to the available pods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Labels and Service Discovery&lt;/strong&gt;&lt;br&gt;
Each pod comes with a label, and services use these labels for efficient traffic routing. Even if a new pod with a new IP address replaces a failed one, the label remains constant. This magic is made possible by selectors, allowing services to identify pods based on these labels—enter the world of service discovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of Using Services&lt;/strong&gt;&lt;br&gt;
Let's break down why services are a game-changer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load Balancing:&lt;/strong&gt; Services distribute traffic among available pods, ensuring optimal resource utilization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Discovery:&lt;/strong&gt; Thanks to labels and selectors, automatic discovery and routing of traffic become a breeze.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exposure:&lt;/strong&gt; Services provide various exposure options, including Cluster IP, Load Balancer, and Node Port.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hands-On Experience&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Step 1: Build Image&lt;/strong&gt;&lt;br&gt;
Clone the repository and navigate to the Python web app directory to build the Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/iam-veeramalla/Docker-Zero-to-Hero.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to the examples directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd examples
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to python-web-app&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd python-web-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ga9wLzGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uc05t5eg699nmxucxnsp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ga9wLzGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uc05t5eg699nmxucxnsp.png" alt="Image description" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;
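
&lt;p&gt;From the python-web-app directory, the image can be built and pushed with standard Docker commands (the tag matches the image referenced in the deployment manifest; pushing assumes you are logged in to Docker Hub):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t bharatdevops/images:v1 .
docker push bharatdevops/images:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
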

&lt;p&gt;&lt;strong&gt;Step 2: Deployment&lt;/strong&gt;&lt;br&gt;
Create a deployment using the provided YAML file, then check the deployment list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: pythonapp
  labels:
    app: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: application
        image: bharatdevops/images:v1
        ports:
        - containerPort: 8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the deployment using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the deployment list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Cz1TnSZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l0o1vz3wy8w4aacf3ivy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Cz1TnSZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l0o1vz3wy8w4aacf3ivy.png" alt="Image description" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Pod IP Address&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Obtain pod IP addresses, delete pods, and witness the creation of new pods with different IP addresses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9Lf8D7Hr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/857b3mrime8yytdba1t3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9Lf8D7Hr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/857b3mrime8yytdba1t3.png" alt="Image description" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Create Service&lt;/strong&gt;&lt;br&gt;
Create a NodePort service using the provided YAML file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8000
      nodePort: 30007
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the service using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JTs6Rxc5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8jno47k1fn4m9qc6iqup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JTs6Rxc5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8jno47k1fn4m9qc6iqup.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Access the Application&lt;/strong&gt;&lt;br&gt;
Access the application using the Minikube IP address mapped to the NodePort.&lt;/p&gt;
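
&lt;p&gt;With Minikube, this can be sketched as follows (node port 30007 comes from the service manifest above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube ip                           # prints the node IP
curl http://&amp;lt;minikube-ip&amp;gt;:30007  # or open this URL in a browser
minikube service my-service --url     # prints the same URL directly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
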

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eY_7ZZPN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k38fx58qhz1yqlhaaqlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eY_7ZZPN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k38fx58qhz1yqlhaaqlz.png" alt="Image description" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This cannot be accessed from a browser outside the organization's network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ivRfG_Qw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4q6dv15412iaxv7hbdix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ivRfG_Qw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4q6dv15412iaxv7hbdix.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Cluster IP&lt;/strong&gt;&lt;br&gt;
Explore the Cluster IP within the Minikube cluster.&lt;/p&gt;
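
&lt;p&gt;A ClusterIP is reachable only from inside the cluster; one way to test it is from the Minikube node itself (the service name comes from the manifest above, and the IP is whatever kubectl reports):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc my-service            # note the CLUSTER-IP column
minikube ssh
curl http://&amp;lt;cluster-ip&amp;gt;:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
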

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R_4LjTQl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29zil4g21ypdwgghysc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R_4LjTQl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29zil4g21ypdwgghysc0.png" alt="Image description" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This cannot be accessed from a browser outside the cluster&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a5bV6fOM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mosikov9z7uqrohy22rh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a5bV6fOM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mosikov9z7uqrohy22rh.png" alt="Image description" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>orchestration</category>
      <category>minikube</category>
    </item>
    <item>
      <title>Kubernetes Deployments</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Tue, 07 Nov 2023 00:36:44 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/kubernetes-deployments-5a6i</link>
      <guid>https://dev.to/bharatrajtj/kubernetes-deployments-5a6i</guid>
      <description>&lt;p&gt;In the realm of Kubernetes orchestration, containers follow a structured hierarchy. Fundamentally, pods house containers, organized within replica sets. Taking it a step further, replica sets are orchestrated within a Deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Deployment Cascade&lt;/strong&gt;&lt;br&gt;
Initiating a Deployment sets off a chain reaction, spawning a replica set that hosts pods. This replica set acts as a Kubernetes controller, ensuring automatic healing for pods and maintaining alignment between desired and actual states.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment vs. Replica Set&lt;/strong&gt;&lt;br&gt;
Both Deployment and replica set serve as Kubernetes controllers, but Deployment goes the extra mile. It enables seamless updates to container versions within pods. Unlike replica sets, which recreate pods with initial configurations, Deployment empowers users to upgrade or roll back container versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upgrading with Deployment&lt;/strong&gt;&lt;br&gt;
During a container version update, Deployment crafts a new replica set housing the updated version. This ensures a smooth transition. The default strategy is the Rolling Update, involving creating a new replica set, terminating some pods from the current set, introducing pods with the latest configuration, and eventually phasing out the remaining pods for the new replica set.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Undoing Changes&lt;/strong&gt;&lt;br&gt;
Deployment's versatility shines when changes need to be undone. Whether rolling back a version or reverting configuration, Deployment supports an undo operation. Executing this operation reverts to a previous replica set, effectively restoring pods to their earlier state.&lt;/p&gt;
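
&lt;p&gt;For the nginx deployment shown below, an upgrade and rollback can be sketched with standard kubectl commands (the new image tag here is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
kubectl rollout status deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
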

&lt;p&gt;In essence, Kubernetes Deployments provide a robust and flexible approach to managing containerized applications, ensuring a controlled deployment lifecycle.&lt;/p&gt;

&lt;p&gt;Check out this deployment manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Executing this manifest results in one Deployment with one replica set managing three pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jWr8tVFh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afsrdlmut4cidfareu7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jWr8tVFh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afsrdlmut4cidfareu7n.png" alt="deployment" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a pod is deleted, the replica set detects the change in actual state versus desired state and creates a new pod to maintain balance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pz3PPZ2r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zdckiexh8j0w3t171bc9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pz3PPZ2r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zdckiexh8j0w3t171bc9.png" alt="replica set maintaining the state" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding the Core Components of Kubernetes Architecture</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Fri, 03 Nov 2023 04:45:22 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/understanding-the-core-components-of-kubernetes-architecture-1655</link>
      <guid>https://dev.to/bharatrajtj/understanding-the-core-components-of-kubernetes-architecture-1655</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
Kubernetes, a powerful container orchestration platform, operates within a cluster environment consisting of master and worker nodes. In this article, we'll delve into the key components of Kubernetes architecture, shedding light on the roles of the master and worker nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Master Node Components:&lt;/strong&gt;&lt;br&gt;
The Master node, known as the Control Plane, is the brain of the Kubernetes cluster. It consists of several crucial components that collectively manage the orchestration of containers within the cluster.&lt;br&gt;
&lt;strong&gt;1. API Server:&lt;/strong&gt;&lt;br&gt;
The API Server serves as the communication hub, making decisions on pod deployments and conveying information to the scheduler. It plays a pivotal role in coordinating activities within the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Scheduler:&lt;/strong&gt;&lt;br&gt;
Responsible for pod deployment decisions, the Scheduler collaborates with the API Server to determine the optimal worker node for launching pods. It plays a crucial role in efficiently distributing workloads across the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. ETCD:&lt;/strong&gt;&lt;br&gt;
Functioning as a key-value store, ETCD collects node information from Kubelet and provides essential cluster information to the API Server. It acts as a reliable source of truth for the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Controller:&lt;/strong&gt;&lt;br&gt;
The Controller oversees various aspects, including node and replica controllers, ensuring the seamless management of pods and endpoints in the Kubernetes environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Cloud Controller Manager:&lt;/strong&gt;&lt;br&gt;
In cloud provider environments like AWS (EKS) or Azure (AKS), the Cloud Controller Manager configures logic for the cloud provider API server. It facilitates integration with cloud-specific features. Notably, it is not a requisite component for on-premise Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worker Node Components:&lt;/strong&gt;&lt;br&gt;
The worker node, integral to the Kubernetes architecture, consists of components that manage the execution of pods and handle networking responsibilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Kubelet:&lt;/strong&gt;&lt;br&gt;
Running on every worker node, Kubelet monitors pod states, updates information to the API Server, and executes commands received from the Scheduler. It ensures the auto-healing capability of the node by promptly responding to pod failures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Kube Proxy:&lt;/strong&gt;&lt;br&gt;
Responsible for node networking, Kube Proxy provides crucial functions such as IP address assignment, network mechanism selection for communication between pods within nodes and among the nodes, and load balancer configuration. It plays a vital role in maintaining seamless communication between nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Container Runtime:&lt;/strong&gt;&lt;br&gt;
The container runtime serves as the execution environment for containers. Examples include containerd and Docker; other runtimes can also be employed, giving Kubernetes the flexibility to choose among the variety of container runtimes available in the market. It plays a crucial role in launching and managing containers within the worker node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Understanding the intricacies of Kubernetes architecture is essential for effectively deploying and managing containerized applications. By exploring the roles of master and worker node components, we gain valuable insights into the orchestration processes that power Kubernetes clusters.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>orchestration</category>
      <category>docker</category>
    </item>
    <item>
      <title>Retrieving Images from S3 using Lambda Function and API Gateway</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Thu, 15 Jun 2023 01:33:00 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/retrieving-images-from-s3-using-lambda-function-and-api-gateway-3ei9</link>
      <guid>https://dev.to/bharatrajtj/retrieving-images-from-s3-using-lambda-function-and-api-gateway-3ei9</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
In this article, we will explore how to retrieve images from an Amazon S3 bucket using a Lambda function through the API Gateway. We will cover the step-by-step process, including creating an S3 bucket, uploading an image, writing a Lambda function, and configuring the API Gateway to access the images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Creating an S3 Bucket and Uploading the Image:&lt;/strong&gt;&lt;br&gt;
To begin, we need to create an S3 bucket and upload an image into it. This step ensures that we have a source from which to retrieve images using the Lambda function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OZ2h-vAN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s807h7s67e4xtc2093i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OZ2h-vAN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s807h7s67e4xtc2093i.png" alt="Image description" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An S3 bucket named &lt;code&gt;beachhhh&lt;/code&gt; is created, and two JPG files are uploaded into it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Creating the Lambda Function:&lt;/strong&gt;&lt;br&gt;
The next step involves creating a Lambda function. We will write a Python function that utilizes the AWS SDK, boto3, to interact with AWS services. The code snippet will include importing the necessary libraries, initializing the S3 client, and defining the Lambda handler.&lt;br&gt;
&lt;code&gt;import base64&lt;br&gt;
import boto3&lt;br&gt;
s3 = boto3.client('s3')&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Implementing the Lambda Handler:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;def lambda_handler(event, context):&lt;br&gt;
    bucket_name = event["pathParameters"]["bucket"]&lt;br&gt;
    file_name = event["queryStringParameters"]["file"]&lt;br&gt;
    fileObj = s3.get_object(Bucket=bucket_name, Key=file_name)&lt;br&gt;
    file_content = fileObj["Body"].read()&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
The Lambda handler is the entry point for the Lambda function. We will extract the bucket name and file name from the event object, which contains information about the request. Using the bucket name and file name, we will retrieve the file content from the S3 bucket using the &lt;code&gt;get_object&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Constructing the Response:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;return {&lt;br&gt;
        "statusCode": 200,&lt;br&gt;
        "headers": {&lt;br&gt;
            "Content-Type": "image/jpeg",&lt;br&gt;
            "Content-Disposition": "attachment; filename={}".format(file_name)&lt;br&gt;
        },&lt;br&gt;
        "body": base64.b64encode(file_content).decode("utf-8"),&lt;br&gt;
        "isBase64Encoded": True&lt;br&gt;
    }&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
After retrieving the file content, we will construct the response to be returned by the Lambda function. This includes setting the appropriate status code, content type, and content disposition headers. We will encode the file content in base64 format and include it in the response body. Finally, we set the &lt;code&gt;isBase64Encoded&lt;/code&gt; flag to indicate that the response body is base64 encoded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Configuring Lambda Function Permissions:&lt;/strong&gt;&lt;br&gt;
To allow the Lambda function to access the S3 bucket, we need to assign it a role with the necessary permissions. This can be done by navigating to the Lambda function's permissions tab and configuring the execution role. Here, we can define the required S3 access permissions for the Lambda function.&lt;/p&gt;
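
&lt;p&gt;A minimal policy statement granting the function read access to the bucket used here might look like this sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::beachhhh/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
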

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TXTLQziT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/056stog9cthy3a8cnd8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TXTLQziT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/056stog9cthy3a8cnd8z.png" alt="Image description" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Setting Up API Gateway:&lt;/strong&gt;&lt;br&gt;
Moving on to the API Gateway, we will create a new REST API. We'll define a resource with a path parameter for the bucket name and a query parameter for the file name. Enforcing a request validator ensures that the query parameter is present in the API request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TDCUMQJk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3tl1ajatycgqtgiws2za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TDCUMQJk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3tl1ajatycgqtgiws2za.png" alt="Image description" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yQYqHNCe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnzy9oh0at72zq8cc382.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yQYqHNCe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnzy9oh0at72zq8cc382.png" alt="Image description" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Configuring Binary Media Type:&lt;/strong&gt;&lt;br&gt;
To handle binary data, such as images, we need to configure the API Gateway to treat certain media types as binary. By setting the "binary media type" to &lt;code&gt;*/*&lt;/code&gt;, we ensure that the API Gateway correctly handles image data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BBo2yzFS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vbks05xttf61fezmemus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BBo2yzFS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vbks05xttf61fezmemus.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Deploying the API and Testing:&lt;/strong&gt;&lt;br&gt;
With the API Gateway configured, we can deploy the API and obtain the endpoint URL. We can test the functionality using tools like Postman by sending a GET request to the URL. In the request, we specify the bucket name as part of the URL path and the file name as a query parameter.&lt;/p&gt;
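
&lt;p&gt;Such a request can be sketched as follows (the API ID, region, stage, and file name are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -o image.jpg "https://&amp;lt;api-id&amp;gt;.execute-api.&amp;lt;region&amp;gt;.amazonaws.com/&amp;lt;stage&amp;gt;/beachhhh?file=&amp;lt;file-name&amp;gt;.jpg"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
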

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HV5Ddp1w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9c4an0r5jryxpcejb8ah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HV5Ddp1w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9c4an0r5jryxpcejb8ah.png" alt="Image description" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bBYikYT1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19on5pl2mhxjjl088e9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bBYikYT1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19on5pl2mhxjjl088e9y.png" alt="Image description" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cFTrpTvt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izxtlcmtc4iykles4ywz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cFTrpTvt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izxtlcmtc4iykles4ywz.png" alt="Image description" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
By following the steps outlined in this article, you can effectively retrieve images stored in an Amazon S3 bucket using a Lambda function and API Gateway. This integration enables seamless image retrieval and facilitates the development of image-based applications.&lt;/p&gt;

</description>
      <category>s3imageretrieval</category>
      <category>aws</category>
      <category>lambdafunction</category>
      <category>apigateway</category>
    </item>
    <item>
      <title>Implementing Pagination in DynamoDB for Efficient Data Retrieval</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Sat, 10 Jun 2023 03:38:07 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/implementing-pagination-in-dynamodb-for-efficient-data-retrieval-3m5j</link>
      <guid>https://dev.to/bharatrajtj/implementing-pagination-in-dynamodb-for-efficient-data-retrieval-3m5j</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
Pagination is a crucial concept in DynamoDB for efficiently retrieving large sets of records. By default, DynamoDB has a querying limit of 1 MB, which means it only returns data within that size. However, by implementing pagination, you can break down the records into smaller, manageable chunks, enabling you to retrieve data iteratively and avoid exceeding the limit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Query Page Size:&lt;/strong&gt;&lt;br&gt;
Query page size is the number of records DynamoDB reads per request, set through the Limit parameter. However, the 1 MB limit still applies on top of the page size: if the page size is set to 3 but the first two records together reach 1 MB, DynamoDB stops there and returns only those two, leaving the third for the next page. Choosing the query page size carefully is crucial for efficient pagination.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Exclusive Start Key:&lt;/strong&gt;&lt;br&gt;
The exclusive start key serves as a pointer indicating the record from which the data querying should start. By default, the exclusive start key is set to null, which means pagination begins with the first record in the table. As you retrieve each set of records, you can use the last evaluated key from the previous query as the exclusive start key for the next iteration, enabling sequential data retrieval.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Last Evaluated Key:&lt;/strong&gt;&lt;br&gt;
The last evaluated key is the key of the last record that DynamoDB reads in a pagination operation. To continue pagination, you need to provide the last evaluated key as the exclusive start key in the subsequent query. When DynamoDB returns a null value for the last evaluated key, it indicates that there are no more records remaining, and pagination is complete.&lt;/p&gt;
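&lt;p&gt;The three pieces above fit together in a simple loop. Below is a minimal sketch assuming a boto3-style Table object; the page size and the key shapes are illustrative assumptions, not fixed values:&lt;/p&gt;

```python
def paginate_query(table, page_size=3, **query_kwargs):
    """Yield items from a DynamoDB query one page at a time.

    'table' is expected to behave like a boto3 Table resource: its
    query() method returns a dict with 'Items' and, while more data
    remains, a 'LastEvaluatedKey'.
    """
    start_key = None  # null start key: pagination begins at the first record
    while True:
        kwargs = dict(query_kwargs, Limit=page_size)
        if start_key is not None:
            kwargs["ExclusiveStartKey"] = start_key
        page = table.query(**kwargs)
        for item in page.get("Items", []):
            yield item
        start_key = page.get("LastEvaluatedKey")
        if start_key is None:  # no more records: pagination is complete
            break
```

&lt;p&gt;Each iteration feeds the previous page's last evaluated key back in as the exclusive start key, until DynamoDB stops returning one.&lt;/p&gt;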

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Implementing pagination in DynamoDB allows you to efficiently retrieve large sets of records by breaking them into smaller, manageable chunks. By carefully assigning the query page size and utilizing the exclusive start key and last evaluated key, you can retrieve data sequentially and overcome the 1 MB querying limit. Pagination is a powerful technique for optimizing data retrieval in DynamoDB and ensuring efficient handling of large datasets.&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>pagination</category>
      <category>dataretrieval</category>
      <category>aws</category>
    </item>
    <item>
      <title>Understanding DynamoDB Scan and Query Operations: A Cost-Efficient Approach to Retrieving Data</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Wed, 07 Jun 2023 02:02:53 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/understanding-dynamodb-scan-and-query-operations-a-cost-efficient-approach-to-retrieving-data-27h3</link>
      <guid>https://dev.to/bharatrajtj/understanding-dynamodb-scan-and-query-operations-a-cost-efficient-approach-to-retrieving-data-27h3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
DynamoDB, a fully managed NoSQL database service by Amazon Web Services (AWS), offers two options for retrieving data from tables: Scan and Query. While Scan may seem like a convenient choice, it is crucial to understand its limitations and cost implications. This article aims to shed light on the Scan and Query operations in DynamoDB, emphasizing the importance of avoiding Scan in a production environment due to its expense. Instead, we'll explore the benefits of utilizing the Query operation for efficient data retrieval.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scan Operation:&lt;/strong&gt;&lt;br&gt;
The Scan operation retrieves data from a DynamoDB table, but it should be used sparingly. When using Scan, the entire table is read before providing the desired result, making it an expensive operation. DynamoDB pricing is based on Read Capacity Units (RCU) and Write Capacity Units (WCU), and Scan consumes a considerable amount of RCU as it reads all the rows. Even though a filter expression can be added to the Scan command, it only applies after reading all the records, resulting in no cost difference. For larger tables, the RCU consumption and subsequent costs can be substantial. Hence, it is strongly advised to avoid using Scan in a production environment.&lt;/p&gt;
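&lt;p&gt;The cost behaviour is visible in the Scan response itself: ScannedCount reports every item read (and billed in RCU), while Count reports only the items that survived the filter. A minimal sketch, assuming a boto3-style Table object:&lt;/p&gt;

```python
def scan_cost_report(table, **scan_kwargs):
    """Run one Scan page and report how much was actually read.

    'table' is expected to behave like a boto3 Table resource.
    'ScannedCount' is what you pay RCU for; 'Count' is what the
    FilterExpression let through.
    """
    page = table.scan(**scan_kwargs)
    return {"read": page["ScannedCount"], "returned": page["Count"]}
```

&lt;p&gt;A large gap between the two numbers is the signature of an expensive Scan: you paid to read far more than you got back.&lt;/p&gt;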

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2ErJmhWZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n548kxnm9jf6d6qosan5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2ErJmhWZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n548kxnm9jf6d6qosan5.png" alt="Image description" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Query Operation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Query operation is an efficient alternative for retrieving data from DynamoDB tables. It leverages the partition key, or a combination of partition key and sort key in the case of a composite key, to retrieve specific data. Like Scan, Query also supports filter expressions, but with a significant difference in cost calculation. Instead of applying RCU for the entire table, the RCU is applied only to the entries that correspond to the filter expression. This approach leads to substantial cost savings for organizations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l-cfbdlr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ftwv9gshohd4ee123t6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l-cfbdlr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ftwv9gshohd4ee123t6.png" alt="Image description" width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Utilizing Query:&lt;/strong&gt;&lt;br&gt;
Query is a suitable choice when the partition key is known. By specifying the partition key, we can retrieve the desired data efficiently. If a sort key is present in the table, it can further simplify the query by narrowing down the results based on the desired sort order. However, if the partition key is unknown, the Query command cannot be implemented effectively.&lt;/p&gt;
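&lt;p&gt;The pattern above can be sketched with a low-level boto3-style DynamoDB client. The table and attribute names (movies, release_year, movie_name) are assumptions for illustration:&lt;/p&gt;

```python
def query_by_year(client, year, name_prefix=None):
    """Query a movies-style table by its partition key, optionally
    narrowing the result by a sort-key prefix.

    'client' is expected to behave like a low-level boto3 DynamoDB
    client: only items matching the key condition are read and billed.
    """
    condition = "release_year = :y"
    values = {":y": {"N": str(year)}}
    if name_prefix is not None:
        condition += " AND begins_with(movie_name, :p)"
        values[":p"] = {"S": name_prefix}
    return client.query(
        TableName="movies",
        KeyConditionExpression=condition,
        ExpressionAttributeValues=values,
    )
```

&lt;p&gt;The partition key condition is mandatory; the sort-key clause is optional and only narrows the already-cheap read further.&lt;/p&gt;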

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Understanding the differences between the Scan and Query operations in DynamoDB is crucial for optimizing data retrieval and minimizing costs. While Scan provides a straightforward way to access data, its expense makes it unsuitable for production environments. On the other hand, Query offers a more efficient and cost-effective approach, utilizing partition keys and filter expressions to retrieve specific data. By leveraging Query wisely and avoiding Scan, organizations can maximize the benefits of DynamoDB while optimizing resource utilization and cost efficiency.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>dynamodb</category>
      <category>aws</category>
      <category>scan</category>
    </item>
    <item>
      <title>Understanding DynamoDB's Primary Keys and Partitions</title>
      <dc:creator>bharatrajtj</dc:creator>
      <pubDate>Sat, 03 Jun 2023 02:51:14 +0000</pubDate>
      <link>https://dev.to/bharatrajtj/understanding-dynamodbs-primary-keys-and-partitions-24nf</link>
      <guid>https://dev.to/bharatrajtj/understanding-dynamodbs-primary-keys-and-partitions-24nf</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
DynamoDB, an AWS NoSQL service, provides efficient and scalable data storage. When creating a DynamoDB table, it is crucial to understand primary keys and partitions. This article will delve into the two types of primary keys, simple and composite, and explain how partitions work in DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Simple Key:&lt;/strong&gt;&lt;br&gt;
A simple key consists of a partition key alone, a unique identifier used for grouping items within a table. DynamoDB organizes data based on this partition key, allowing quick retrieval of related items. In the table below, the partition key is the notes id.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dTAzB-Wz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/plqm7bp8kdr5lbxjoxk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dTAzB-Wz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/plqm7bp8kdr5lbxjoxk0.png" alt="Image description" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you attempt to create another primary key with the same value in a simple key structure, it will result in an error since the partition key must be unique.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jLyVns8P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xljquk4qvg2g3i48tbu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jLyVns8P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xljquk4qvg2g3i48tbu.png" alt="Image description" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Composite Key:&lt;/strong&gt;&lt;br&gt;
A composite key combines a partition key and a sort key. This key type overcomes the limitation of a simple key by allowing multiple items to share the same partition key, as long as each sort key is unique. If the combination of partition key and sort key matches an existing entry in the table, an error will occur.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HXrEx6bl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dy1nhp1o6wzo981mbwe1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HXrEx6bl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dy1nhp1o6wzo981mbwe1.png" alt="Image description" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the below image Release year has been defined as Primary Key and Movie name as Sort key. As you can see we have many items with the same Release year (Primary Key) and different Movie Name (Sort Key)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2JLJR48Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/733qox2jm88qajhbc13s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2JLJR48Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/733qox2jm88qajhbc13s.png" alt="Image description" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Partitions:&lt;/strong&gt;&lt;br&gt;
Partitions play a vital role in DynamoDB's data storage and retrieval process. Each partition key is hashed by a request router, and the hash determines the partition in which the data will be stored. Items with the same partition key hash to the same value and are therefore stored in the same partition. If a sort key is present, items within the partition are stored in ascending order of the sort key's value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partition Limits and Provisioning:&lt;/strong&gt;&lt;br&gt;
Each partition in DynamoDB can store up to 10 GB of data. Partition management is handled by AWS, and each partition can serve at most 3000 Read Capacity Units (RCUs) and 1000 Write Capacity Units (WCUs).&lt;/p&gt;

&lt;p&gt;To determine the minimum number of partitions required, the following formula can be used:&lt;br&gt;
(Number of RCU/3000) + (Number of WCU/1000)&lt;/p&gt;

&lt;p&gt;For example, if you have 1500 RCU and 500 WCU:&lt;br&gt;
(1500/3000) + (500/1000) = 0.5 + 0.5 = 1 partition&lt;/p&gt;

&lt;p&gt;If you have 3000 RCU and 500 WCU:&lt;br&gt;
(3000/3000) + (500/1000) = 1 + 0.5 = 1.5, rounded up to 2 partitions.&lt;br&gt;
DynamoDB scales and manages partitions automatically based on the workload and storage requirements.&lt;/p&gt;
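&lt;p&gt;The formula above is easy to wrap in a small helper (a sketch; the 3000 RCU and 1000 WCU per-partition limits are the ones quoted above):&lt;/p&gt;

```python
import math

def min_partitions(rcu, wcu):
    """Minimum partition count implied by provisioned throughput,
    given per-partition limits of 3000 RCU and 1000 WCU."""
    return max(1, math.ceil(rcu / 3000 + wcu / 1000))
```

&lt;p&gt;min_partitions(1500, 500) gives 1 and min_partitions(3000, 500) gives 2, matching the worked examples above.&lt;/p&gt;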

&lt;p&gt;It's important to note that RCU and WCU are distributed equally among the available partitions. If the request rate to a single partition exceeds its share of the provisioned capacity, throttling may occur. DynamoDB's adaptive capacity mitigates this by shifting unused RCU and WCU between partitions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choosing Partition Keys:&lt;/strong&gt;&lt;br&gt;
Selecting a suitable partition key is crucial for efficient data distribution and preventing hot partitions. It is recommended to choose a partition key with high cardinality values to evenly distribute the request rate across partitions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Understanding primary keys and partitions in DynamoDB is essential for designing scalable and performant database tables. By selecting appropriate primary key types and carefully choosing partition keys, you can optimize data retrieval and avoid bottlenecks in your DynamoDB applications.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
