<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Prajwal Patil</title>
    <description>The latest articles on DEV Community by Prajwal Patil (@prajwal2023).</description>
    <link>https://dev.to/prajwal2023</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1452715%2Fe6b0d5dd-73f6-4390-bf55-b785001ce0af.jpeg</url>
      <title>DEV Community: Prajwal Patil</title>
      <link>https://dev.to/prajwal2023</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prajwal2023"/>
    <language>en</language>
    <item>
      <title>"Streamlining CI/CD: A Step-by-Step Guide to Setting Up Jenkins on Docker"</title>
      <dc:creator>Prajwal Patil</dc:creator>
      <pubDate>Sun, 26 May 2024 10:00:49 +0000</pubDate>
      <link>https://dev.to/prajwal2023/streamlining-cicd-a-step-by-step-guide-to-setting-up-jenkins-on-docker-2b6</link>
      <guid>https://dev.to/prajwal2023/streamlining-cicd-a-step-by-step-guide-to-setting-up-jenkins-on-docker-2b6</guid>
      <description>&lt;p&gt;How to Set Up Jenkins on Docker&lt;br&gt;
&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Jenkins is a widely used open-source automation server that helps automate the non-human part of the software development process. Docker, on the other hand, is a platform that enables developers to create, deploy, and run applications in containers. Combining Jenkins with Docker provides a powerful tool for continuous integration and continuous delivery (CI/CD).&lt;br&gt;
In this guide, we'll walk through the steps to set up Jenkins on Docker.&lt;br&gt;
&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed on your system.&lt;/li&gt;
&lt;li&gt;Basic understanding of Docker and Jenkins.&lt;/li&gt;
&lt;li&gt;Sufficient privileges to run Docker commands.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Setting up Jenkins on Docker offers several advantages that can streamline and enhance your CI/CD workflow. Here are the key reasons why you might choose to deploy Jenkins using Docker:&lt;br&gt;
&lt;strong&gt;1. Consistency and Isolation&lt;/strong&gt;&lt;br&gt;
Consistency: Docker containers ensure that Jenkins runs in a consistent environment across different development, testing, and production environments. This consistency helps eliminate issues caused by variations in software configurations.&lt;br&gt;
Isolation: Docker containers isolate Jenkins and its dependencies from other applications on the host system. This isolation helps prevent conflicts and makes it easier to manage dependencies.&lt;br&gt;
&lt;strong&gt;2. Simplified Setup and Configuration&lt;/strong&gt;&lt;br&gt;
Ease of Setup: Docker simplifies the setup process by allowing you to pull and run pre-configured Jenkins images. This reduces the complexity involved in manually installing Jenkins and configuring its environment.&lt;br&gt;
Configuration Management: Docker makes it easy to version and manage configurations through Dockerfiles and Docker Compose, ensuring that your Jenkins setup can be easily replicated or modified.&lt;br&gt;
&lt;strong&gt;3. Portability&lt;/strong&gt;&lt;br&gt;
Docker containers can run on any system that supports Docker, making your Jenkins setup highly portable. This portability is particularly useful for developers working in different environments or for teams that need to move their CI/CD pipeline across various stages of development and production.&lt;br&gt;
&lt;strong&gt;4. Scalability&lt;/strong&gt;&lt;br&gt;
Resource Allocation: Docker allows you to allocate specific resources (CPU, memory) to Jenkins containers, ensuring that Jenkins performs optimally without affecting other applications.&lt;br&gt;
Scaling: Running Jenkins in Docker containers makes it easier to scale your CI/CD infrastructure. You can quickly spin up additional Jenkins instances to handle increased workloads or parallelize build processes.&lt;br&gt;
&lt;strong&gt;5. Simplified Maintenance and Upgrades&lt;/strong&gt;&lt;br&gt;
Upgrades: Upgrading Jenkins is straightforward with Docker. You can pull the latest Jenkins image and recreate the container without worrying about breaking the underlying system.&lt;br&gt;
Backup and Recovery: Docker volumes can be used to persist Jenkins data, making it easier to backup and restore configurations, jobs, and build history.&lt;br&gt;
&lt;strong&gt;6. Security&lt;/strong&gt;&lt;br&gt;
Sandboxing: Docker containers provide an additional layer of security by sandboxing Jenkins from the host system. This reduces the risk of potential vulnerabilities in Jenkins affecting the host.&lt;br&gt;
Controlled Access: Docker's networking and permission features allow for fine-grained control over how Jenkins interacts with other services and the network.&lt;br&gt;
&lt;strong&gt;7. DevOps Integration&lt;/strong&gt;&lt;br&gt;
Docker is a staple in modern DevOps practices. Running Jenkins on Docker integrates seamlessly with other containerized services and tools in your DevOps pipeline, promoting a more cohesive and efficient workflow.&lt;br&gt;
&lt;strong&gt;Step-by-Step Guide&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Step 1: Install Docker&lt;/strong&gt;&lt;br&gt;
Before setting up Jenkins, ensure Docker is installed on your machine.&lt;br&gt;
For Ubuntu:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Pull the Jenkins Docker Image&lt;/strong&gt;&lt;br&gt;
Jenkins maintains an official Docker image. To pull the latest Jenkins image, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The lts tag refers to the Long Term Support version, which is stable and recommended for most users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Run the Jenkins Container&lt;/strong&gt;&lt;br&gt;
Create and start a Jenkins container with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 8080:8080 jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a breakdown of the command:&lt;br&gt;
-d: Run the container in detached mode.&lt;br&gt;
-p 8080:8080: Map port 8080 of the host to port 8080 of the container (Jenkins web int)&lt;/p&gt;
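
&lt;p&gt;Equivalently, the container can be described declaratively with Docker Compose (mentioned above under configuration management). Here is a minimal sketch; the volume and port mappings mirror the run command and can be adjusted to your environment:&lt;/p&gt;

```yaml
# docker-compose.yml -- minimal Jenkins service sketch.
# Port 8080 serves the web UI; the named volume persists
# jobs and configuration across container restarts.
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home
    restart: unless-stopped

volumes:
  jenkins_home:
```

&lt;p&gt;Start it with docker compose up -d.&lt;/p&gt;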

&lt;p&gt;&lt;strong&gt;Step 4: Access Jenkins&lt;/strong&gt;&lt;br&gt;
Once the container is running, you can access Jenkins by navigating to &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt; in your web browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Unlock Jenkins&lt;/strong&gt;&lt;br&gt;
On your first visit, Jenkins will ask you to unlock it using a password stored in the Docker container. Retrieve this password with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it jenkins bash
cat /var/jenkins_home/secrets/initialAdminPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the password and paste it into the Jenkins unlock page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Install Suggested Plugins&lt;/strong&gt;&lt;br&gt;
After unlocking Jenkins, you'll be prompted to install plugins. Choose the "Install suggested plugins" option to get started quickly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Create an Admin User&lt;/strong&gt;&lt;br&gt;
Next, you'll need to create an admin user. Fill in the required details and complete the setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Configure Jenkins&lt;/strong&gt;&lt;br&gt;
Now that Jenkins is set up, you can start configuring it to suit your project needs. This includes setting up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Global Tool Configuration: Define the locations for JDK, Git, Gradle, etc.&lt;/li&gt;
&lt;li&gt;Credentials: Add necessary credentials for accessing repositories and other tools.&lt;/li&gt;
&lt;li&gt;Jobs/Pipelines: Create jobs or pipelines for your CI/CD process.&lt;/li&gt;
&lt;/ol&gt;
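
&lt;p&gt;For jobs and pipelines, the pipeline definition typically lives in a Jenkinsfile committed to the repository. A minimal declarative sketch follows; the stage names and commands are illustrative placeholders, not part of the setup above:&lt;/p&gt;

```groovy
// Jenkinsfile -- minimal declarative pipeline sketch.
// Replace the echo steps with your real build and test commands.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building the project...'
            }
        }
        stage('Test') {
            steps {
                echo 'Running tests...'
            }
        }
    }
}
```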

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Deploying Jenkins on Docker simplifies the setup and management of your CI/CD pipeline. Docker containers provide a consistent environment for Jenkins, enhancing the reliability and scalability of your build process.&lt;br&gt;
By following the steps outlined in this guide, you will have a fully functional Jenkins server running in a Docker container. This setup allows you to explore and leverage Jenkins' extensive range of plugins and configurations to further optimize your CI/CD workflow.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>jenkins</category>
      <category>docker</category>
    </item>
    <item>
      <title>“Automating VPC Peering in AWS with Terraform”</title>
      <dc:creator>Prajwal Patil</dc:creator>
      <pubDate>Thu, 09 May 2024 05:53:33 +0000</pubDate>
      <link>https://dev.to/prajwal2023/automating-vpc-peering-in-aws-with-terraform-2hgm</link>
      <guid>https://dev.to/prajwal2023/automating-vpc-peering-in-aws-with-terraform-2hgm</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;:&lt;br&gt;
In today’s cloud-centric world, networking infrastructure plays a crucial role in ensuring the connectivity and security of applications and services. One common networking pattern is VPC peering, which allows different Virtual Private Clouds (VPCs) to communicate with each other securely. In this blog post, we’ll explore how to automate the setup of VPC peering using Terraform, a popular Infrastructure as Code (IaC) tool. By leveraging Terraform’s declarative syntax and AWS provider, we can simplify the process of configuring VPC peering connections, saving time and reducing the chance of human error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Main Content:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding VPC Peering:&lt;/strong&gt; We’ll start by discussing the concept of VPC peering and its significance in cloud networking. This section will cover the benefits of VPC peering, such as improved connectivity between VPCs and reduced data transfer costs.&lt;br&gt;
&lt;strong&gt;Setting Up the Terraform Environment:&lt;/strong&gt; Next, we’ll guide readers through the setup of a Terraform environment for managing AWS resources. This includes installing Terraform, configuring AWS credentials, and initializing a Terraform project.&lt;br&gt;
&lt;strong&gt;Defining VPCs and Internet Gateways:&lt;/strong&gt; In this section, we’ll use Terraform to define two VPCs and create internet gateways for each VPC. These components are essential prerequisites for establishing VPC peering connections.&lt;br&gt;
&lt;strong&gt;Creating VPC Peering Connections:&lt;/strong&gt; Using Terraform’s AWS provider, we’ll programmatically create VPC peering connections between the two VPCs defined earlier. We’ll specify the necessary parameters such as VPC IDs and enable auto-acceptance of peering requests.&lt;br&gt;
&lt;strong&gt;Verifying the Peering Connection:&lt;/strong&gt; After deploying the Terraform configuration, we’ll demonstrate how to verify the status of the VPC peering connection using the AWS Management Console or CLI. This step ensures that the peering connection is successfully established and ready for use.&lt;/p&gt;
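
&lt;p&gt;The components above can be sketched in Terraform roughly as follows. The CIDR blocks, names, and region are illustrative assumptions, not values taken from the repository:&lt;/p&gt;

```hcl
# main.tf -- sketch of two VPCs peered within one account and region.
provider "aws" {
  region = "us-east-1" # illustrative region
}

resource "aws_vpc" "vpc_a" {
  cidr_block = "10.0.0.0/16" # CIDRs must not overlap for peering
  tags       = { Name = "vpc-a" }
}

resource "aws_vpc" "vpc_b" {
  cidr_block = "10.1.0.0/16"
  tags       = { Name = "vpc-b" }
}

resource "aws_internet_gateway" "igw_a" {
  vpc_id = aws_vpc.vpc_a.id
}

resource "aws_internet_gateway" "igw_b" {
  vpc_id = aws_vpc.vpc_b.id
}

# Same-account, same-region peering can be auto-accepted.
resource "aws_vpc_peering_connection" "a_to_b" {
  vpc_id      = aws_vpc.vpc_a.id
  peer_vpc_id = aws_vpc.vpc_b.id
  auto_accept = true
}
```

&lt;p&gt;Note that traffic only flows once each VPC's route tables have routes pointing at the peering connection (for example via aws_route resources referencing the peering connection ID).&lt;/p&gt;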

&lt;p&gt;Launch an EC2 instance&lt;/p&gt;

&lt;p&gt;Install Terraform&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#! /bin/bash
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update
sudo apt install terraform -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Git&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install -y git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install awscli&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#! /bin/bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cloning the repo&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/Prajwal2023/vpc-peering.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the &lt;strong&gt;terraform init&lt;/strong&gt; command&lt;br&gt;
“terraform init” is a command used to initialize a Terraform working directory. When you run this command, Terraform reads the configuration files in the directory and downloads any required plugins or modules specified in those files. This command prepares the directory for Terraform operations such as planning, applying, or destroying infrastructure resources. It ensures that the necessary dependencies are available for managing your infrastructure with Terraform.&lt;/p&gt;

&lt;p&gt;Now &lt;strong&gt;terraform plan&lt;/strong&gt;&lt;br&gt;
“terraform plan” is a command used to create an execution plan. When you run this command, Terraform compares the current state of your infrastructure with the desired state defined in your Terraform configuration files. It then generates an execution plan that outlines what actions Terraform will take to achieve the desired state. The plan includes information about which resources will be created, modified, or destroyed. Running “terraform plan” allows you to preview the changes that Terraform will make to your infrastructure before actually applying them. This helps you verify that the planned changes are as expected and provides an opportunity to review and confirm them before proceeding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform validate&lt;/strong&gt;&lt;br&gt;
“terraform validate” checks Terraform configuration files for errors, ensuring correct syntax and structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform apply&lt;/strong&gt;&lt;br&gt;
“terraform apply” is a command used in Terraform to apply the changes described in your Terraform configuration files to your infrastructure. When you run this command, Terraform reads the configuration files, creates an execution plan, and then executes that plan to provision, update, or delete the resources specified in the configuration. This command is typically used after running “terraform plan” to review the proposed changes and before making any modifications to your infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform destroy&lt;/strong&gt;&lt;br&gt;
The “terraform destroy” command is used to destroy all the resources defined in your Terraform configuration. It deletes all the resources that Terraform manages, effectively tearing down your infrastructure. Use this command with caution as it cannot be undone and may result in the permanent loss of data or resources. Always verify the resources that will be destroyed before executing this command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;:&lt;br&gt;
Automating the setup of VPC peering connections with Terraform streamlines the process of configuring cloud networking infrastructure. By codifying infrastructure configurations, teams can easily replicate and manage VPC peering across different environments with consistency and reliability. As organizations embrace cloud-native architectures, Terraform serves as a valuable tool for simplifying complex networking tasks and accelerating the adoption of cloud technologies.&lt;/p&gt;

&lt;p&gt;Connect on LinkedIn &lt;a href="https://www.linkedin.com/in/prajwal-patil-334002296/"&gt;https://www.linkedin.com/in/prajwal-patil-334002296/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>sre</category>
      <category>cloud</category>
    </item>
    <item>
      <title>“Implementing AWS Recycle Bin Service for Enhanced Data Recovery”</title>
      <dc:creator>Prajwal Patil</dc:creator>
      <pubDate>Tue, 07 May 2024 18:52:52 +0000</pubDate>
      <link>https://dev.to/prajwal2023/implementing-aws-recycle-bin-service-for-enhanced-data-recovery-863</link>
      <guid>https://dev.to/prajwal2023/implementing-aws-recycle-bin-service-for-enhanced-data-recovery-863</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
In the fast-paced world of cloud computing, data loss can have significant consequences for businesses. Accidental deletions or unexpected failures can lead to critical data loss, resulting in downtime and potential financial losses. To mitigate these risks, AWS offers a powerful solution known as the Recycle Bin Service, which provides enhanced data recovery capabilities for AWS resources like Elastic Block Store (EBS) volumes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Launch an instance&lt;/p&gt;

&lt;p&gt;EBS Information&lt;/p&gt;

&lt;p&gt;AWS Elastic Block Store (EBS) is a scalable block storage service provided by Amazon Web Services (AWS), offering persistent storage volumes for EC2 instances. Key features include block storage, data persistence, elasticity, snapshots, encryption, and high availability, making it ideal for various storage needs in the cloud.&lt;/p&gt;

&lt;p&gt;Taking Snapshot&lt;/p&gt;

&lt;p&gt;An AWS EBS snapshot is a point-in-time backup of an EBS volume stored in Amazon S3. It captures all the data on the volume at the time the snapshot is taken, including the data that is in use and any data that is pending writes. Snapshots are incremental, meaning that only the blocks on the volume that have changed since the last snapshot are saved, which reduces storage costs and improves efficiency. EBS snapshots are commonly used for data backup, disaster recovery, and creating new volumes from existing data.&lt;/p&gt;

&lt;p&gt;Create Snapshot&lt;/p&gt;

&lt;p&gt;Recycle bin&lt;/p&gt;

&lt;p&gt;Recycle Bin is a native AWS service that protects recently deleted resources. It currently supports Amazon EBS snapshots and EBS-backed Amazon Machine Images (AMIs): when a retention rule covers a resource, deleting that resource does not remove it permanently. Instead, it is moved to the Recycle Bin, where it is retained for the period you specify before final deletion.&lt;/p&gt;

&lt;p&gt;During the retention period, the resource can be restored with its attributes intact, giving users a safety net for accidentally deleted snapshots without having to resort to complex backup and recovery processes. Once the retention period expires, the resource is permanently deleted.&lt;/p&gt;

&lt;p&gt;Retention rules define which resource type a rule applies to, which resources it covers (all matching resources in the Region, or only those with specific tags), and how long they are retained. An optional rule lock prevents the rule itself from being modified or deleted until it is unlocked, protecting the retention policy against accidental or malicious changes.&lt;/p&gt;

&lt;p&gt;Create retention rule&lt;/p&gt;

&lt;p&gt;Retention settings&lt;/p&gt;

&lt;p&gt;Rule lock settings&lt;/p&gt;

&lt;p&gt;Rule created&lt;/p&gt;
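
&lt;p&gt;The retention rule created above corresponds to a single AWS CLI call. Here is a sketch; the 7-day period and description are illustrative:&lt;/p&gt;

```shell
# Create a Recycle Bin retention rule for deleted EBS snapshots.
aws rbin create-rule \
  --retention-period RetentionPeriodValue=7,RetentionPeriodUnit=DAYS \
  --resource-type EBS_SNAPSHOT \
  --description "Retain deleted EBS snapshots for 7 days"

# List existing retention rules for EBS snapshots.
aws rbin list-rules --resource-type EBS_SNAPSHOT
```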

&lt;p&gt;Now delete the snapshot&lt;/p&gt;

&lt;p&gt;Deleted&lt;/p&gt;

&lt;p&gt;Open the Recycle Bin&lt;/p&gt;

&lt;p&gt;Click on the snapshot present in the Recycle Bin&lt;/p&gt;

&lt;p&gt;Recovered&lt;/p&gt;

&lt;p&gt;Snapshot recovered successfully&lt;/p&gt;
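
&lt;p&gt;The same recovery can be performed from the CLI. A sketch follows; the snapshot ID is an illustrative placeholder:&lt;/p&gt;

```shell
# List snapshots currently held in the Recycle Bin.
aws ec2 list-snapshots-in-recycle-bin

# Restore a specific snapshot (the ID shown is a placeholder).
aws ec2 restore-snapshot-from-recycle-bin --snapshot-id snap-0123456789abcdef0
```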

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
The AWS Recycle Bin Service provides a robust mechanism for protecting critical data and simplifying data recovery processes in AWS environments. By implementing this service and following best practices, businesses can minimize the impact of data loss incidents and ensure continuous operations in the cloud. With its automated recovery workflow and comprehensive features, the Recycle Bin Service is a valuable tool for modern cloud-native architectures.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
