<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Soram Varma</title>
    <description>The latest articles on DEV Community by Soram Varma (@soram).</description>
    <link>https://dev.to/soram</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1526451%2Fb2648b43-d75c-4f17-bbe1-2916746c051a.png</url>
      <title>DEV Community: Soram Varma</title>
      <link>https://dev.to/soram</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/soram"/>
    <language>en</language>
    <item>
      <title>"Argo CD: A GitOps Tool for Kubernetes Continuous Deployment"</title>
      <dc:creator>Soram Varma</dc:creator>
      <pubDate>Sat, 22 Mar 2025 14:56:56 +0000</pubDate>
      <link>https://dev.to/soram/argo-cd-a-gitops-tool-for-kubernetes-continuous-deployment-3d2k</link>
      <guid>https://dev.to/soram/argo-cd-a-gitops-tool-for-kubernetes-continuous-deployment-3d2k</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In modern DevOps practices, ensuring seamless and automated deployments for Kubernetes applications is crucial. Traditional CI/CD tools like Jenkins often require extensive scripting and lack real-time synchronization with Kubernetes clusters. This is where Argo CD comes in—a GitOps-based continuous deployment tool designed specifically for Kubernetes. In this blog, we will explore what Argo CD is, its architecture, installation process, and why it stands out as a better choice for Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is GitOps?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before diving into Argo CD, it is necessary to understand GitOps first:&lt;br&gt;
GitOps is an operational framework that uses Git repositories as the single source of truth for managing infrastructure and application deployments. This approach enables version control, easy rollbacks, and automated synchronization of the actual state with the desired state.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Here, a single source of truth means that any change to application deployments or infrastructure must originate from Git. Direct modifications in Kubernetes, Terraform, or other platforms are not allowed, which keeps deployments secure and consistent.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;What is Argo CD?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Argo CD is a continuous deployment tool for Kubernetes that follows the GitOps approach. It works on a pull-based mechanism, ensuring that the actual state of applications matches the desired state defined in a Git repository. If any discrepancy occurs, whether due to a manual change in Kubernetes or an update in Git, Argo CD detects it. Manual modifications in Kubernetes put the application into an OutOfSync state, which Argo CD can revert. It continuously monitors the Git repository and applies changes only when new updates are detected, ensuring automated synchronization.&lt;/p&gt;
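&lt;p&gt;To make the pull-based model concrete, here is a minimal sketch of an Argo CD Application manifest declaring the desired state; the repo URL, path, and names are hypothetical placeholders:&lt;/p&gt;

```yaml
# Minimal Argo CD Application (hypothetical repo URL, path, and names)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/demo-manifests.git  # hypothetical repo
    targetRevision: HEAD
    path: k8s                                               # hypothetical path
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes in the cluster
```

&lt;p&gt;With &lt;code&gt;selfHeal&lt;/code&gt; enabled, manual changes in the cluster are reverted to the state declared in Git.&lt;/p&gt;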

&lt;p&gt;&lt;strong&gt;Key Features of Argo CD&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declarative, GitOps-based deployment&lt;/li&gt;
&lt;li&gt;Versioned and Immutable&lt;/li&gt;
&lt;li&gt;Automated synchronization of desired and actual state&lt;/li&gt;
&lt;li&gt;UI and CLI for better accessibility&lt;/li&gt;
&lt;li&gt;Role-Based Access Control (RBAC)&lt;/li&gt;
&lt;li&gt;Self-healing capabilities&lt;/li&gt;
&lt;li&gt;Continuous Observation/Monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Argo CD Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API server:&lt;/strong&gt; A gRPC server (gRPC is a high-performance Remote Procedure Call framework that uses HTTP/2 for efficient communication) that serves as the interface through which users interact with Argo CD via the UI or CLI.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Its responsibilities are:&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Application management and status reporting&lt;/li&gt;
&lt;li&gt;  Operations like rollback, sync, etc.&lt;/li&gt;
&lt;li&gt;  Credential management for repositories and clusters&lt;/li&gt;
&lt;li&gt;  Enforcing RBAC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Repo server:&lt;/strong&gt; This service keeps a local cache of the Git repositories that hold the manifest files. It generates the Kubernetes manifests and sends them to the cluster based on the inputs provided in Argo CD, such as the repo URL, the path (filename), and values if using Helm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application controller:&lt;/strong&gt; This is the Kubernetes controller that continuously monitors the running application and compares its current state against the desired state. If the states differ, or if changes are made manually on the Kubernetes side, the application becomes OutOfSync and the controller reverts it on its own to the desired state originally defined in the Git repo. This is what gives Argo CD its self-healing ability.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Responsibilities:&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detects any configuration drift between the actual and desired state&lt;/li&gt;
&lt;li&gt;Automatically synchronizes changes when discrepancies occur&lt;/li&gt;
&lt;li&gt;Ensures self-healing by reverting unauthorized manual changes in Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Installing Argo CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Argo CD is installed directly in a Kubernetes cluster and can be accessed via both UI and CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Install Argo CD on Kubernetes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates an argocd namespace and installs Argo CD components from the official GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Expose the Argo CD Service&lt;/strong&gt;&lt;br&gt;
By default, Argo CD runs with a ClusterIP service. To access it externally, change it to NodePort:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit service/argocd-server -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modify the type field from ClusterIP to NodePort and save the changes.&lt;/p&gt;
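&lt;p&gt;As a non-interactive alternative to &lt;code&gt;kubectl edit&lt;/code&gt;, the same change can be applied with a one-line patch (a sketch; it assumes the default service name &lt;code&gt;argocd-server&lt;/code&gt;):&lt;/p&gt;

```shell
# Switch the service type from ClusterIP to NodePort without opening an editor
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
```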

&lt;p&gt;&lt;strong&gt;3. Retrieve Argo CD Admin Password&lt;/strong&gt;&lt;br&gt;
Argo CD’s default username is admin, and the password is stored in a Kubernetes secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 --decode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use this password to log in via the web UI.&lt;/p&gt;
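&lt;p&gt;The pipeline above first extracts the base64-encoded password from the secret and then decodes it. The decode step can be sketched in isolation with a dummy value (not a real secret):&lt;/p&gt;

```shell
# Round-trip a dummy value through base64, mirroring the decode step above
encoded=$(printf 'demo-password' | base64)
printf '%s' "$encoded" | base64 --decode
# prints: demo-password
```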

&lt;p&gt;&lt;strong&gt;4. Install and Access Argo CD via the CLI&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To log in via CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;argocd login &amp;lt;server-host&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Retrieve the initial admin password using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;argocd admin initial-password -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why Use Argo CD Over Jenkins?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Jenkins is widely used for CI/CD, but when it comes to Kubernetes deployments, it has limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jenkins requires scripts for automation, whereas Argo CD follows a declarative approach.&lt;/li&gt;
&lt;li&gt;Jenkins does not monitor Kubernetes clusters for state changes, whereas Argo CD continuously syncs the actual and desired states.&lt;/li&gt;
&lt;li&gt;Argo CD provides a UI and CLI specifically designed for Kubernetes deployment workflows.&lt;/li&gt;
&lt;li&gt;Argo CD supports Helm and native YAML manifests, making it flexible for various deployment strategies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Alternatives to Argo CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another popular GitOps tool is Flux CD, which also follows a pull-based deployment model. The choice between Argo CD and Flux CD depends on the use case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Argo CD is better for teams requiring a UI, RBAC, and advanced deployment strategies.&lt;/li&gt;
&lt;li&gt;Flux CD is lightweight and integrates well with Kubernetes-native tooling like Helm Operator.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Argo CD simplifies Kubernetes application deployment by automating synchronization with Git repositories. Its pull-based mechanism ensures self-healing capabilities, reducing the chances of manual errors. Compared to traditional CI/CD tools like Jenkins, Argo CD provides a more Kubernetes-native approach to continuous deployment.&lt;/p&gt;

&lt;p&gt;If you are managing Kubernetes applications and want a robust, automated GitOps workflow, Argo CD is a powerful tool worth integrating into your DevOps pipeline. &lt;br&gt;
Happy CD! 🚀&lt;/p&gt;

</description>
      <category>gitops</category>
      <category>argocd</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Understanding Multi-Stage Dockerfile: A Guide for Efficient Containerization</title>
      <dc:creator>Soram Varma</dc:creator>
      <pubDate>Mon, 10 Mar 2025 08:21:25 +0000</pubDate>
      <link>https://dev.to/soram/understanding-multi-stage-dockerfile-a-guide-for-efficient-containerization-1f5b</link>
      <guid>https://dev.to/soram/understanding-multi-stage-dockerfile-a-guide-for-efficient-containerization-1f5b</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker has revolutionized the way applications are deployed by providing a containerized environment that ensures consistency across development, testing, and production. A key aspect of Docker is the Dockerfile, which is used to define and build custom container images based on specific requirements.&lt;br&gt;
However, when dealing with large applications, traditional Dockerfiles can result in unnecessarily large images, consuming excessive storage and making deployments less efficient. This is where Multi-Stage Dockerfiles&lt;br&gt;
come into play, helping to optimize the image size and improve security by only including what is necessary for the final deployment.&lt;/p&gt;

&lt;p&gt;In this blog, we will explore Multi-Stage Dockerfiles, their benefits, and how they enhance the deployment process. We’ll also walk through an example using Maven and TomEE to build and deploy a Java-based web application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Multi-Stage Dockerfile?&lt;/strong&gt;&lt;br&gt;
A Multi-Stage Dockerfile is an advanced feature of Docker that allows multiple stages within a single Dockerfile. Each stage can use a different base image, and only the necessary artifacts from the previous stages are copied to the final image. This significantly reduces the final image size and enhances security by eliminating unnecessary dependencies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits of a Multi-Stage Dockerfile:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optimized Image Size: Reduces storage usage by only including essential artifacts.&lt;/li&gt;
&lt;li&gt;Improved Security: The final image is free from unnecessary build tools, reducing potential attack vectors.&lt;/li&gt;
&lt;li&gt;Simplified Workflow: Eliminates the need for separate Dockerfiles for building and running applications.&lt;/li&gt;
&lt;li&gt;Better Maintainability: Keeps Dockerfiles clean and easier to manage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Multi-Stage Dockerfile Example: Java-Based Application&lt;/strong&gt;&lt;br&gt;
Below is an overview of a Multi-Stage Dockerfile.&lt;br&gt;
Let's break down this Multi-Stage Dockerfile for a Java-based application that uses Maven for building and TomEE as the deployment base image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build Stage
FROM maven: latest AS build
WORKDIR /app
COPY . .
RUN mvn clean package

# Deployment Stage
FROM tomee:latest  # Includes Java and Tomcat (TomEE)
WORKDIR /app
COPY --from=build /app/target/*.war /usr/local/tomee/webapps/ROOT.war
EXPOSE 8080
CMD ["catalina.sh", "run"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
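&lt;p&gt;Assuming the Dockerfile above sits at the project root, the image can be built and run as a quick sketch (the image name &lt;code&gt;demo-webapp&lt;/code&gt; is a hypothetical placeholder):&lt;/p&gt;

```shell
# Build the image; only the final TomEE stage ends up in demo-webapp
docker build -t demo-webapp .

# Run it in the background and expose the application on port 8080
docker run -d --name demo-webapp -p 8080:8080 demo-webapp
```

&lt;p&gt;The intermediate Maven stage is not part of the final image, which is why the result stays small.&lt;/p&gt;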



&lt;p&gt;&lt;strong&gt;Stage 1: Build Stage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base Image: maven:latest – Includes Maven to compile the Java project.&lt;/li&gt;
&lt;li&gt;Working Directory: /app&lt;/li&gt;
&lt;li&gt;COPY Instruction: Copies all source code and the pom.xml file from the local machine to the container.&lt;/li&gt;
&lt;li&gt;RUN Instruction: Executes mvn clean package, which compiles the application and creates a .war file in the target directory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stage 2: Deployment Stage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base Image: tomee:latest – Includes Java and TomEE.&lt;/li&gt;
&lt;li&gt;Working Directory: /app&lt;/li&gt;
&lt;li&gt;COPY --from=build: Extracts only the .war file from the previous stage and moves it to TomEE's deployment directory (/usr/local/tomee/webapps/ROOT.war).&lt;/li&gt;
&lt;li&gt;EXPOSE 8080: Opens port 8080 to allow access to the application.&lt;/li&gt;
&lt;li&gt;CMD Instruction: Starts the TomEE server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why Use &lt;code&gt;ROOT.war&lt;/code&gt;?&lt;/strong&gt;&lt;br&gt;
By default, a .war file deployed inside webapps/ is accessible via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:8080/&amp;lt;context_path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, when named ROOT.war, the application is deployed as the root, making it accessible directly at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:8080/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This eliminates the need to specify the context path in the URL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use a Multi-Stage Dockerfile?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A traditional approach would involve creating separate images for the build and deployment, but this can lead to large image sizes and unnecessary complexity. Here’s why using a Multi-Stage Dockerfile is beneficial:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;Reduces Image Size&lt;/em&gt;:
A standard image containing Maven and build dependencies can be several gigabytes in size. With a Multi-Stage Dockerfile, only the essential .war file is included in the final image, significantly reducing its size.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Enhances Security&lt;/em&gt;:
By removing unnecessary build tools from the final image, we minimize the attack surface, making the containerized application more secure.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Improves Deployment Efficiency&lt;/em&gt;:
Since the final image only contains runtime dependencies, it results in faster deployment times and efficient resource utilization.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Encourages Best Practices&lt;/em&gt;:
Keeping build and runtime environments separate aligns with industry best practices, ensuring better maintainability and scalability.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multi-Stage Dockerfiles are an essential feature for optimizing Docker images, reducing storage requirements, and improving security. By carefully structuring build and deployment stages, you can streamline the development workflow and deploy lightweight, efficient containers.&lt;/p&gt;

&lt;p&gt;In this guide, we explored a practical Java-based example using Maven and TomEE, highlighting the benefits of this approach. Whether you're developing microservices, monolithic applications, or frontend-heavy web apps, adopting Multi-Stage Dockerfiles will greatly enhance your containerization strategy.&lt;/p&gt;

&lt;p&gt;Happy Containerizing!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockerfile</category>
      <category>devops</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>How can I ping the main machine on which I just launched my private instance?</title>
      <dc:creator>Soram Varma</dc:creator>
      <pubDate>Sun, 16 Feb 2025 08:09:29 +0000</pubDate>
      <link>https://dev.to/soram/how-can-i-ping-the-main-machine-on-which-i-just-launched-my-private-instance-2a92</link>
      <guid>https://dev.to/soram/how-can-i-ping-the-main-machine-on-which-i-just-launched-my-private-instance-2a92</guid>
      <description>&lt;p&gt;When working with cloud infrastructure, it's common to have private instances that don't have direct internet access. These instances often rely on a NAT (Network Address Translation) gateway to access the internet for updates or external communications. However, this setup can make it challenging to ping the main machine (or bastion host) from the private instance, or vice versa, due to the lack of a direct route.&lt;/p&gt;

&lt;p&gt;In this blog, we'll explore how you can ping the main machine from a private instance that has no internet connection and uses a NAT gateway for outbound traffic. We'll also discuss why this setup behaves the way it does and provide a step-by-step guide to achieve the desired connectivity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Can't You Ping the Main Machine?&lt;/strong&gt;&lt;br&gt;
In a typical cloud setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Private Instances: These instances are placed in a private subnet and do not have public IP addresses. They rely on a &lt;u&gt;NAT gateway to access the internet for outbound traffic&lt;/u&gt;.&lt;/li&gt;
&lt;li&gt;NAT Gateway: The NAT gateway allows private instances to initiate outbound connections to the internet but does not allow inbound connections from the internet or other instances.&lt;/li&gt;
&lt;li&gt;Main Machine (Bastion Host): This is usually a public-facing instance that acts as a gateway to access private instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The inability to ping the main machine from the private instance (or vice versa) is due to the following reasons:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Private instances lack public IP addresses, so they cannot be directly accessed from the internet.&lt;/li&gt;
&lt;li&gt;The NAT gateway only facilitates outbound traffic and does not route inbound traffic to private instances.&lt;/li&gt;
&lt;li&gt;Security groups and network ACLs (Access Control Lists) may block ICMP (ping) traffic between the instances.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide to Ping the Main Machine&lt;/strong&gt;&lt;br&gt;
To enable pinging between the main machine and the private instance, follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Set Up a Bastion Host&lt;/strong&gt;&lt;br&gt;
Ensure you have a bastion host (main machine) in a public subnet with a public IP address.&lt;br&gt;
Configure the security group of the bastion host to allow SSH access (port 22) from your IP address.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Configure Security Groups&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;For the private instance:&lt;/em&gt;&lt;br&gt;
Allow inbound ICMP (ping) traffic from the bastion host's private IP address.&lt;br&gt;
Allow inbound SSH traffic from the bastion host's private IP address.&lt;br&gt;
&lt;em&gt;For the bastion host:&lt;/em&gt;&lt;br&gt;
Allow outbound ICMP traffic to the private instance's private IP address.&lt;/p&gt;
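&lt;p&gt;On AWS, the ICMP rule for the private instance can be added with the CLI as a one-off sketch (the security group ID and CIDR below are hypothetical placeholders):&lt;/p&gt;

```shell
# Allow all ICMP types from the bastion's subnet into the private instance's SG
# (sg-0123456789abcdef0 and 10.0.1.0/24 are hypothetical)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol icmp \
    --port -1 \
    --cidr 10.0.1.0/24
```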

&lt;p&gt;&lt;strong&gt;3. Use SSH Tunneling&lt;/strong&gt;&lt;br&gt;
Since the private instance cannot be directly accessed, you can use the bastion host as a jump server to establish a connection.&lt;/p&gt;

&lt;p&gt;SSH into the bastion host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i &amp;lt;your-key.pem&amp;gt; user@&amp;lt;bastion-public-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the bastion host, SSH into the private instance using its private IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i &amp;lt;your-key.pem&amp;gt; user@&amp;lt;private-instance-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
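&lt;p&gt;The two-hop SSH above can also be captured once in an SSH client config so the private instance is reachable in a single command (a sketch; the host aliases, user, key path, and IP addresses are hypothetical placeholders):&lt;/p&gt;

```
# ~/.ssh/config (hypothetical addresses; adjust to your environment)
# First hop: the bastion's public IP
Host bastion
    HostName 203.0.113.10
    User ubuntu
    IdentityFile ~/.ssh/your-key.pem

# Second hop: the private instance's private IP, reached via the bastion
Host private-instance
    HostName 10.0.2.15
    User ubuntu
    ProxyJump bastion
    IdentityFile ~/.ssh/your-key.pem
```

&lt;p&gt;With this in place, &lt;code&gt;ssh private-instance&lt;/code&gt; connects through the bastion automatically.&lt;/p&gt;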



&lt;p&gt;&lt;strong&gt;4. Ping the Main Machine&lt;/strong&gt;&lt;br&gt;
Once you're inside the private instance, you can ping the bastion host's private IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ping &amp;lt;bastion-private-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the security groups and network ACLs are configured correctly, the ping should succeed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Ping the Private Instance from the Main Machine&lt;/strong&gt;&lt;br&gt;
To ping the private instance from the bastion host, use the private IP address of the private instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ping &amp;lt;private-instance-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Troubleshooting Tips&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check Security Groups: Ensure that the security groups for both the bastion host and the private instance allow ICMP traffic.&lt;/li&gt;
&lt;li&gt;Verify Network ACLs: Ensure that the network ACLs for the subnets allow inbound and outbound ICMP traffic.&lt;/li&gt;
&lt;li&gt;Private IP Addresses: Always use private IP addresses for communication within the VPC (Virtual Private Cloud).&lt;/li&gt;
&lt;li&gt;NAT Gateway Configuration: &lt;u&gt;Remember that the NAT gateway only facilitates outbound traffic and does not help with inbound connectivity.&lt;/u&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Pinging the main machine from a private instance (or vice versa) in a cloud environment with a NAT gateway requires careful configuration of security groups, network ACLs, and SSH tunneling. By following the steps outlined in this blog, you can establish the necessary connectivity and troubleshoot any issues that arise.&lt;/p&gt;

&lt;p&gt;This setup is particularly useful for managing private instances in a secure and controlled manner, ensuring that your infrastructure remains protected while still allowing necessary communications.&lt;/p&gt;

&lt;p&gt;Note: Whether you're working with Linux or Windows-based servers, the process remains the same. For Windows servers, additional attention to the Windows Firewall is required, but the core steps are identical.&lt;/p&gt;

&lt;p&gt;Happy networking! 🚀&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>cloudcomputing</category>
      <category>devops</category>
    </item>
    <item>
      <title>Installing Kubernetes using kubeadm without installing docker in Ubuntu</title>
      <dc:creator>Soram Varma</dc:creator>
      <pubDate>Thu, 06 Feb 2025 14:07:31 +0000</pubDate>
      <link>https://dev.to/soram/installing-kubernetes-using-kubeadm-without-installing-docker-in-ubuntu-19bo</link>
      <guid>https://dev.to/soram/installing-kubernetes-using-kubeadm-without-installing-docker-in-ubuntu-19bo</guid>
      <description>&lt;p&gt;First of all, what is Kubernetes?&lt;br&gt;
Kubernetes (K8s) is an orchestration tool that automates the deployment, scaling, and management of containerized applications. It provides key features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scaling&lt;/strong&gt;: Automatically scales applications up or down based on demand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Updating:&lt;/strong&gt; Manages rolling updates and rollbacks to ensure seamless application updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability:&lt;/strong&gt; Ensures applications run without downtime by distributing containers across multiple nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Balancing:&lt;/strong&gt; Distributes traffic efficiently to maintain performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-Healing:&lt;/strong&gt; Detects and replaces failed containers automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But for any of this to happen, the first step is &lt;em&gt;to install it&lt;/em&gt;.&lt;br&gt;
Below are the steps to achieve it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Update packages and install prerequisites&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2. Enable IPv4 packet forwarding&lt;/strong&gt;&lt;br&gt;
These sysctl params are required by the setup and persist across reboots:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

# Verify it; the output should be 1
sudo sysctl net.ipv4.ip_forward
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. Add the Kubernetes apt repository&lt;/strong&gt;&lt;br&gt;
The &lt;code&gt;/etc/apt/keyrings&lt;/code&gt; directory is usually present already; if not, create it with the right permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p -m 755 /etc/apt/keyrings

# Download the public signing key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;4. Add the CRI-O repository&lt;/strong&gt;&lt;br&gt;
Since we are installing Kubernetes v1.31, there is no need to install Docker. Instead, we install a container runtime (CRI-O, containerd, or Docker Engine), which provides everything Kubernetes needs to run containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.31/deb/ /" | sudo tee /etc/apt/sources.list.d/cri-o.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;5. Install the packages and bootstrap the cluster&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update

# Install the runtime and the Kubernetes tools
sudo apt-get install -y cri-o kubelet kubeadm kubectl

# Start the CRI-O service
sudo systemctl start crio.service

# Disable swap, otherwise the kubelet will not start
sudo swapoff -a

# Load the bridge netfilter module
sudo modprobe br_netfilter

# Bootstrap the cluster
sudo kubeadm init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;6. Set up kubectl access&lt;/strong&gt;&lt;br&gt;
The commands below are printed by &lt;em&gt;kubeadm init&lt;/em&gt; and need to be run next:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;7. Create the CNI network&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/calico.yaml -O
kubectl apply -f calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;8. Join worker nodes&lt;/strong&gt;&lt;br&gt;
Finally, copy the &lt;code&gt;kubeadm join&lt;/code&gt; command printed by &lt;em&gt;kubeadm init&lt;/em&gt; and run it on each node you want to join. It will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo kubeadm join 192.168.46.157:6443 --token 7xlnp0.9uv4z0qr4wvzhtqn \
    --discovery-token-ca-cert-hash sha256:4a1a412d2e682556df0bf10dc380c744a98eb99e8c927fa58eb025d5ff7dc694
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>kubernetes</category>
      <category>kubeadm</category>
      <category>container</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
