<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abhijeet Mohanan </title>
    <description>The latest articles on DEV Community by Abhijeet Mohanan  (@abhijeetmohanan).</description>
    <link>https://dev.to/abhijeetmohanan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F542751%2Fa9d5c51e-26e9-4aa8-90db-29d9dc446cd8.jpg</url>
      <title>DEV Community: Abhijeet Mohanan </title>
      <link>https://dev.to/abhijeetmohanan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abhijeetmohanan"/>
    <language>en</language>
    <item>
      <title>A Pod with Public IP</title>
      <dc:creator>Abhijeet Mohanan </dc:creator>
      <pubDate>Tue, 16 Sep 2025 18:44:31 +0000</pubDate>
      <link>https://dev.to/abhijeetmohanan/a-pod-with-public-ip-59m7</link>
      <guid>https://dev.to/abhijeetmohanan/a-pod-with-public-ip-59m7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Did you know? Pods can have public IP address and a dedicated security group&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Recently I was working on deploying telephony systems on AWS EKS. While doing so, I realized that running media servers based on RTP (Real-time Transport Protocol) behind a NAT gateway is not feasible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem
&lt;/h3&gt;

&lt;p&gt;For RTP to work, both ends must be able to communicate directly over a network (Internet / intranet).&lt;br&gt;
The AWS NAT Gateway does not allow ingress traffic to flow in, which makes it impossible to handle real-time streaming from a private subnet. &lt;a href="https://voip-sip-sdk.com/p_7088-how-to-work-with-rtp-in-voip-sip-calls.html" rel="noopener noreferrer"&gt;image ref&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg42y9z48etle7vmykvfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg42y9z48etle7vmykvfj.png" alt="RTP Flow in a actual phone call " width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  An Easy Solution:
&lt;/h3&gt;

&lt;p&gt;An easy solution would be to make an EKS worker node public, open up the security groups, and voila – things would start working. &lt;/p&gt;
&lt;h4&gt;
  
  
  Security issues with this solution:
&lt;/h4&gt;

&lt;p&gt;However, this approach comes with a significant security flaw. RTP, by nature, requires a large range of ports to enable concurrent streaming. To achieve this, around 10,000 ports would need to be whitelisted in the security group; if higher concurrency is required, even more ports would be necessary.&lt;br&gt;
This range overlaps with the ports used by NodePort services, potentially granting unwanted access to your environment.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Ideal Solution:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The pod has a Public static IP&lt;/li&gt;
&lt;li&gt;The pod has a dedicated network interface &lt;/li&gt;
&lt;li&gt;The pod has a dedicated Security Group&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the above conditions are met, the environment becomes reliable and secure.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to achieve this in AWS EKS
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS EKS Cluster with AWS VPC-CNI&lt;/li&gt;
&lt;li&gt;Elastic IP&lt;/li&gt;
&lt;li&gt;Security Group&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 1: Enabling a dedicated network interface for pods
&lt;/h3&gt;

&lt;p&gt;The first step is to configure &lt;a href="https://github.com/aws/amazon-vpc-cni-k8s" rel="noopener noreferrer"&gt;VPC-CNI&lt;/a&gt; to assign a dedicated network interface to a pod.&lt;/p&gt;

&lt;p&gt;To do so, add the following environment variable to the aws-node DaemonSet&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ENABLE_POD_ENI": "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using the AWS-managed VPC-CNI add-on, add the following to its advanced configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ENABLE_POD_ENI"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this change is implemented, the CNI (VPC-CNI) is ready to assign dedicated network interfaces to pods.&lt;br&gt;
By default nothing will change: all existing and newly created pods will run and behave as expected.&lt;/p&gt;
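&lt;p&gt;Equivalently, the variable can be set from the CLI. This is a sketch, assuming &lt;code&gt;kubectl&lt;/code&gt; is configured against the target cluster:&lt;br&gt;
&lt;/p&gt;

```shell
# Set ENABLE_POD_ENI on the aws-node DaemonSet (self-managed VPC-CNI).
kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true

# Verify that the variable is present.
kubectl -n kube-system get daemonset aws-node \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="ENABLE_POD_ENI")].value}'
```

&lt;p&gt;Once enabled, the VPC resource controller attaches a trunk interface to eligible nodes, from which the per-pod branch interfaces are created.&lt;/p&gt;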
&lt;h3&gt;
  
  
  Step 2: Assigning a security group to the pod
&lt;/h3&gt;

&lt;p&gt;AWS offers a CRD (custom resource definition), &lt;code&gt;vpcresources.k8s.aws/v1beta1&lt;/code&gt;, with which a &lt;code&gt;SecurityGroupPolicy&lt;/code&gt; can be created as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;cat &amp;gt; securitygrouppolicy.yaml &amp;lt;&amp;lt;EOF&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vpcresources.k8s.aws/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SecurityGroupPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;media-stream-sg-policy&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myns&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;media-servers&lt;/span&gt;
  &lt;span class="na"&gt;securityGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;groupIds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sg-e3edxxxxx&lt;/span&gt;
&lt;span class="s"&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The security group &lt;code&gt;sg-e3edxxxxx&lt;/code&gt; must already exist in the VPC.&lt;/p&gt;
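&lt;p&gt;Its existence (and its ingress rules, e.g. the RTP/UDP port range) can be checked beforehand. A sketch, assuming the AWS CLI is configured for the right account and region, with &lt;code&gt;sg-e3edxxxxx&lt;/code&gt; standing in for your real group ID:&lt;br&gt;
&lt;/p&gt;

```shell
# Confirm the security group exists and inspect its ingress rules.
aws ec2 describe-security-groups \
  --group-ids sg-e3edxxxxx \
  --query "SecurityGroups[].{Id:GroupId,Name:GroupName,Ingress:IpPermissions}" \
  --output json
```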

&lt;p&gt;Apply the manifest&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; securitygrouppolicy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create a pod with label&lt;/strong&gt;  &lt;code&gt;app: media-servers&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;cat &amp;gt; pod.yaml &amp;lt;&amp;lt;EOF&lt;/span&gt;
&lt;span class="c1"&gt;# This is a sample pod &lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;media-servers&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;media-servers-pod&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myns&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;public.ecr.aws/docker/library/nginx:stable-alpine&lt;/span&gt;
&lt;span class="s"&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the manifest&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the pod is in the Running state, get its IP address.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# To get the IP address &lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; myns &lt;span class="nt"&gt;-o&lt;/span&gt; wide

&lt;span class="c"&gt;# output &lt;/span&gt;
&lt;span class="c"&gt;#NAME                  READY   STATUS              RESTARTS   AGE     IP             NODE                                         NOMINATED NODE   READINESS GATES&lt;/span&gt;
&lt;span class="c"&gt;#media-servers-pod     1/1     Running             0          5m57s   110.46.35.231  ip-110-46-4-191.eu-west-1.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the IP address and look it up on the AWS Console &lt;strong&gt;EC2 &amp;gt; Network &amp;amp; Security &amp;gt; Network interfaces&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This IP will now be associated with a dedicated network interface, which also has the security group attached to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now you have a pod with a dedicated network interface and a security group attached to it&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note the interface ID.&lt;/li&gt;
&lt;/ul&gt;
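&lt;p&gt;The same lookup can be done from the CLI instead of the console. A sketch, assuming the AWS CLI is configured and using the pod IP from the previous step:&lt;br&gt;
&lt;/p&gt;

```shell
POD_IP="110.46.35.231"  # the IP from "kubectl get pods -o wide"

# Find the branch network interface carrying the pod IP,
# along with the security groups attached to it.
aws ec2 describe-network-interfaces \
  --filters "Name=private-ip-address,Values=$POD_IP" \
  --query "NetworkInterfaces[].{Id:NetworkInterfaceId,SGs:Groups[].GroupId}" \
  --output json
```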

&lt;h3&gt;
  
  
  Step 3: Assigning an Elastic IP to the pod
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;By default, a public IP is not associated with the network interface. You also need a set of static IPs so that the external world can communicate with the pod [ helps with whitelisting ].&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Go to the AWS Console: &lt;strong&gt;EC2 &amp;gt; Elastic IP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Allocate a new IP or use an existing one.&lt;/p&gt;

&lt;p&gt;Once the Elastic IP is allocated, associate it with the network interface obtained in Step 2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's make it simpler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following commands to perform the association:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;ALLOCATION_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"eipalloc-xxxx"&lt;/span&gt; &lt;span class="c"&gt;# Get it from the Elastic IP Description&lt;/span&gt;
&lt;span class="nv"&gt;POD_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"110.46.35.231"&lt;/span&gt; 
&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;

aws ec2 associate-address &lt;span class="nt"&gt;--allocation-id&lt;/span&gt; &lt;span class="nv"&gt;$ALLOCATION_ID&lt;/span&gt; &lt;span class="nt"&gt;--network-interface-id&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;aws ec2 describe-network-interfaces  &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="s2"&gt;"Name=private-ip-address,Values=&lt;/span&gt;&lt;span class="nv"&gt;$POD_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"NetworkInterfaces[].NetworkInterfaceId"&lt;/span&gt;  &lt;span class="nt"&gt;--output&lt;/span&gt; text &lt;span class="nt"&gt;--no-paginate&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;--private-ip-address&lt;/span&gt; &lt;span class="nv"&gt;$POD_IP&lt;/span&gt; &lt;span class="nt"&gt;--allow-reassociation&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above commands can be run in an &lt;code&gt;init-container&lt;/code&gt; to make sure the IP is always bound to the pod.&lt;/p&gt;
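&lt;p&gt;A minimal sketch of such an init container, assuming an image that ships the AWS CLI (e.g. &lt;code&gt;public.ecr.aws/aws-cli/aws-cli&lt;/code&gt;) and IAM permissions for &lt;code&gt;ec2:DescribeNetworkInterfaces&lt;/code&gt; and &lt;code&gt;ec2:AssociateAddress&lt;/code&gt;; the allocation ID is a placeholder:&lt;br&gt;
&lt;/p&gt;

```yaml
# Hypothetical init container; add under spec: of the pod manifest.
initContainers:
  - name: associate-eip
    image: public.ecr.aws/aws-cli/aws-cli:latest
    command:
      - /bin/sh
      - -c
      - |
        ENI_ID=$(aws ec2 describe-network-interfaces \
          --filters "Name=private-ip-address,Values=$POD_IP" \
          --query "NetworkInterfaces[0].NetworkInterfaceId" --output text)
        aws ec2 associate-address --allocation-id "eipalloc-xxxx" \
          --network-interface-id "$ENI_ID" \
          --private-ip-address "$POD_IP" --allow-reassociation
    env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: AWS_REGION
        value: eu-west-1
```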

&lt;h2&gt;
  
  
  Documents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/security-groups-pods-deployment.html" rel="noopener noreferrer"&gt;Configure the Amazon VPC CNI plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws/amazon-vpc-resource-controller-k8s/blob/master/config/crd/bases/vpcresources.k8s.aws_securitygrouppolicies.yaml" rel="noopener noreferrer"&gt;CRD - vpcresources.k8s.aws/v1beta1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-ec2-elastic-ip.html" rel="noopener noreferrer"&gt;Allocate and associate Elastic IP addresses&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>networking</category>
    </item>
    <item>
      <title>Deploying kubernetes on containers using kind</title>
      <dc:creator>Abhijeet Mohanan </dc:creator>
      <pubDate>Fri, 05 Feb 2021 11:15:30 +0000</pubDate>
      <link>https://dev.to/abhijeetmohanan/deploying-kubernetes-on-containers-using-kind-27l5</link>
      <guid>https://dev.to/abhijeetmohanan/deploying-kubernetes-on-containers-using-kind-27l5</guid>
      <description>&lt;p&gt;&lt;em&gt;Kind is an installation tool used to deploy kubernetes cluster in containers.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;We all need a simple, easy method to deploy kubernetes for testing and practice.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From the Documentation&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;kind is a tool for running local Kubernetes clusters using Docker container “nodes”.&lt;br&gt;
kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  List of Contents
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Installing Docker&lt;/li&gt;
&lt;li&gt;Getting kind binary&lt;/li&gt;
&lt;li&gt;Getting kubectl binary&lt;/li&gt;
&lt;li&gt;Deploying a single node cluster&lt;/li&gt;
&lt;li&gt;Deploying a multi-node cluster&lt;/li&gt;
&lt;li&gt;Deploying a multi-master cluster&lt;/li&gt;
&lt;li&gt;Accessing Kubernetes Cluster&lt;/li&gt;
&lt;li&gt;Deploying and Accessing multiple clusters&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Let's Get Started!
&lt;/h4&gt;
&lt;h5&gt;
  
  
  To use kind you need to install Docker
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl start docker &amp;amp;&amp;amp; sudo systemctl enable docker
sudo usermod -aG docker $USER
sudo systemctl restart docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;If you are facing an issue while installing docker please refer &lt;a href="https://docs.docker.com/engine/install/"&gt;Install Docker&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Log out and log in once, then fire the command &lt;code&gt;docker ps&lt;/code&gt;; if the command executes successfully, Docker is ready for use.&lt;/p&gt;
&lt;h4&gt;
  
  
  Create a Directory
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir $HOME/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Add the following line to the .bashrc in your home directory
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export PATH=$PATH:$HOME/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Getting the kind binary
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.10.0/kind-linux-amd64
chmod +x kind
mv kind $HOME/bin/kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Log out and log in once, then fire the command below; it should print the installed version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind --version 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Getting kubectl binary
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
mv kubectl $HOME/bin/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log out and log in once, then fire the command below; it should print the installed version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating clusters using kind:
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Deploying a single node cluster&lt;/strong&gt; &lt;br&gt;
&lt;code&gt;&amp;lt;your_cluster_name&amp;gt; replace with a name you wish&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --name &amp;lt;your_cluster_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;If you want to access your cluster, refer to Accessing Kubernetes Cluster below&lt;/code&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Deploying a multi node cluster&lt;/strong&gt;&lt;br&gt;
kind can be provided with a config file where the number of nodes and their roles can be specified.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h6&gt;
  
  
  config.yaml
&lt;/h6&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fire the command below to deploy the cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --config config.yaml --name &amp;lt;your_cluster_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A multi-master Kubernetes cluster can be deployed by editing the config.yaml file&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fire the command below to deploy the multi-master Kubernetes cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --config config.yaml --name &amp;lt;your_cluster_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Accessing The Kubernetes cluster
&lt;/h4&gt;

&lt;p&gt;The file &lt;code&gt;$HOME/.kube/config&lt;/code&gt; is created by default, so run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Deploying and accessing multiple clusters
&lt;/h4&gt;

&lt;p&gt;You can specify different names for different clusters&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --config config.yaml --name &amp;lt;your_cluster_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These clusters can be managed using configuration files generated with the commands below.&lt;br&gt;
&lt;code&gt;&amp;lt;Cluster_config_filename&amp;gt; replace this parameter with a filename&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind get clusters \\ This will list all the clusters that are running

kind get kubeconfig --name &amp;lt;your_cluster_name&amp;gt; &amp;gt;&amp;gt; &amp;lt;Cluster_config_filename&amp;gt;

export KUBECONFIG=&amp;lt;Cluster_config_filename&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your kubectl is configured to use the specific cluster&lt;/p&gt;
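&lt;p&gt;kind also registers each cluster as a context named &lt;code&gt;kind-&amp;lt;your_cluster_name&amp;gt;&lt;/code&gt; in the default kubeconfig, so clusters can be switched without exporting KUBECONFIG:&lt;br&gt;
&lt;/p&gt;

```shell
# Contexts created by kind are prefixed with "kind-".
kubectl config get-contexts
kubectl config use-context kind-&amp;lt;your_cluster_name&amp;gt;
```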

&lt;p&gt;There are various configurations that can be done while deploying a kubernetes cluster &lt;/p&gt;

&lt;p&gt;A sample configuration file is given below;&lt;br&gt;
you can use these configurations to your advantage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  # any feature gate can be enabled here with "Name": true
  # or disabled here with "Name": false
  "CSIMigration": true
runtimeConfig:
  "api/alpha": "false"
networking:
  # network configuration for nodes 
  ipFamily: ipv6
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
  disableDefaultCNI: true
  kubeProxyMode: "ipvs"
nodes:
# one node hosting a control plane
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "my-label=true"
- role: worker
  extraMounts:
  # with the extraMounts parameter one can attach persistent volume to the node 
  - hostPath: /path/to/my/files/
    containerPath: /files
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    # optional: set the bind address on the host
    # 0.0.0.0 is the current default
    listenAddress: "127.0.0.1"
    # optional: set the protocol to one of TCP, UDP, SCTP.
    # TCP is the default
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "my-label2=true"
- role: worker
  image: kindest/node:v1.16.4@sha256:b91a2c2317a000f3a783489dfb755064177dbc3a0b2f4147d50f04825d016f55
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The deployment gets ready in a much easier and faster way; you can also save the Docker image and copy it to your local machine for further use.&lt;/em&gt;&lt;/p&gt;
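&lt;p&gt;Saving the node image for reuse can be sketched as follows; the tag &lt;code&gt;v1.20.2&lt;/code&gt; is an example (the default node image for kind v0.10.0) and should match your kind version:&lt;br&gt;
&lt;/p&gt;

```shell
# Export the kind node image to a tarball...
docker save kindest/node:v1.20.2 -o kind-node.tar
# ...then load it on another machine.
docker load -i kind-node.tar
```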

</description>
      <category>kubernetes</category>
      <category>testing</category>
      <category>kind</category>
    </item>
  </channel>
</rss>
