<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: A-LPHARM</title>
    <description>The latest articles on DEV Community by A-LPHARM (@alpharm).</description>
    <link>https://dev.to/alpharm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1148270%2F4b3d8de8-401b-4637-aa42-cc34cdb58bdf.jpeg</url>
      <title>DEV Community: A-LPHARM</title>
      <link>https://dev.to/alpharm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alpharm"/>
    <language>en</language>
    <item>
      <title>Exploring Cilium Network Integration with AWS EKS</title>
      <dc:creator>A-LPHARM</dc:creator>
      <pubDate>Fri, 13 Sep 2024 01:23:18 +0000</pubDate>
      <link>https://dev.to/alpharm/exploring-cilium-network-integration-with-aws-eks-4apn</link>
      <guid>https://dev.to/alpharm/exploring-cilium-network-integration-with-aws-eks-4apn</guid>
<description>&lt;p&gt;Today, I explore how the Cilium network works by integrating it into AWS EKS, which has been quite intriguing. Creating and managing clusters with Cilium improves network connectivity; it acts as a network superhero, and we can leverage the add-on modules provided by EKS.&lt;br&gt;
   In this blog post, we will explore and test how to integrate the Cilium networking add-on directly with EKS. In the next blog, we will dive into the newer, more flexible AWS EKS cluster-creation flow, which promises to simplify this integration.&lt;br&gt;
   Every EKS cluster comes with default networking add-ons (AWS VPC CNI, CoreDNS, and kube-proxy) that enable pod and service operations in EKS clusters. In our cluster deployment, we follow the documentation: the taints provided control the scheduling of application pods to nodes based on the readiness status of Cilium. As of this writing, this is limited to IPv4.&lt;br&gt;
Here is an example of a cluster configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: henry-eks
  region: us-east-1

managedNodeGroups:
- name: ng-1
  desiredCapacity: 2
  privateNetworking: true
  # taint nodes so that application pods are
  # not scheduled/executed until Cilium is deployed.
  # Alternatively, see the note below.
  taints:
   - key: "node.cilium.io/agent-not-ready"
     value: "true"
     effect: "NoExecute"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the cluster is created, the AWS VPC CNI plugin is responsible for setting up the virtual network devices as well as for IP address management via ENIs. Once the Cilium CNI plugin is set up, it attaches eBPF programs to the network devices created by the AWS VPC CNI plugin in order to enforce network policies, perform load balancing, and handle encryption.&lt;/p&gt;

&lt;p&gt;Confirm the AWS VPC CNI version you are running (v1.16.0 here) to guarantee compatibility with Cilium:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl -n kube-system get ds/aws-node -o json | jq -r '.spec.template.spec.containers[0].image'&lt;br&gt;
602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.16.0-eksbuild.1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Before we go deeper into installing Cilium in the EKS cluster, I'll discuss another feature, the AWS ENI (Elastic Network Interface). An ENI is a virtual network interface that can be attached to any node in our cluster, and it supplies the IP addresses that pods need; allocation happens by communicating with the EC2 API. Once Cilium is set up, each node gets a matching Cilium CRD object (&lt;code&gt;ciliumnodes.cilium.io&lt;/code&gt;) named after the node, which carries the ENI parameters derived from the EC2 metadata API (instance ID and VPC information).&lt;/p&gt;
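&lt;p&gt;Conceptually, ENI-based IPAM is a per-node pool of secondary IPs drawn from the attached interfaces. Here is a toy, stdlib-only Go sketch of that idea (the types, ENI IDs, and addresses are made up for illustration; this is not Cilium's actual allocator):&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

// eni models an Elastic Network Interface with a fixed set of
// secondary IPs that can be handed out to pods on the node.
type eni struct {
	id   string
	free []string // unallocated secondary IPs
}

// nodeIPAM allocates pod IPs from the ENIs attached to one node,
// loosely mirroring what an ENI-based IPAM mode does.
type nodeIPAM struct {
	enis []*eni
}

// allocate hands out the next free IP across all attached ENIs.
func (n *nodeIPAM) allocate() (string, error) {
	for _, e := range n.enis {
		if len(e.free) > 0 {
			ip := e.free[0]
			e.free = e.free[1:]
			return ip, nil
		}
	}
	return "", errors.New("no free IPs: attach another ENI")
}

func main() {
	ipam := &nodeIPAM{enis: []*eni{
		{id: "eni-0a", free: []string{"192.168.1.10", "192.168.1.11"}},
		{id: "eni-0b", free: []string{"192.168.2.10"}},
	}}
	for i := 0; i < 3; i++ {
		ip, _ := ipam.allocate()
		fmt.Println("allocated", ip)
	}
	// the pool is now exhausted, so the next request fails
	if _, err := ipam.allocate(); err != nil {
		fmt.Println("error:", err)
	}
}
```

&lt;p&gt;When the pool runs dry, the real allocator attaches another ENI to the instance; the sketch just returns an error to show the boundary.&lt;/p&gt;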

&lt;p&gt;Now Cilium will manage the AWS ENIs instead of the VPC CNI, so the aws-node DaemonSet must be patched to prevent conflicting behavior:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{"spec":{"template":{"spec":{"nodeSelector":{"io.cilium/aws-node-enabled":"true"}}}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it has been patched, we install Cilium on the EKS cluster using Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add cilium https://helm.cilium.io/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, after updating the Cilium repo on the local machine, we install the chart with &lt;code&gt;helm install&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install cilium cilium/cilium --version 1.15.6 \
  --namespace kube-system \
  --set eni.enabled=true \
  --set ipam.mode=eni \
  --set egressMasqueradeInterfaces=eth0 \
  --set routingMode=native

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you created your cluster and did not taint the nodes with &lt;code&gt;node.cilium.io/agent-not-ready&lt;/code&gt;, the unmanaged pods need to be restarted manually to ensure Cilium starts managing them. To do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '&amp;lt;none&amp;gt;' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
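&lt;p&gt;To unpack what that pipeline does: it lists every pod with its &lt;code&gt;hostNetwork&lt;/code&gt; field, keeps the rows where the field prints &amp;lt;none&amp;gt; (pods not on the host network, i.e. the ones Cilium should manage after a restart), and deletes each one. A stdlib-only Go sketch of the same filtering step, with made-up pod names, would look like this:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// restartArgs takes lines shaped like the kubectl custom-columns
// output ("<namespace> <name> <hostNetwork>", where hostNetwork
// prints "<none>" when unset) and returns the delete commands the
// awk/xargs pipeline would run.
func restartArgs(lines []string) []string {
	var cmds []string
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) == 3 && f[2] == "<none>" {
			cmds = append(cmds, fmt.Sprintf("kubectl delete pod -n %s %s", f[0], f[1]))
		}
	}
	return cmds
}

func main() {
	// made-up sample output: host-network pods (true) are skipped
	out := []string{
		"kube-system aws-node-x7k2p true",
		"default tiefighter <none>",
		"default xwing <none>",
	}
	for _, c := range restartArgs(out) {
		fmt.Println(c)
	}
}
```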



&lt;p&gt;Validate the installation. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;here, I tried something different instead of using the documentation, I deploy a Star Wars microservices application, which has three applications: Deathstar, tiefighter, xwing. the Deathstar runs on port 80 as the cluster IP, the Deathstar service provides the landing services to the application as a whole or spaceship. the tie fighter pod and xwing pod represent similar client service, moving on. to run this,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.15.6/examples/minikube/http-sw-app.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cilium represents every pod as an endpoint in the Cilium agent. We can list the endpoints by running this command against the agent pod on each node (pod names will differ in your cluster):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# the first node  
kubectl -n kube-system exec cilium-cxvdh -- cilium-dbg endpoint list
# the second node 
kubectl -n kube-system exec cilium-lsxht -- cilium-dbg endpoint list

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This lists all the endpoints on each node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-cxvdh   1/1     Running   0          119m
cilium-lsxht   1/1     Running   0          119m

$ kubectl -n kube-system exec cilium-lsxht -- cilium-dbg endpoint list
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6   IPv4             STATUS
           ENFORCEMENT        ENFORCEMENT
143        Disabled           Disabled          54154      k8s:app.kubernetes.io/name=tiefighter                                           192.168.92.83    ready 

                                                           k8s:class=tiefighter

                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default

                                                           k8s:io.cilium.k8s.policy.cluster=default

                                                           k8s:io.cilium.k8s.policy.serviceaccount=default

                                                           k8s:io.kubernetes.pod.namespace=default

                                                           k8s:org=empire

147        Disabled           Disabled          4849       k8s:app.kubernetes.io/name=xwing                                                192.168.85.248   ready 

                                                           k8s:class=xwing

                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default

                                                           k8s:io.cilium.k8s.policy.cluster=default

                                                           k8s:io.cilium.k8s.policy.serviceaccount=default

                                                           k8s:io.kubernetes.pod.namespace=default

                                                           k8s:org=alliance

735        Disabled           Disabled          4          reserved:health                                                                 192.168.75.160   ready 

1832       Disabled           Disabled          1          k8s:alpha.eksctl.io/cluster-name=henry-eks-app                                                   ready 

                                                           k8s:alpha.eksctl.io/nodegroup-name=henry-eks-1

                                                           k8s:eks.amazonaws.com/capacityType=ON_DEMAND

                                                           k8s:eks.amazonaws.com/nodegroup-image=ami-057ddb600f3bba07e

                                                           k8s:eks.amazonaws.com/nodegroup=henry-eks-1

                                                           k8s:eks.amazonaws.com/sourceLaunchTemplateId=lt-07908f6a5332b214e

                                                           k8s:eks.amazonaws.com/sourceLaunchTemplateVersion=1

                                                           k8s:node.kubernetes.io/instance-type=t2.medium

                                                           k8s:topology.kubernetes.io/region=us-east-1

                                                           k8s:topology.kubernetes.io/zone=us-east-1b

                                                           reserved:host

2579       Disabled           Disabled          6983       k8s:app.kubernetes.io/name=deathstar                                            192.168.79.49    ready 

                                                           k8s:class=deathstar

                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default

                                                           k8s:io.cilium.k8s.policy.cluster=default

                                                           k8s:io.cilium.k8s.policy.serviceaccount=default

                                                           k8s:io.kubernetes.pod.namespace=default

                                                           k8s:org=empire

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm the pods can reach the service: since no rules are attached to the xwing and tiefighter pods yet, both are allowed to connect and request landing, regardless of the &lt;code&gt;org=empire&lt;/code&gt; label:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
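&lt;p&gt;As a next step, we could see enforcement in action by applying an L3/L4 policy. The sketch below follows the policy shipped with the Cilium Star Wars demo (labels and port taken from that demo; adjust them to your own deployment). Once applied, only pods labeled &lt;code&gt;org=empire&lt;/code&gt; may reach the deathstar on port 80:&lt;/p&gt;

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
```

&lt;p&gt;Save it to a file (any name) and &lt;code&gt;kubectl apply -f&lt;/code&gt; it, then rerun the two curl commands: tiefighter still lands, while the xwing request is dropped.&lt;/p&gt;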



&lt;p&gt;Thank you.&lt;/p&gt;

&lt;p&gt;Till next time!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding the Extension of Kubernetes APIs with Custom Resource Definition</title>
      <dc:creator>A-LPHARM</dc:creator>
      <pubDate>Wed, 31 Jul 2024 19:48:23 +0000</pubDate>
      <link>https://dev.to/alpharm/understanding-the-extension-of-kubernetes-apis-with-custom-resource-definition-3k8</link>
      <guid>https://dev.to/alpharm/understanding-the-extension-of-kubernetes-apis-with-custom-resource-definition-3k8</guid>
<description>&lt;p&gt;Kubernetes has always been evolving, and this progress gives developers and operations teams more flexibility and extensibility. One feature I came across is the Custom Resource Definition (CRD). It allows users to create their own resource types within Kubernetes, extending its API, and this flexibility is key in an ecosystem as diverse as that of cloud-native applications. Developers can create a custom resource with all the rules of built-in Kubernetes resources such as Pods, RBAC objects, ReplicaSets, and API Services, and the Kubernetes API server then serves the custom resources alongside them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs04m1z45prbhbe4pxup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs04m1z45prbhbe4pxup.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are Custom Resource Definitions (CRDs)?&lt;/strong&gt;&lt;br&gt;
   A &lt;code&gt;resource&lt;/code&gt; is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind, and a &lt;code&gt;custom resource&lt;/code&gt; is an extension of the Kubernetes API that is not necessarily available in a default Kubernetes installation.&lt;br&gt;
A &lt;code&gt;Custom Resource Definition&lt;/code&gt; is a Kubernetes resource that lets you define a &lt;code&gt;custom resource&lt;/code&gt; in the Kubernetes API.&lt;br&gt;
&lt;strong&gt;What happens when you create a Custom Resource Definition?&lt;/strong&gt;&lt;br&gt;
  When you create the CRD, Kubernetes validates it against the schema defined in the CRD. This includes checking the &lt;code&gt;apiVersion&lt;/code&gt;, &lt;code&gt;kind&lt;/code&gt;, &lt;code&gt;metadata&lt;/code&gt;, spec, and all rules specified in the &lt;code&gt;openAPIV3Schema&lt;/code&gt;. Once validated, it is accepted and stored in the cluster's etcd, which ensures that the state of your custom resources is persistent and can be retrieved and managed across the cluster.&lt;br&gt;
Then the &lt;code&gt;API server&lt;/code&gt; creates new RESTful API endpoints for your custom resource. For example, if you create a CRD for a custom resource called &lt;code&gt;pdfdocs&lt;/code&gt; in the &lt;code&gt;alpharm.henry.com&lt;/code&gt; API group, the API server will create endpoints such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/apis/alpharm.henry.com/v1/pdfdocs&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/apis/alpharm.henry.com/v1/namespaces/{namespace}/pdfdocs/{name}&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These endpoints can be used to perform CRUD operations.&lt;br&gt;
 The resources are then watched by controllers, which act on the information in the resource; once you have a custom controller or operator managing the custom resource, it takes the appropriate action to ensure that the desired state of your custom resources is maintained.&lt;/p&gt;
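&lt;p&gt;The endpoint construction is mechanical: the group, version, plural name, and (optionally) a namespace are joined into a REST path. A small stdlib-only Go sketch of that mapping (illustrative, not client-go's real implementation):&lt;/p&gt;

```go
package main

import "fmt"

// crPath builds the REST path the API server serves for a custom
// resource: the cluster-wide collection when namespace is empty,
// otherwise the namespaced collection (append "/<name>" to address
// a single object).
func crPath(group, version, plural, namespace string) string {
	if namespace == "" {
		return fmt.Sprintf("/apis/%s/%s/%s", group, version, plural)
	}
	return fmt.Sprintf("/apis/%s/%s/namespaces/%s/%s", group, version, namespace, plural)
}

func main() {
	fmt.Println(crPath("alpharm.henry.com", "v1", "pdfdocs", ""))
	fmt.Println(crPath("alpharm.henry.com", "v1", "pdfdocs", "default"))
}
```

&lt;p&gt;For our CRD this yields &lt;code&gt;/apis/alpharm.henry.com/v1/pdfdocs&lt;/code&gt; cluster-wide and &lt;code&gt;/apis/alpharm.henry.com/v1/namespaces/default/pdfdocs&lt;/code&gt; within the default namespace.&lt;/p&gt;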

&lt;h2&gt;
  
  
  Key fields to consider when creating a CRD 
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Version: &lt;code&gt;v1&lt;/code&gt; identifies the version of the custom resource.&lt;/li&gt;
&lt;li&gt;Kind: &lt;code&gt;pdfdoc&lt;/code&gt; is typically the name of the custom resource type.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;spec.names&lt;/code&gt; defines how to refer to your custom resource.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata.name&lt;/code&gt; is the name of the CRD itself.&lt;/li&gt;
&lt;li&gt;Group: &lt;code&gt;alpharm.henry.com&lt;/code&gt; is usually appended to the plural resource name to form the CRD name.&lt;/li&gt;
&lt;li&gt;Plural: &lt;code&gt;pdfdocs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Scope: Namespaced determines whether the CR is created in a namespace or cluster-wide. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;spec.names.kind&lt;/code&gt; determines the kind of CR created from the CRD, just as Deployment or CronJob do for built-in resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Creating the Custom Resource Definition (CRD)&lt;/strong&gt;&lt;br&gt;
Create a manifest file for the CRD (saved here as crd.yaml):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pdfdocs.alpharm.henry.com   #plural_name.the_group_name
spec:
  group: alpharm.henry.com
  names:
    kind: pdfdoc
    plural: pdfdocs
    shortNames:
    - pd
  scope: Namespaced  # this defines where the custom resources instances will be created, namespaced or globally
  versions:
    - name: v1
      served: true  #this determines if this version v1 is used and will be served; if false no one uses it
      storage: true  #this determines if this version will be stored in etcd
      schema:
        openAPIV3Schema:  #let's define our resource schema here
          type: object
          description: 'APIVersion defines the versioned schema of this representation
            of an object. Servers should convert recognized schemas to the latest
            internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
          properties:
            apiVersion:   #our typical kubernetes resource format apiversion, kind, metadata, spec
              type: string
            kind:
              type: string
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
            metadata:
              type: object
            spec:
              type: object
              properties:
                documentName:
                  type: string  #uses array, boolean, integer, number, object, string
                text:
                  type: string


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then create the Custom Resource Definition: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl apply -f crd.yaml 
customresourcedefinition.apiextensions.k8s.io/pdfdocs.alpharm.henry.com created
$ kubectl get crd
NAME                        CREATED AT
pdfdocs.alpharm.henry.com   2024-07-30T12:37:28Z


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When we query for the documents: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl get pdfdocs
No resources found in default namespace.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This signifies that the &lt;code&gt;pdfdocs&lt;/code&gt; resource type is registered, but no pdfdoc objects exist yet.&lt;/p&gt;

&lt;p&gt;Let's create a resource from our custom resource definition:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: alpharm.henry.com/v1
kind: pdfdoc
metadata:
  name: my-document
spec: 
  documentName: build
  text: |
   we are creating a document to test our first crd
   so this can be fun to try out


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then we create the resource: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl apply -f test-manifest.yaml
pdfdoc.alpharm.henry.com/my-document created



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wfn2p8gnxwv4v0692zt.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wfn2p8gnxwv4v0692zt.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we can view it. &lt;br&gt;
   Note that custom resources consume storage space in the same way that &lt;code&gt;ConfigMaps&lt;/code&gt; do, so creating too many custom resources may overload your API server's storage.&lt;br&gt;
Now let's create a custom controller.&lt;br&gt;
To set up a Go environment for the controller, create a directory and initialize a Go module:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mkdir k8s-controller
cd k8s-controller
go mod init k8s-controller


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then add the dependencies:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

go get k8s.io/apimachinery@v0.22.0 k8s.io/client-go@v0.22.0 sigs.k8s.io/controller-runtime@v0.9.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, create a file main.go with the following code.&lt;br&gt;
&lt;a href="https://github.com/A-LPHARM/kubernetes/blob/main/cluster-resource-definition/k8s-controller/main.go" rel="noopener noreferrer"&gt;main.go &lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

package main

import (
 "context"
 "flag"
 "log"
 "os"
 "os/signal"
 "path/filepath"
 "syscall"

 metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 "k8s.io/apimachinery/pkg/runtime"
 "k8s.io/apimachinery/pkg/runtime/schema"
 "k8s.io/apimachinery/pkg/watch"
 "k8s.io/client-go/dynamic"
 "k8s.io/client-go/rest"
 "k8s.io/client-go/tools/cache"
 "k8s.io/client-go/tools/clientcmd"
 "k8s.io/client-go/util/homedir"
)

func main() {
 var kubeconfig string

 if home := homedir.HomeDir(); home != "" {
  kubeconfig = filepath.Join(home, ".kube", "config")
 }

 // Allow the kubeconfig file to be specified via a flag
    flag.StringVar(&amp;amp;kubeconfig, "kubeconfig", kubeconfig, "absolute path to the kubeconfig file")
    flag.Parse()

 config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
 if err != nil {
  log.Println("Falling back to in-cluster config")
  config, err = rest.InClusterConfig()
  if err != nil {
   log.Fatalf("Failed to get in-cluster config: %v", err)
  }
 }

 dynClient, err := dynamic.NewForConfig(config)
 if err != nil {
  log.Fatalf("Failed to create dynamic client: %v", err)
 }

 pdfdoc := schema.GroupVersionResource{Group: "alpharm.henry.com", Version: "v1", Resource: "pdfdocs"}

 informer := cache.NewSharedIndexInformer(
  &amp;amp;cache.ListWatch{
   ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
    return dynClient.Resource(pdfdoc).Namespace("").List(context.Background(), options)
   },
   WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
    return dynClient.Resource(pdfdoc).Namespace("").Watch(context.Background(), options)
   },
  },
  &amp;amp;unstructured.Unstructured{},
  0,
  cache.Indexers{},
 )

 informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
  AddFunc: func(obj interface{}) {
   log.Println("Add event detected:", obj)
  },
  UpdateFunc: func(oldObj, newObj interface{}) {
   log.Println("Update event detected:", newObj)
  },
  DeleteFunc: func(obj interface{}) {
   log.Println("Delete event detected:", obj)
  },
 })

 stop := make(chan struct{})
 defer close(stop)

 go informer.Run(stop)

 if !cache.WaitForCacheSync(stop, informer.HasSynced) {
  log.Fatalf("Timeout waiting for cache sync")
 }

 log.Println("Custom Resource Controller started successfully")

 sigCh := make(chan os.Signal, 1)
 signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
 &amp;lt;-sigCh
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The program defines the path to your kubeconfig file; I set it to the default &lt;code&gt;~/.kube/config&lt;/code&gt;, but users can also specify a kubeconfig path via the flag. This Go program sets up a basic Kubernetes custom-resource controller that listens for changes to my custom resource type, pdfdocs. It uses the client-go library to interact with Kubernetes, falling back to the in-cluster config when no kubeconfig is available, and logs add, update, and delete events for our custom resource.&lt;/p&gt;
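&lt;p&gt;The heart of that wiring is a dispatch table: the informer delivers add, update, and delete notifications, and &lt;code&gt;ResourceEventHandlerFuncs&lt;/code&gt; routes each one to an optional callback. A stdlib-only sketch of the pattern (names are illustrative; the real informer also maintains a synced local cache):&lt;/p&gt;

```go
package main

import "fmt"

// event mimics the three notifications a shared informer delivers.
type event struct {
	kind string // "add", "update", or "delete"
	obj  string // stand-in for the unstructured pdfdoc object
}

// handlers mirrors cache.ResourceEventHandlerFuncs: one optional
// callback per event kind.
type handlers struct {
	onAdd, onUpdate, onDelete func(obj string)
}

// dispatch routes each event to the matching callback, skipping
// kinds with no handler registered.
func (h handlers) dispatch(events []event) {
	for _, e := range events {
		switch e.kind {
		case "add":
			if h.onAdd != nil {
				h.onAdd(e.obj)
			}
		case "update":
			if h.onUpdate != nil {
				h.onUpdate(e.obj)
			}
		case "delete":
			if h.onDelete != nil {
				h.onDelete(e.obj)
			}
		}
	}
}

func main() {
	h := handlers{
		onAdd:    func(o string) { fmt.Println("Add event detected:", o) },
		onUpdate: func(o string) { fmt.Println("Update event detected:", o) },
		onDelete: func(o string) { fmt.Println("Delete event detected:", o) },
	}
	h.dispatch([]event{{"add", "my-document"}, {"update", "my-document"}})
}
```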

&lt;p&gt;&lt;strong&gt;Dockerize the controller&lt;/strong&gt;&lt;br&gt;
To build a Docker image from this Go program, create a Dockerfile: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Use an official Golang image to build the Go application
FROM golang:1.22.5 AS build

# Set the working directory inside the container
WORKDIR /app

# Copy the go.mod and go.sum files and download dependencies
COPY go.mod go.sum ./
RUN go mod download

# Copy the rest of the application source code
COPY . .

# Build the Go application
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o k8s-controller .

# Use a minimal base image for the final container
FROM alpine:3.14

# Copy the built Go binary from the builder stage
COPY --from=build /app/k8s-controller /usr/local/bin/k8s-controller

# Set the entrypoint to the Go application
ENTRYPOINT ["/usr/local/bin/k8s-controller"]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then build the image and push it to your repository: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker build -t henriksin1/k8s-controller:v1 .

#then run 

docker push henriksin1/k8s-controller:v1



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;the docker repo: &lt;a href="https://hub.docker.com/repository/docker/henriksin1/k8s-controller" rel="noopener noreferrer"&gt;https://hub.docker.com/repository/docker/henriksin1/k8s-controller&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up Role-Based Access Control&lt;/strong&gt;&lt;br&gt;
First, create the service account that your controller will use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-controller-sa
  namespace: default


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy the service account: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 kubectl apply -f 1-serviceaccout.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, create a ClusterRole with the necessary permissions and a ClusterRoleBinding to bind that role to the service account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-controller-role
rules:
- apiGroups: ["alpharm.henry.com"]
  resources: ["pdfdocs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy the ClusterRole:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 kubectl apply -f 2-clusterrole.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;ClusterRoleBinding:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-controller-binding
subjects:
- kind: ServiceAccount
  name: k8s-controller-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: k8s-controller-role
  apiGroup: rbac.authorization.k8s.io


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy the ClusterRoleBinding:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 kubectl apply -f 3-clusterrolebinding.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create the Deployment, specifying the service account name for the controller image to use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-controller
  labels:
    app: k8s-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-controller
  template:
    metadata:
      labels:
        app: k8s-controller
    spec:
      serviceAccountName: k8s-controller-sa
      containers:
      - name: k8s-controller
        image: henriksin1/k8s-controller:v1
        args: []


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy the controller: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 kubectl apply -f 4-deployment.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once all resources are running, confirm the controller is working by checking the pod logs: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl logs pod/k8s-controller-675b96c777-cld42
2024/07/31 12:30:42 Falling back to in-cluster config
2024/07/31 12:30:42 Add event detected: &amp;amp;{map[apiVersion:alpharm.henry.com/v1 kind:pdfdoc metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"alpharm.henry.com/v1","kind":"pdfdoc","metadata":{"annotations":{},"name":"my-document","namespace":"default"},"spec":{"documentName":"build","text":"we are creating a document to test our first crd\nso this can be fun to try out\n"}}
] creationTimestamp:2024-07-31T09:44:20Z generation:1 managedFields:[map[apiVersion:alpharm.henry.com/v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:annotations:map[.:map[] f:kubectl.kubernetes.io/last-applied-configuration:map[]]] f:spec:map[.:map[] f:documentName:map[] f:text:map[]]] manager:kubectl-client-side-apply operation:Update time:2024-07-31T09:44:20Z]] name:my-document namespace:default resourceVersion:1315 uid:3fbbf363-aae8-4916-9be5-95b8311c3bdb] spec:map[documentName:build text:we are creating a document to test our first crd
so this can be fun to try out
]]}
2024/07/31 12:30:42 Custom Resource Controller started successfully


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let's modify our pdfdoc and watch the controller pick up the change:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kubectl edit pd my-document
pdfdoc.alpharm.henry.com/my-document edited
[ec2-user@ip-172-31-20-185 k8s-controller]$ kubectl logs pod/k8s-controller-675b96c777-cld42
2024/07/31 12:30:42 Falling back to in-cluster config
2024/07/31 12:30:42 Add event detected: &amp;amp;{map[apiVersion:alpharm.henry.com/v1 kind:pdfdoc metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"alpharm.henry.com/v1","kind":"pdfdoc","metadata":{"annotations":{},"name":"my-document","namespace":"default"},"spec":{"documentName":"build","text":"we are creating a document to test our first crd\nso this can be fun to try out\n"}}
] creationTimestamp:2024-07-31T09:44:20Z generation:1 managedFields:[map[apiVersion:alpharm.henry.com/v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:annotations:map[.:map[] f:kubectl.kubernetes.io/last-applied-configuration:map[]]] f:spec:map[.:map[] f:documentName:map[] f:text:map[]]] manager:kubectl-client-side-apply operation:Update time:2024-07-31T09:44:20Z]] name:my-document namespace:default resourceVersion:1315 uid:3fbbf363-aae8-4916-9be5-95b8311c3bdb] spec:map[documentName:build text:we are creating a document to test our first crd
so this can be fun to try out
]]}
2024/07/31 12:30:42 Custom Resource Controller started successfully
2024/07/31 15:19:06 Update event detected: &amp;amp;{map[apiVersion:alpharm.henry.com/v1 kind:pdfdoc metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"alpharm.henry.com/v1","kind":"pdfdoc","metadata":{"annotations":{},"name":"my-document","namespace":"default"},"spec":{"documentName":"build","text":"we are creating a document to test our first crd\nso this can be fun to try out\n"}}
] creationTimestamp:2024-07-31T09:44:20Z generation:2 managedFields:[map[apiVersion:alpharm.henry.com/v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:annotations:map[.:map[] f:kubectl.kubernetes.io/last-applied-configuration:map[]]] f:spec:map[.:map[] f:documentName:map[]]] manager:kubectl-client-side-apply operation:Update time:2024-07-31T09:44:20Z] map[apiVersion:alpharm.henry.com/v1 fieldsType:FieldsV1 fieldsV1:map[f:spec:map[f:text:map[]]] manager:kubectl-edit operation:Update time:2024-07-31T15:19:06Z]] name:my-document namespace:default resourceVersion:29595 uid:3fbbf363-aae8-4916-9be5-95b8311c3bdb] spec:map[documentName:build text:this is a article about customresourcedefinition what do your think
so this can be fun to try out


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
   Custom Resource Definitions (CRDs) empower Kubernetes users to extend the platform's capabilities, enabling the management of custom application resources using Kubernetes' declarative API. By leveraging CRDs, you can create custom controllers and operators to automate complex workflows and integrate seamlessly with external tools and services. Whether you're managing databases, custom application configurations, or automated workflows, CRDs provide the flexibility and power to tailor Kubernetes to your specific needs.&lt;/p&gt;

&lt;p&gt;Feel free to experiment with CRDs and explore how they can simplify and enhance the management of your applications within Kubernetes.&lt;/p&gt;

&lt;p&gt;See you all next time!&lt;/p&gt;

&lt;p&gt;If you found this blog insightful and want to dive deeper into topics like AWS cloud, Kubernetes, and cloud-native projects, check out my LinkedIn page: &lt;a href="https://www.linkedin.com/in/emeka-henry-uzowulu-38900088/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/emeka-henry-uzowulu-38900088/&lt;/a&gt;&lt;br&gt;
and also my GitHub: &lt;a href="https://github.com/A-LPHARM/" rel="noopener noreferrer"&gt;https://github.com/A-LPHARM/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to share your thoughts and ask any questions.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noopener noreferrer"&gt;Custom Resources | Kubernetes&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="noopener noreferrer"&gt;Controllers | Kubernetes&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Pod Disruption In Kubernetes</title>
      <dc:creator>A-LPHARM</dc:creator>
      <pubDate>Fri, 26 Jul 2024 23:40:37 +0000</pubDate>
      <link>https://dev.to/alpharm/pod-disruption-in-kubernetes-2cmb</link>
      <guid>https://dev.to/alpharm/pod-disruption-in-kubernetes-2cmb</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj99ehd5jkgv33rxox5l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj99ehd5jkgv33rxox5l.jpg" alt="Image description" width="800" height="412"&gt;&lt;/a&gt;&lt;br&gt;
In the world of Kubernetes, ensuring high availability of applications during maintenance and upgrades can be frustrating; it has happened to me many times. As you scale and manage containerized applications, minimizing downtime and maintaining service continuity becomes a huge challenge, especially when a simple human error can cause a service disruption. This is where Pod Disruption Budgets (PDBs) come into play. Today we will dive into the fundamentals of Pod Disruption Budgets and explore how to implement them in your Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WHAT IS A KUBERNETES POD DISRUPTION BUDGET?&lt;/strong&gt;&lt;br&gt;
A Pod Disruption Budget is a Kubernetes resource that defines how many pods of a workload must remain available during a disruption. Whether you're dealing with voluntary disruptions like node maintenance or involuntary disruptions like node failures, PDBs provide a way to safeguard your application's stability and availability. They can be applied to a specific workload, such as a Deployment or StatefulSet, or to a group of workloads using a label selector.&lt;/p&gt;

&lt;p&gt;There are two types of disruption:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Voluntary disruption&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Involuntary disruption&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Voluntary disruptions can happen when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the application owner deletes the Deployment managing the pods&lt;/li&gt;
&lt;li&gt;the Deployment's pod template is updated, which causes a restart&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Involuntary disruptions can happen when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;there is a node failure&lt;/li&gt;
&lt;li&gt;the cluster admin accidentally deletes a node&lt;/li&gt;
&lt;li&gt;there is a kernel-related problem&lt;/li&gt;
&lt;li&gt;there aren't enough resources left on a node&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm78055ykp96hlub8m1dj.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm78055ykp96hlub8m1dj.jpeg" alt="Image description" width="474" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HOW DO POD DISRUPTION BUDGETS WORK?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A PDB tells Kubernetes the desired state you have orchestrated and want enforced: a minimum number of pod replicas must remain available at any given time. When a voluntary disruption occurs, Kubernetes identifies the set of pods covered by the PDB and blocks any further eviction that would violate the budget; it then reschedules the evicted pods using a strategy that doesn't clash with the PDB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Importance of Pod Disruption Budgets (PDBs)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Prevents Unnecessary Downtime: PDBs ensure application availability during disruptions by defining the maximum number of pods that can be taken down simultaneously, preventing complete service outages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensures Graceful Upgrades and Maintenance: During routine maintenance or cluster upgrades, PDBs control disruptions, ensuring that a minimum number of pods are always running to handle user requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enhances Cluster Reliability through Controlled Disruptions: By specifying the number of pod disruptions an application can tolerate, PDBs help maintain reliability and resilience, crucial in large, dynamic environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improves Node Management: Node maintenance, such as patching and upgrades, can be managed effectively without impacting application availability, thanks to PDBs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrates with Cluster Autoscaler : PDBs work seamlessly with the Cluster Autoscaler, automatically adjusting the number of nodes while respecting PDB constraints to ensure application stability during scaling operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are three main fields in a PDB: &lt;br&gt;
&lt;code&gt;.spec.selector&lt;/code&gt; denotes the set of pods to which the PDB applies.&lt;br&gt;
&lt;code&gt;.spec.minAvailable&lt;/code&gt; denotes the minimum number (or percentage) of pods that must remain available after an eviction.&lt;br&gt;
&lt;code&gt;.spec.maxUnavailable&lt;/code&gt; denotes the maximum number (or percentage) of pods that can be unavailable after an eviction.&lt;/p&gt;

&lt;p&gt;Let's create our application using the manifest below, a simple Mario demo app for your clients:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: apps/v1
kind: Deployment
metadata:
  name: games-deployment
spec:
  replicas: 8 # You can adjust the number of replicas as needed
  selector:
    matchLabels:
      app: games
  template:
    metadata:
      labels:
        app: games
    spec:
      containers:
      - name: mario-container
        image: sevenajay/mario:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: games-service
spec:
  type: LoadBalancer
  selector:
    app: games
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the application in your Kubernetes cluster with 8 replicas:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f games-deployment.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP                NODE                             NOMINATED NODE   READINESS GATES
games-deployment-68bcccb748-4gdp4   1/1     Running   0          35m   192.168.126.120   ip-192-168-106-62.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-4hkps   1/1     Running   0          35m   192.168.72.216    ip-192-168-67-92.ec2.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-fm48d   1/1     Running   0          35m   192.168.96.48     ip-192-168-106-62.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-hsp4f   1/1     Running   0          35m   192.168.94.157    ip-192-168-67-92.ec2.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-k26q6   1/1     Running   0          35m   192.168.83.27     ip-192-168-67-92.ec2.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-kksnw   1/1     Running   0          35m   192.168.67.154    ip-192-168-67-92.ec2.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-lh95r   1/1     Running   0          35m   192.168.107.174   ip-192-168-106-62.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-tckkn   1/1     Running   0          35m   192.168.108.223   ip-192-168-106-62.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can list the available nodes:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes -o wide
NAME                             STATUS   ROLES    AGE    VERSION               INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-106-62.ec2.internal   Ready    &amp;lt;none&amp;gt;   149m   v1.30.0-eks-036c24b   192.168.106.62   &amp;lt;none&amp;gt;        Amazon Linux 2023.5.20240701   6.1.94-99.176.amzn2023.x86_64   containerd://1.7.11
ip-192-168-67-92.ec2.internal    Ready    &amp;lt;none&amp;gt;   149m   v1.30.0-eks-036c24b   192.168.67.92    &amp;lt;none&amp;gt;        Amazon Linux 2023.5.20240701   6.1.94-99.176.amzn2023.x86_64   containerd://1.7.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now the application is accessible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;USE-CASE:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let's create our manifest file for the PDB and set &lt;code&gt;minAvailable&lt;/code&gt; to 4, which means that out of the 8 replicas at least 4 must keep running even during a voluntary disruption of the pods; we could express the same budget with &lt;code&gt;maxUnavailable&lt;/code&gt; instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: games-pdb
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: games
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
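
&lt;p&gt;As a sketch (not applied in this walkthrough), the same budget could instead be expressed with &lt;code&gt;maxUnavailable&lt;/code&gt;, allowing at most 4 of the 8 replicas to be down at once; the name &lt;code&gt;games-pdb-max&lt;/code&gt; is just an illustrative choice:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: games-pdb-max   # illustrative name, not part of the original walkthrough
spec:
  maxUnavailable: 4     # with 8 replicas, equivalent to minAvailable: 4
  selector:
    matchLabels:
      app: games
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;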



&lt;p&gt;Apply the pod disruption budget: &lt;br&gt;
&lt;code&gt;kubectl apply -f pdb.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing the PDB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's test the PDB by creating a disruption: drain a node and monitor the PDB in action.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl drain &amp;lt;node-name&amp;gt; --ignore-daemonsets --delete-emptydir-data
kubectl drain ip-192-168-106-62.ec2.internal --ignore-daemonsets --delete-emptydir-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node/ip-192-168-106-62.ec2.internal cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/aws-node-qg5kc, kube-system/kube-proxy-v2n2m
evicting pod default/games-deployment-68bcccb748-tckkn
evicting pod default/games-deployment-68bcccb748-fm48d
evicting pod default/games-deployment-68bcccb748-4gdp4
evicting pod default/games-deployment-68bcccb748-lh95r
pod/games-deployment-68bcccb748-fm48d evicted
pod/games-deployment-68bcccb748-4gdp4 evicted
pod/games-deployment-68bcccb748-lh95r evicted
pod/games-deployment-68bcccb748-tckkn evicted
node/ip-192-168-106-62.ec2.internal drained
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we drain the second node:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl drain ip-192-168-67-92.ec2.internal --ignore-daemonsets --delete-emptydir-data
node/ip-192-168-67-92.ec2.internal cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/aws-node-xp987, kube-system/kube-proxy-nww45
evicting pod default/games-deployment-68bcccb748-hsp4f
evicting pod default/games-deployment-68bcccb748-4n4zl
evicting pod kube-system/coredns-586b798467-p5pg9
evicting pod default/games-deployment-68bcccb748-kd7fb
evicting pod default/games-deployment-68bcccb748-k26q6
evicting pod default/games-deployment-68bcccb748-r7s57
evicting pod kube-system/coredns-586b798467-mpgmq
evicting pod default/games-deployment-68bcccb748-kksnw
evicting pod default/games-deployment-68bcccb748-5v6bn
evicting pod default/games-deployment-68bcccb748-4hkps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Monitor the progress and observe that the eviction API keeps retrying the evictions until the pods can be rescheduled on another node.&lt;/p&gt;

&lt;p&gt;The drain won't complete, due to the PDB constraints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes -o wide
NAME                             STATUS                     ROLES    AGE     VERSION               INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-106-62.ec2.internal   Ready,SchedulingDisabled   &amp;lt;none&amp;gt;   6h20m   v1.30.0-eks-036c24b   192.168.106.62   &amp;lt;none&amp;gt;        Amazon Linux 2023.5.20240701   6.1.94-99.176.amzn2023.x86_64   containerd://1.7.11
ip-192-168-67-92.ec2.internal    Ready,SchedulingDisabled   &amp;lt;none&amp;gt;   6h20m   v1.30.0-eks-036c24b   192.168.67.92    &amp;lt;none&amp;gt;        Amazon Linux 2023.5.20240701   6.1.94-99.176.amzn2023.x86_64   containerd://1.7.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The evicted pods' &lt;code&gt;status&lt;/code&gt; is left as Pending because both nodes are cordoned; as expected, the 4 pods guaranteed by the PDB's &lt;code&gt;minAvailable&lt;/code&gt; are still running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE                            NOMINATED NODE   READINESS GATES  
games-deployment-68bcccb748-2dvsq   0/1     Pending   0          60s     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;                          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-5v6bn   1/1     Running   0          98s     192.168.84.6     ip-192-168-67-92.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-hjx6c   0/1     Pending   0          59s     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;                          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-j9gfs   0/1     Pending   0          59s     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;                          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-k26q6   1/1     Running   0          3h45m   192.168.83.27    ip-192-168-67-92.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-kd7fb   1/1     Running   0          99s     192.168.64.174   ip-192-168-67-92.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-kksnw   1/1     Running   0          3h45m   192.168.67.154   ip-192-168-67-92.ec2.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
games-deployment-68bcccb748-s777t   0/1     Pending   0          60s     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;                          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can uncordon the nodes with &lt;code&gt;kubectl uncordon &amp;lt;node-name&amp;gt;&lt;/code&gt; to make them schedulable again, so the pending pods can be scheduled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT THINGS TO NOTE WHEN USING PDB's&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are some factors to consider when creating PDBs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor the Pod Disruption Budget status (for example with &lt;code&gt;kubectl get pdb&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;A PDB on a single-replica deployment can permanently block &lt;code&gt;kubectl drain&lt;/code&gt;, so avoid PDBs on single-replica deployments.&lt;/li&gt;
&lt;li&gt;When using &lt;code&gt;matchLabels&lt;/code&gt;, be careful to avoid overlapping selectors when deploying multiple PDBs.&lt;/li&gt;
&lt;li&gt;Don't set &lt;code&gt;minAvailable&lt;/code&gt; to 100%; otherwise you can't upgrade your cluster.&lt;/li&gt;
&lt;li&gt;If you set &lt;code&gt;maxUnavailable&lt;/code&gt; to 0, you cannot drain your nodes successfully.&lt;/li&gt;
&lt;li&gt;You can set the &lt;code&gt;.spec.unhealthyPodEvictionPolicy&lt;/code&gt; field to &lt;code&gt;AlwaysAllow&lt;/code&gt; to let Kubernetes evict unhealthy pods when it drains a node.&lt;/li&gt;
&lt;/ul&gt;
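
&lt;p&gt;For example, the &lt;code&gt;.spec.unhealthyPodEvictionPolicy&lt;/code&gt; field from the last point could be added to our PDB like this (a sketch; the field requires a recent Kubernetes version, available in beta since v1.27):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: games-pdb
spec:
  minAvailable: 4
  unhealthyPodEvictionPolicy: AlwaysAllow  # evict not-yet-ready pods without counting them against the budget
  selector:
    matchLabels:
      app: games
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;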

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Pod Disruption Budgets are a powerful Kubernetes resource, critical for keeping workloads available during node maintenance or failures and for minimizing operational disruption, thereby ensuring that applications remain resilient and continue to serve users effectively.&lt;br&gt;
See you all next time!&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
