<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ajit Vedpathak</title>
    <description>The latest articles on DEV Community by Ajit Vedpathak (@ajitvedpathak).</description>
    <link>https://dev.to/ajitvedpathak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F435511%2Fcceb43c3-36d9-4e04-b422-928a32391149.jpg</url>
      <title>DEV Community: Ajit Vedpathak</title>
      <link>https://dev.to/ajitvedpathak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ajitvedpathak"/>
    <language>en</language>
    <item>
      <title>Kubernetes Cluster Over-Provisioning: Proactive App Scaling</title>
      <dc:creator>Ajit Vedpathak</dc:creator>
      <pubDate>Mon, 02 Nov 2020 19:05:46 +0000</pubDate>
      <link>https://dev.to/ajitvedpathak/kubernetes-cluster-over-provisioning-proactive-app-scaling-1jl0</link>
      <guid>https://dev.to/ajitvedpathak/kubernetes-cluster-over-provisioning-proactive-app-scaling-1jl0</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes Cluster Over-Provisioning: Proactive App Scaling
&lt;/h1&gt;

&lt;p&gt;Scalability is a desirable attribute of any system or process, and poor scalability can result in poor system performance. Kubernetes is flexible and dynamic, but when it came to scaling we ran into a few challenges with application pod scaling that ended up having an adverse impact on application performance. In this blog, we will walk through the steps to overprovision a k8s cluster for scaling and failover.&lt;/p&gt;

&lt;h1&gt;
  
  
  The need for cluster overprovisioning
&lt;/h1&gt;

&lt;p&gt;Let's say we have a Kubernetes cluster running with X worker nodes, all of them at full capacity, meaning there is no room left on any node for incoming pods. Everything works fine so far: the applications running on the cluster respond in time (&lt;strong&gt;we scale applications on the basis of Memory/CPU usage&lt;/strong&gt;). Then the load/traffic on an application suddenly increases. As the load grows, the application's Memory/CPU consumption grows with it, and once the scaling metric crosses its threshold, Kubernetes starts scaling the application horizontally by adding new pods to the cluster. But all the newly created pods go into a &lt;code&gt;Pending&lt;/code&gt; state, because all our worker nodes are already running at full capacity.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noopener noreferrer"&gt;Horizontal Pod Autoscaler&lt;/a&gt; creates additional pods when the need for them arises. But what happens when all the nodes in the cluster are at full capacity and can’t run any more pods?&lt;br&gt;
The &lt;a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#basics" rel="noopener noreferrer"&gt;Cluster Autoscaler&lt;/a&gt; takes care of automatically provisioning additional nodes when it notices a pod that can’t be scheduled on the existing nodes because of a lack of resources. In other words, a new node is provisioned when a pod is created and the Scheduler can’t fit it onto any of the existing nodes. The Cluster Autoscaler looks for such pods and signals the cloud provider (&lt;strong&gt;e.g. AWS&lt;/strong&gt;) to spin up an additional node, and &lt;strong&gt;this is exactly where the problem lies&lt;/strong&gt;: &lt;code&gt;provisioning a new node can take some time (a minute or more) before it appears in the Kubernetes cluster.&lt;/code&gt; The delay depends almost entirely on the cloud provider &lt;strong&gt;(we are using AWS)&lt;/strong&gt; and its node-provisioning speed, so &lt;strong&gt;it may take some time until the new pods can be scheduled.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Any pod in the &lt;code&gt;Pending&lt;/code&gt; state has to wait until a node is added to the cluster. This is not an ideal situation for applications that need to scale quickly as the request/load increases, since any delay in scaling can hinder application performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  To overcome the problem, we considered the two solutions below
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Fix the number of extra nodes in the cluster&lt;/li&gt;
&lt;li&gt;Make use of &lt;a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="noopener noreferrer"&gt;pod-priority-preemption&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Fix the number of extra nodes in the cluster
&lt;/h3&gt;

&lt;p&gt;One idea was to add a fixed number of extra nodes to the cluster that would always be available, waiting, and ready to accept new pods. That way our apps could always scale up without having to wait for AWS to create a new EC2 instance and join it to the Kubernetes cluster, and without pods landing in a &lt;code&gt;Pending&lt;/code&gt; state. That was exactly what we wanted.&lt;br&gt;
But this was a temporary solution that was neither cost-effective nor efficient. The overprovisioning was also not dynamic: the number of fixed extra nodes would not change as the cluster grew or shrank, so we eventually ran into the same issue again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make use of &lt;a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="noopener noreferrer"&gt;pod-priority-preemption&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The second solution was difficult to implement because deciding the pod priorities of different applications was hard: we had multiple applications that are sensitive to such scaling delays, and any delay in scaling any of them would hinder its performance.&lt;/p&gt;

&lt;p&gt;While searching for a better approach, we came across a tool that makes use of the cluster autoscaler to overscale the cluster.&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction to Horizontal Cluster Proportional Autoscaler
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwsv5cufj9m3t1kade3rb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwsv5cufj9m3t1kade3rb.png" alt="Placeholder pods"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;The horizontal cluster proportional autoscaler watches the number of schedulable nodes and cores in the cluster and resizes the number of replicas of the target resource accordingly.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If we want to configure dynamic overprovisioning of a cluster (e.g. 20% of resources in the cluster) then we need to use &lt;a href="https://github.com/kubernetes-sigs/cluster-proportional-autoscaler" rel="noopener noreferrer"&gt;Horizontal Cluster Proportional Autoscaler&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It provides a simple control loop that watches the cluster size and scales a target controller. To overscale the cluster, we create pods that occupy space in the cluster and do nothing. We call these pods &lt;strong&gt;placeholder pods&lt;/strong&gt; (we will use a paused-pod replication controller as the target controller). Having these pods occupy extra space in the cluster gives us spare resources that are already provisioned and ready to be used at any time.&lt;/p&gt;

&lt;p&gt;But how are we going to use this extra space? This is where Kubernetes helps us with &lt;a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="noopener noreferrer"&gt;&lt;strong&gt;pod-priority-preemption&lt;/strong&gt;&lt;/a&gt;. The placeholder pods (paused pods) are assigned a lower priority than the actual workload. Since they have a lower priority than regular pods, Kubernetes compares the priorities of all pods and evicts the lower-priority ones as soon as resources become scarce. The &lt;strong&gt;placeholder pods&lt;/strong&gt; then go into a &lt;code&gt;Pending&lt;/code&gt; state, to which the cluster autoscaler reacts by adding new nodes to the cluster.&lt;/p&gt;
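
&lt;p&gt;To make this concrete, below is a minimal sketch of a placeholder deployment backed by a negative-priority PriorityClass. The names, replica count, and resource requests are illustrative assumptions, not taken from the helm chart referenced later in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning         # illustrative name
value: -1                        # lower than the default (0), so these pods are evicted first
globalDefault: false
description: "Priority class for placeholder (pause) pods."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: placeholder-pods         # illustrative name
spec:
  replicas: 1                    # the cluster proportional autoscaler will resize this
  selector:
    matchLabels:
      app: placeholder-pods
  template:
    metadata:
      labels:
        app: placeholder-pods
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # does nothing; it only reserves resources
        resources:
          requests:
            cpu: "2"
            memory: 2Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;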

&lt;p&gt;&lt;code&gt;The Cluster Proportional Autoscaler increases the number of placeholder-pod replicas when the cluster grows and decreases it when the cluster shrinks.&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Scaling with over-provisioning: what happens under the hood
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyz6umfzyyndkekqqmiwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyz6umfzyyndkekqqmiwy.png" alt="Cluster Proportional Autoscaler"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Load hits the cluster.&lt;/li&gt;
&lt;li&gt;Kubernetes starts scaling application pods horizontally by adding new pods.&lt;/li&gt;
&lt;li&gt;Kube-scheduler tries to place the newly created application pods but finds insufficient resources.&lt;/li&gt;
&lt;li&gt;Placeholder pods (pause pods in our case) get evicted because of their low priority, freeing up room for the application pods.&lt;/li&gt;
&lt;li&gt;The application pods get placed, and scaling happens immediately, without delay.&lt;/li&gt;
&lt;li&gt;The placeholder pods go into a Pending state and cannot be scheduled due to insufficient resources.&lt;/li&gt;
&lt;li&gt;The cluster autoscaler watches the pending pods and scales the cluster by adding new nodes.&lt;/li&gt;
&lt;li&gt;Kube-scheduler waits for the new instance to be provisioned, boot, join the cluster, and become ready.&lt;/li&gt;
&lt;li&gt;Kube-scheduler notices there is a new node in the cluster where pods can be placed and schedules the placeholder pods onto it.&lt;/li&gt;
&lt;/ol&gt;
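
&lt;p&gt;If you want to observe this sequence live, you can watch the placeholder pods being evicted and the nodes being added. The label selector below is an assumption; use whatever labels your placeholder deployment actually sets:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# watch placeholder pods get evicted and re-created as Pending
$ kubectl get pods -l app=placeholder-pods --watch

# watch the cluster autoscaler add new nodes in response
$ kubectl get nodes --watch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;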

&lt;h1&gt;
  
  
  Implementation
&lt;/h1&gt;

&lt;p&gt;&lt;code&gt;Note: change the pod priority cutoff in the Cluster Autoscaler to -10 so that pause pods are considered during scale-down and scale-up. Set the flag --expendable-pods-priority-cutoff=-10.&lt;/code&gt;&lt;/p&gt;
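
&lt;p&gt;For reference, this flag goes into the container args of the Cluster Autoscaler deployment. A trimmed sketch, in which the image tag and the other args are illustrative and should match whatever your setup already uses:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# excerpt from a cluster-autoscaler Deployment spec (other args omitted)
containers:
- name: cluster-autoscaler
  image: k8s.gcr.io/cluster-autoscaler:v1.17.4   # illustrative version
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --expendable-pods-priority-cutoff=-10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;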

&lt;p&gt;&lt;a href="https://github.com/avedpathak/Overprovisioner.git" rel="noopener noreferrer"&gt;&lt;strong&gt;This helm chart&lt;/strong&gt;&lt;/a&gt; helps you to deploy the placeholder-pods (which will occupy extra space in the cluster) and cluster proportional autoscaler deployment (for dynamic overprovisioning).&lt;/p&gt;

&lt;p&gt;List of Kubernetes resources that gets created with &lt;a href="https://github.com/avedpathak/Overprovisioner.git" rel="noopener noreferrer"&gt;&lt;strong&gt;helm chart&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Placeholder-pod (paused-pod) deployment&lt;/li&gt;
&lt;li&gt;Cluster proportional autoscaler deployment&lt;/li&gt;
&lt;li&gt;ServiceAccount&lt;/li&gt;
&lt;li&gt;ClusterRole&lt;/li&gt;
&lt;li&gt;ClusterRoleBinding&lt;/li&gt;
&lt;li&gt;ConfigMap&lt;/li&gt;
&lt;li&gt;Priority class for the paused pods with a priority value of -1&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;a href="https://github.com/avedpathak/Overprovisioner.git" rel="noopener noreferrer"&gt;&lt;strong&gt;helm chart&lt;/strong&gt;&lt;/a&gt; uses the following configuration to overscale the cluster: &lt;br&gt;
&lt;strong&gt;1 replica of placeholder-pod per node&lt;/strong&gt; (with a 2-core CPU and 2 Gi memory request). For example, if the Kubernetes cluster is running with 15 nodes, then there will be 15 replicas of the placeholder pod in the cluster.&lt;/p&gt;
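
&lt;p&gt;With the Cluster Proportional Autoscaler's linear control pattern, "1 replica per node" can be expressed with a ConfigMap like the one below. The ConfigMap name is an assumption; the chart ships its own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: overprovisioner-config   # illustrative name
data:
  linear: |-
    {
      "nodesPerReplica": 1,
      "min": 1
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;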

&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/cluster-proportional-autoscaler#horizontal-cluster-proportional-autoscaler-container" rel="noopener noreferrer"&gt;&lt;strong&gt;This link describes the different control patterns and ConfigMap formats&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Cluster overprovisioning is needed when applications running in the cluster are sensitive to scaling delays and cannot wait for new nodes to be created and joined to the cluster before scaling.&lt;/p&gt;

&lt;p&gt;With cluster overprovisioning in place, it takes only a few seconds to scale an application horizontally, maintaining its performance as usage, traffic, or load increases.&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/cluster-proportional-autoscaler" rel="noopener noreferrer"&gt;https://github.com/kubernetes-sigs/cluster-proportional-autoscaler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/cluster-proportional-autoscaler#horizontal-cluster-proportional-autoscaler-container" rel="noopener noreferrer"&gt;https://github.com/kubernetes-sigs/cluster-proportional-autoscaler#horizontal-cluster-proportional-autoscaler-container&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler" rel="noopener noreferrer"&gt;https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ianlewis.org/en/almighty-pause-container" rel="noopener noreferrer"&gt;https://www.ianlewis.org/en/almighty-pause-container&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>eks</category>
    </item>
    <item>
      <title>User access control in k8s - X.509 Client Certificate approach</title>
      <dc:creator>Ajit Vedpathak</dc:creator>
      <pubDate>Sun, 19 Jul 2020 15:07:34 +0000</pubDate>
      <link>https://dev.to/ajitvedpathak/user-authentication-and-authorization-in-kubernets-access-control-in-kubernetes-3ef1</link>
      <guid>https://dev.to/ajitvedpathak/user-authentication-and-authorization-in-kubernets-access-control-in-kubernetes-3ef1</guid>
      <description>&lt;p&gt;One area of a Kubernetes that is critical to production deployments is security. Ensuring the control of who has access to your Information System and what users have access to is the objective of an &lt;code&gt;Identity and Access management&lt;/code&gt; system. It is one of the fundamental processes in Security Management and it should be thoroughly taken care of.&lt;/p&gt;

&lt;p&gt;This blog will take a practical look at &lt;code&gt;authentication and authorization of users external to Kubernetes&lt;/code&gt; with &lt;code&gt;Role-Based Access Control (RBAC)&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A high-level understanding of Kubernetes concepts.&lt;/li&gt;
&lt;li&gt;You have a Kubernetes cluster running.&lt;/li&gt;
&lt;li&gt;You have the kubectl command-line (kubectl CLI) installed.&lt;/li&gt;
&lt;li&gt;OpenSSL installed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why do we need RBAC?
&lt;/h2&gt;

&lt;p&gt;RBAC policies are crucial for the management of a k8s cluster, as with RBAC we can specify which types of actions are allowed depending on the user and their role in the organization. Examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secure your cluster by granting privileged operations (e.g. accessing Secrets) only to admin users.&lt;/li&gt;
&lt;li&gt;User authentication in your cluster.&lt;/li&gt;
&lt;li&gt;Limit resource creation (e.g. pods, deployments) to specific namespaces.&lt;/li&gt;
&lt;li&gt;Isolate resource access within your organization (for example, between departments).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  To manage RBAC in Kubernetes, we need the following elements:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Roles and ClusterRoles: Both consist of rules. The difference between a Role and a ClusterRole is the scope: in a Role, the rules apply to a single namespace, whereas a ClusterRole is cluster-wide. Both Roles and ClusterRoles are mapped as API resources inside our cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;RoleBindings and ClusterRoleBindings: Bind subjects to roles (i.e. the operations a given user can perform). As with Roles and ClusterRoles, the difference is the scope: a RoleBinding is effective inside a namespace, whereas a ClusterRoleBinding is cluster-wide.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Subjects: The set of users and processes that want to access the Kubernetes API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resources: The set of Kubernetes API Objects available in the cluster. ( Pods, Deployments, Services, etc. )&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verbs: The set of operations that can be executed to the resources above (examples: get, watch, create, delete, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Users: These are meant for humans or processes living outside the cluster. In Kubernetes, there is no API call to add/create users (for example, you cannot say &lt;code&gt;kubectl create user&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service accounts: These are namespaced, associated with pods via a Secret, and managed by Kubernetes. By default, a service account has no access permissions.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With these elements in mind, we want to connect subjects, API resources, and operations. In other words, we want to specify, for a given user, which operations can be executed over a set of resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use RBAC?
&lt;/h2&gt;

&lt;p&gt;There are multiple authentication strategies supported by Kubernetes. We will be using &lt;code&gt;X.509 Client Certificates&lt;/code&gt; for authentication purposes.&lt;/p&gt;

&lt;p&gt;When we use the X.509 Client Certificate authentication strategy, Kubernetes first creates a certificate authority: a cluster-wide authority that issues certificates.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, k8s creates a certificate authority and generates a CA certificate, which is going to be a crucial component of authentication.&lt;/li&gt;
&lt;li&gt;Once you have the CA cert, you can create a certificate externally to k8s and send it to k8s, saying: here is a new user that is going to be associated with this new certificate, please sign and approve it.&lt;/li&gt;
&lt;li&gt;Kubernetes then takes the request, adds it to its own internal store, and authorizes and approves it.&lt;/li&gt;
&lt;li&gt;Kubernetes then generates a signed certificate and hands it back; this is what the user/admin accessing the k8s cluster will use.&lt;/li&gt;
&lt;li&gt;The user uses the CA certificate in combination with the pre-approved certificate associated with the user.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will create an external user called Bob, in the group engineering, with roles that grant him namespace-level read access and cluster-level read access to nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8UjgPLrf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h2uksaoxp1766bygb3h1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8UjgPLrf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h2uksaoxp1766bygb3h1.PNG" alt="Authentication And Authorization In Kubernetes" width="880" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Steps (before we start, please make sure all of the above prerequisites are satisfied)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create a namespace &lt;code&gt;engineering&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create namespace engineering
namespace/engineering created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run the below command to list namespaces
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get ns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create Private Key Using OpenSSL
&lt;/h3&gt;

&lt;p&gt;After creating the namespace, we will create a private key using OpenSSL. Before that, create a new directory called cred and cd into it. To do that, run the below commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir cred &amp;amp;&amp;amp; cd cred
$ openssl genrsa -out Bob.key 2048
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Extract certificate signing request from the private key created
&lt;/h3&gt;

&lt;p&gt;Now that we have created a private key named Bob.key, we will extract the CSR (certificate signing request) from this private key. For that, run the below command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ openssl req -new -key Bob.key -out Bob.csr

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields, there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:.
State or Province Name (full name) [Some-State]:.
Locality Name (eg, city) []:.
Organization Name (eg, company) [Internet Widgits Pty Ltd]:engineering
Organizational Unit Name (eg, section) []:.
Common Name (e.g. server FQDN or YOUR name) []:Bob
Email Address []:.

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:.
An optional company name []:.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While creating the CSR you will be asked to enter certificate subject values. (Enter a value for &lt;code&gt;Common Name&lt;/code&gt; and &lt;code&gt;Organization Name&lt;/code&gt;; for all other fields enter ' . ' and the field will be treated as blank.)&lt;/p&gt;

&lt;p&gt;The common name we are using here is &lt;code&gt;Bob&lt;/code&gt; and the organization name is &lt;code&gt;engineering&lt;/code&gt;. This is very important to Kubernetes: when we extract the CSR from the key, we are declaring that the name of the user is Bob and the group he belongs to is engineering.&lt;/p&gt;

&lt;p&gt;This is the only common identifier between the outside user and Kubernetes: when Kubernetes wants to find out more about Bob, it looks at the common name in the certificate and figures out the exact identity from there.&lt;/p&gt;
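
&lt;p&gt;If you prefer to skip the interactive prompts, the same CSR can be generated in one shot by passing the subject on the command line (CN becomes the user name, O becomes the group):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ openssl req -new -key Bob.key -out Bob.csr -subj "/CN=Bob/O=engineering"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;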

&lt;h3&gt;
  
  
  Convert the CSR file to base64
&lt;/h3&gt;

&lt;p&gt;Now we have two files: the private key we generated and the CSR file we extracted from it.&lt;br&gt;
The CSR file is the actual certificate request. We have to send it to Kubernetes, ask Kubernetes to register it, and then, as cluster administrators, approve that request.&lt;/p&gt;

&lt;p&gt;Before that, we have to convert the CSR file to base64, since a base64 string is the form Kubernetes understands. For that, run the below command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat Bob.csr | base64 | tr -d '\n'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the output of the above command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create CertificateSigningRequest
&lt;/h3&gt;

&lt;p&gt;Now create a certificateSigningRequest.yaml and &lt;code&gt;paste the copied output string of the above command next to request: under spec:&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: CertificateSigningRequest
apiVersion: certificates.k8s.io/v1beta1
metadata:
  name: Bob
spec:
  groups:
  - system:authenticated
  request: &amp;lt;paste base64 string from above step&amp;gt;
  usages:
  - digital signature
  - key encipherment
  - client auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Send certificate signing request to Kubernetes
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f certificateSigningRequest.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Get the status of the CSR we created
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get csr
NAME   AGE   REQUESTOR            CONDITION
Bob    7s    docker-for-desktop   Pending
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, the &lt;code&gt;condition is Pending&lt;/code&gt;, because the Kubernetes administrator has to approve the signing request. To do that, run the below command &lt;/p&gt;

&lt;h3&gt;
  
  
  Approve CertificateSigningRequest
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#kubectl certificate approve &amp;lt;name of the certificateSigningRequest &amp;gt;

$ kubectl certificate approve Bob
certificatesigningrequest.certificates.k8s.io/Bob approved
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Get the status of a CSR
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get csr
NAME   AGE     REQUESTOR            CONDITION
Bob    3m21s   docker-for-desktop   Approved,Issued
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have sent the CSR and the Kubernetes admin has approved it. Kubernetes turns the CSR into a base64-encoded certificate and stores it internally. Next we need to retrieve that certificate, decode it, and write it to a CRT file; this is the final certificate we are going to use when Bob authenticates.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a CRT file from a token
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get csr Bob -o jsonpath='{.status.certificate}' | base64 --decode &amp;gt; Bob.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have two files, a) Bob.key and b) Bob.crt; the combination of these two files will let Bob work with the Kubernetes cluster.&lt;/p&gt;
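
&lt;p&gt;As an optional sanity check, you can inspect the issued certificate and confirm it carries the expected identity:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# the subject should show CN=Bob and O=engineering
$ openssl x509 -in Bob.crt -noout -subject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;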

&lt;h3&gt;
  
  
  Configure the details in a config file (kubeconfig file)
&lt;/h3&gt;

&lt;p&gt;We have to set the credentials in the kubeconfig file to get access to the Kube cluster. For that, run the below command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl config set-credentials Bob --client-certificate=Bob.crt --client-key=Bob.key
User "Bob" set.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
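
&lt;p&gt;Optionally, you can also wire these credentials into a dedicated context so that Bob can switch to it himself, instead of an admin impersonating him with --as. The context name bob-context and the cluster name docker-desktop are assumptions; substitute your own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl config set-context bob-context --cluster=docker-desktop --namespace=engineering --user=Bob
Context "bob-context" created.

$ kubectl config use-context bob-context
Switched to context "bob-context".
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;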



&lt;h3&gt;
  
  
  Test if Bob can access the resources
&lt;/h3&gt;

&lt;p&gt;So now let's find out whether our Bob is ready to manage and access the resources in the engineering namespace&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl auth can-i list pods -n engineering
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run the above command you will get the output &lt;code&gt;yes&lt;/code&gt;, because you are running the command as &lt;code&gt;cluster-admin&lt;/code&gt;. To check whether Bob is able to list the pods inside the engineering namespace, run the below command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl auth can-i list pods -n engineering --as Bob
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output of the above command will be &lt;code&gt;no&lt;/code&gt;. This is because Bob has only just been authenticated by Kubernetes; he is not yet authorized to perform any action.&lt;/p&gt;

&lt;h2&gt;
  
  
  Authorization
&lt;/h2&gt;

&lt;p&gt;Once an API request is authenticated, the next step is to determine whether the operation is allowed or not. This is done in the second stage of the access control pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a pod in namespace engineering
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl run nginx --image=nginx -n engineering    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Test if Bob can access pods inside the engineering namespace
&lt;/h3&gt;

&lt;p&gt;To see if Bob can access the pods inside the engineering namespace, run the below command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n engineering --as Bob
Error from server (Forbidden): pods is forbidden: User "Bob" cannot list resource "pods" in API group "" in the namespace "engineering"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, Bob is not able to access the pods: he is not authorized to even list the pods inside the engineering namespace. For that we have to &lt;code&gt;create a role&lt;/code&gt; that grants access to the resources mentioned in its rules, and bind that role to the user, i.e. Bob.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a role for engineering namespace
&lt;/h3&gt;

&lt;p&gt;So what does the role look like? Create a role.yaml file, paste the below content into the file, and run the command to create the role&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: engineering
  name: reader
rules:
- apiGroups: [""]
  resources: ["pods","services","nodes"]
  verbs: ["get","watch","list"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f role.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Get role
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get roles -n engineering
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have just created a role that is not yet associated with any user; it grants get, watch, and list access to pods, services, and nodes inside the engineering namespace.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create role-binding
&lt;/h3&gt;

&lt;p&gt;Now we have to associate the created role with the user we created. To do so, create a rolebinding.yaml, paste the text below, and run the command to create the RoleBinding.&lt;br&gt;
A RoleBinding associates an existing role with Bob.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-access
  namespace: engineering # namespace where we have created the role and user
subjects:
- kind: User
  name: Bob # user name we have created
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: reader #name of the role we have created
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f rolebinding.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Get rolebindings
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get rolebindings -n engineering
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Test if Bob can list pods inside the engineering namespace
&lt;/h3&gt;

&lt;p&gt;Now run the below command and see whether Bob has access to get the pods inside the engineering namespace&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n engineering --as Bob
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          6s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, Bob now has access to get the pods from the namespace. Now, if Bob wants to know the nodes in the cluster, he will run &lt;code&gt;$ kubectl get nodes --as Bob&lt;/code&gt;, but he will not be able to access the nodes, because nodes are a cluster-level resource and a Role object is limited to a namespace. &lt;code&gt;The current, very restrictive permission set prohibits Bob from accessing anything at cluster level.&lt;/code&gt; Bob is still tied to the engineering namespace and can list any resource that is specific to that namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes --as Bob
Error from server (Forbidden): nodes is forbidden: User "Bob" cannot list resource "nodes" in API group "" at the cluster scope
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create cluster-level permissions
&lt;/h3&gt;

&lt;p&gt;Now we want to give Bob slightly broader access, i.e. cluster-level permissions. For that we have to create a new kind of role, a ClusterRole, which grants access to cluster-level resources. To do so, create a file clusterRole.yaml, paste the below content into it, and run the command to create the cluster role.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create ClusterRole
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # namespace is removed as cluster level role is not namespaced
  name: cluster-node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get","watch","list"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f clusterRole.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create role-binding
&lt;/h3&gt;

&lt;p&gt;Now, to bind this role to the user, create clusterbinding.yaml, paste the below content into the file, and run the command to create the cluster role binding.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-binding 
subjects:
- kind: User
  name: Bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-node-reader
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f clusterbinding.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Test if Bob can access cluster-level resources
&lt;/h3&gt;

&lt;p&gt;To see whether Bob can now access the cluster-level resource, run the below command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes --as Bob
NAME             STATUS   ROLES    AGE   VERSION
docker-desktop   Ready    master   90d   v1.14.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
  </channel>
</rss>
