<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: mwaghadhare</title>
    <description>The latest articles on DEV Community by mwaghadhare (@mwaghadhare).</description>
    <link>https://dev.to/mwaghadhare</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F441778%2F79684e9d-2324-4dcf-b532-2514ac7d6f20.jpeg</url>
      <title>DEV Community: mwaghadhare</title>
      <link>https://dev.to/mwaghadhare</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mwaghadhare"/>
    <language>en</language>
    <item>
      <title>CKA Exam Preparation guide</title>
      <dc:creator>mwaghadhare</dc:creator>
      <pubDate>Sat, 15 Aug 2020 15:18:31 +0000</pubDate>
      <link>https://dev.to/mwaghadhare/cka-exam-preparation-guide-4ig7</link>
      <guid>https://dev.to/mwaghadhare/cka-exam-preparation-guide-4ig7</guid>
<description>&lt;p&gt;If you are studying for the Certified Kubernetes Administrator (CKA) exam, this guide will help prepare you for the type of questions you will encounter, as well as the timed environment of the actual exam.&lt;/p&gt;

&lt;p&gt;The questions on the exam focus on the CKA curriculum categories listed below:&lt;/p&gt;

&lt;p&gt;CKA Exam Pattern and Tips:&lt;br&gt;
CKA requires you to solve 24 questions in 3 hours.&lt;br&gt;
CKA exam curriculum includes these general domains and their weights on the exam:&lt;br&gt;
Application Lifecycle Management – 8%&lt;br&gt;
Installation, Configuration &amp;amp; Validation – 12%&lt;br&gt;
Core Concepts – 19%&lt;br&gt;
Networking – 11%&lt;br&gt;
Scheduling – 5%&lt;br&gt;
Security – 12%&lt;br&gt;
Cluster Maintenance – 11%&lt;br&gt;
Logging / Monitoring – 5%&lt;br&gt;
Storage – 7%&lt;br&gt;
Troubleshooting – 10%&lt;/p&gt;

&lt;p&gt;Each domain has the same level of emphasis as it does on the official exam.&lt;br&gt;
Exam questions can be attempted in any order; they do not have to be answered sequentially.&lt;br&gt;
Each question carries a weight, so attempt the questions with higher weights before focusing on the lower ones. Target the ones with higher weights and quicker solutions, such as the debugging questions.&lt;br&gt;
6-8 different Kubernetes clusters are provisioned. Each question refers to a different cluster, and the context needs to be switched. Be sure to execute the kubectl config use-context command, which is provided with every question; you just need to copy and paste it.&lt;br&gt;
Check for the namespace mentioned in the question when finding and creating resources, and use the -n flag to specify it.&lt;br&gt;
You will perform most of the interaction from the base node. However, pay attention to which node a question asks you to work on, and make sure you return to the base node afterwards.&lt;br&gt;
SSHing to nodes and gaining root access is allowed if needed, and the commands are provided. Make sure you use sudo -i before running docker commands.&lt;br&gt;
Read the information marked with the "i" icon in each question carefully. It provides very useful hints and saves time: for example, which namespaces to look into, or, for a failed pod, what has already been created (ConfigMaps, Secrets, network policies) so that you do not create it again.&lt;br&gt;
The CKA has been upgraded to Kubernetes 1.18, and kubectl run commands for creating deployments did not work for me. Use kubectl create commands to create deployments.&lt;br&gt;
Make sure you know the imperative commands to create resources, as you won't have time to write and edit YAML files from scratch.&lt;br&gt;
If you need to edit further, use --dry-run=client -o yaml to get a head start on the spec YAML file, then edit it.&lt;br&gt;
I personally use alias kk=kubectl to avoid typing kubectl.&lt;/p&gt;
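&lt;p&gt;The tips above can be sketched as a few shell commands; the context, deployment, and namespace names here are hypothetical:&lt;/p&gt;

```shell
# Switch to the cluster named in the question (the exact command is provided; copy-paste it).
kubectl config use-context k8s-cluster-1      # hypothetical context name

# Create a deployment imperatively, then dump its spec as YAML for further editing.
kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml

# Work in the namespace the question mentions.
kubectl -n app-team get pods                  # hypothetical namespace

# Save keystrokes during the exam.
alias kk=kubectl
kk get nodes
```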

&lt;p&gt;CKA Key Topics&lt;br&gt;
Application Lifecycle Management&lt;br&gt;
Understand deployments and how to perform rolling update and rollbacks. Practice kubectl rollout commands to check status and undo deployments.&lt;br&gt;
Know how to scale and create self-healing applications using replicas&lt;br&gt;
Understand Init Containers and usage&lt;br&gt;
Installation, Configuration &amp;amp; Validation&lt;br&gt;
Practice creating kubernetes Cluster using Kubeadm&lt;br&gt;
Configure secure cluster communications&lt;br&gt;
Configure a highly-available Kubernetes cluster&lt;br&gt;
Perform cluster management. Drain, Cordon and Uncordon nodes.&lt;br&gt;
Core Concepts&lt;br&gt;
Understand the Kubernetes API primitives, cluster architecture, Services and other network primitives&lt;br&gt;
Know how to create namespaces, pods, describe pods&lt;br&gt;
Know how to export a pod's spec as a YAML or JSON file: kubectl get pod pod_name -o yaml (or -o json)&lt;br&gt;
Know how to create deployments and expose services&lt;br&gt;
Know how to Create a multi container pod&lt;br&gt;
Practice how to filter the records using label selectors.&lt;br&gt;
Practice Output formatting using jsonpath. Practice jsonpath samples.&lt;br&gt;
Know how to monitor consumed CPU and Memory resources.&lt;br&gt;
Networking&lt;br&gt;
Understand the networking configuration on the cluster nodes&lt;br&gt;
Understand Pod networking concepts&lt;br&gt;
Understand Service networking and practice how to expose pods and deployments as services.&lt;br&gt;
Know Ingress and how to use Ingress rules&lt;br&gt;
Practice DNS for Services and Pods using nslookup&lt;br&gt;
Scheduling&lt;br&gt;
Understand label selectors to schedule Pods on nodes using nodeSelector and Practice Assign Pod Nodes&lt;br&gt;
Understand DaemonSets and how to provision them. Remember there is no imperative way to create a DaemonSet, so either create a Deployment and edit its YAML, or copy from the documentation.&lt;br&gt;
Understand how resource limits can affect Pod scheduling&lt;br&gt;
Understand how to run multiple schedulers and how to configure Pods to use them&lt;br&gt;
Practice how to create static Pods, especially on worker nodes. Static Pods are configured using YAML files placed in the staticPodPath referenced by the kubelet configuration. Make sure the property is defined.&lt;br&gt;
Security&lt;br&gt;
Know how to configure authentication and authorization using CertificateSigningRequest and RBAC authorization&lt;br&gt;
Know how to configure network policies&lt;br&gt;
Practice manage TLS certificates in a Cluster&lt;br&gt;
Work with images securely using private repository&lt;br&gt;
Define security contexts&lt;br&gt;
Secure persistent key value store using Secrets. Practice passing Secrets to Pods using Volumes and Environment variables.&lt;br&gt;
Cluster Maintenance&lt;br&gt;
Understand Kubernetes cluster upgrade process&lt;br&gt;
Implement backup and restore methodologies&lt;br&gt;
Make sure you read ETCD backup and practice using documentation&lt;br&gt;
Facilitate operating system upgrades&lt;br&gt;
Logging / Monitoring&lt;br&gt;
Understand and know how to monitor all cluster components, applications, cluster and application logs&lt;br&gt;
Know resource usage monitoring, as you will need to check resource usage using the kubectl top command&lt;br&gt;
Know how to Debug running pods using the kubectl logs command&lt;br&gt;
Storage&lt;br&gt;
Understand and focus on creating Persistent Volumes, Persistent Volume Claims and associating them with Pods&lt;br&gt;
Practice Configure a Pod to Use a Volume for Storage – focus on using Empty Dir as the volume, so the storage is ephemeral to pod.&lt;br&gt;
Practice Configure Pod Container Persistent Volume Storage – focus on creating Pods with host path volumes&lt;br&gt;
The exam does not cover other volume types or storage classes.&lt;br&gt;
Troubleshooting&lt;br&gt;
Practice Debug application for troubleshooting application failures&lt;br&gt;
Practice Debug cluster for troubleshooting control plane failure and worker node failure.&lt;br&gt;
Understand the control plane architecture.&lt;br&gt;
Focus on the kube-apiserver and the static Pod configuration through which the control plane pods are defined and deployed.&lt;br&gt;
Check whether the pods in kube-system are all running. Use the docker ps -a command on the node to inspect why containers exited.&lt;br&gt;
Check the kubelet service if a worker node shows as NotReady&lt;br&gt;
Troubleshoot networking&lt;/p&gt;
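&lt;p&gt;The label-selector and jsonpath items above can be practiced with commands like these (the label keys and values are hypothetical):&lt;/p&gt;

```shell
# Filter pods by label selector.
kubectl get pods -l app=web,tier=frontend

# Print only the pod names using jsonpath.
kubectl get pods -o jsonpath='{.items[*].metadata.name}'

# Print each pod name with the node it runs on, one per line.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeName}{"\n"}{end}'

# Check consumed CPU and memory (requires metrics-server).
kubectl top nodes
kubectl top pods
```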
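&lt;p&gt;For the cluster-maintenance item on ETCD backup, a typical snapshot-and-restore sketch looks like this; the endpoint and certificate paths are kubeadm defaults and may differ on the exam cluster:&lt;/p&gt;

```shell
# Take a snapshot of etcd (verify the endpoint and certificate paths on the cluster first).
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Restore the snapshot into a new data directory, then point etcd's manifest at it.
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
  --data-dir=/var/lib/etcd-restored
```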

&lt;h1&gt;
  
  
  Exam Score
&lt;/h1&gt;

&lt;p&gt;In order to pass this prep exam, you will need both of the following:&lt;/p&gt;

&lt;p&gt;A minimum score of 70 percent on the overall exam&lt;br&gt;
A minimum score of 33 percent on each exam domain&lt;br&gt;
Completing the Exam:&lt;/p&gt;

&lt;p&gt;You can spend as long as you wish on any particular question but must budget your time to complete all the questions. Any questions not completed in the allotted time will be scored as incorrect.&lt;/p&gt;

&lt;p&gt;During the session, you may skip questions and return to them later in the session. You can also review your answers to previous questions, and change them at any point during the exam session.&lt;/p&gt;

&lt;p&gt;You can end your exam session by submitting your exam for a score, or by discarding your exam.&lt;/p&gt;

&lt;p&gt;Submitting Your Exam&lt;br&gt;
You may submit your exam at any time by clicking "Submit Exam" and confirming your choice. Once confirmed, this completes the entire exam session. All of your answers will be scored, and your Skill Score will be updated. All unanswered questions will be scored as incorrect. You cannot return later to complete any unanswered questions.&lt;br&gt;
Discarding Your Exam&lt;br&gt;
You may discard your exam at any time by clicking "Discard Exam" and confirming your choice. Once confirmed, your exam session will be closed without saving your exam progress. None of your answers will be scored, and your Skill Score will not be updated. When you return to complete the exam, you will begin an entirely new exam session which may include new exam questions.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes Multi Tenancy</title>
      <dc:creator>mwaghadhare</dc:creator>
      <pubDate>Sun, 02 Aug 2020 16:01:41 +0000</pubDate>
      <link>https://dev.to/mwaghadhare/kubernetes-multi-tenancy-1j6k</link>
      <guid>https://dev.to/mwaghadhare/kubernetes-multi-tenancy-1j6k</guid>
<description>&lt;p&gt;What is a Kubernetes Tenant?&lt;br&gt;
The Kubernetes multi-tenancy SIG defines a tenant as a group of Kubernetes users that has access to a subset of cluster resources (compute, storage, networking, control plane, and API resources), together with resource limits and quotas for the use of those resources. Resource limits and quotas lay out tenant boundaries. These boundaries extend to the control plane, allowing for grouping of the resources owned by the tenant, limited access or visibility to resources outside of the tenant's domain, and tenant authentication.&lt;/p&gt;

&lt;p&gt;What is multi-tenancy?&lt;br&gt;
A multi-tenant cluster is shared by multiple users and/or workloads which are referred to as "tenants". The operators of multi-tenant clusters must isolate tenants from each other to minimize the damage that a compromised or malicious tenant can do to the cluster and other tenants. Also, cluster resources must be fairly allocated among tenants.&lt;/p&gt;

&lt;p&gt;When you plan a multi-tenant architecture you should consider the layers of resource isolation in Kubernetes: cluster, namespace, node, pod, and container. You should also consider the security implications of sharing different types of resources among tenants. For example, scheduling pods from different tenants on the same node could reduce the number of machines needed in the cluster. On the other hand, you might need to prevent certain workloads from being colocated. For example, you might not allow untrusted code from outside of your organization to run on the same node as containers that process sensitive information.&lt;/p&gt;

&lt;p&gt;Although Kubernetes cannot guarantee perfectly secure isolation between tenants, it does offer features that may be sufficient for specific use cases. You can separate each tenant and their Kubernetes resources into their own namespaces. You can then use policies to enforce tenant isolation. Policies are usually scoped by namespace and can be used to restrict API access, to constrain resource usage, and to restrict what containers are allowed to do.&lt;/p&gt;

&lt;p&gt;There are two multi-tenancy models in Kubernetes: Soft and Hard multi-tenancy.&lt;/p&gt;

&lt;p&gt;Soft Multi-tenancy&lt;br&gt;
Soft multi-tenancy trusts tenants to be good actors and assumes them to be non-malicious. It focuses on minimising accidents and managing the fallout when they occur.&lt;/p&gt;

&lt;p&gt;Hard Multi-tenancy&lt;br&gt;
Hard multi-tenancy assumes tenants to be malicious and therefore advocates zero trust between them. Clusters are configured so that tenant resources are isolated and access to other tenants' resources is prevented.&lt;/p&gt;

&lt;p&gt;Why Multi-Tenancy?&lt;br&gt;
When you start out with Kubernetes, what usually happens at a very high level is that you have a user, and the user interacts with a master via a command-line tool, the API, or a UI. The master runs the API server, the scheduler, and the controller, and is responsible for orchestrating and controlling the actual cluster. The cluster consists of multiple nodes that you schedule your pods on; these nodes may be physical machines, virtual machines, or whatever the case may be. Usually, you have one logical master that controls one single cluster. That looks relatively straightforward: one user, one cluster.&lt;br&gt;
Now, what happens when you start having multiple users? Let's say your company decides to use Kubernetes for a variety of internal applications, so you have one developer over here creating their Kubernetes cluster, another one over there creating their Kubernetes cluster, and your poor administrators now have to manage two of them. Now you have two completely separate deployments of Kubernetes with two completely separate masters and sets of nodes. Then, before you know it, you have a sprawl of clusters: more and more clusters that you have to work with.&lt;/p&gt;

&lt;p&gt;Some people call this "kube sprawl", and it is actually a pretty well-understood phenomenon at this point. So let's ask how this model scales, starting with how it scales financially: how much does it cost you to run all these clusters? The first thing that might stand out is all of the masters you now have to run. In general, it is best practice not to run just one master node but several, so that you get better high availability: if one of them fails, the others can take over. So all of these masters are not one single node each; they are usually three. This is starting to look a little more expensive. That's number one.&lt;/p&gt;

&lt;p&gt;Then, number two: one of the things we see a lot is customers who say, "I have all of these applications, and some of them run during the day and take user traffic." They need a lot of resources during the day, but they lie idle at night; they don't really do anything at night, yet you still have all these nodes.&lt;/p&gt;

&lt;p&gt;Then you have some applications that are batch applications, maybe batch processing of logs or whatever the case may be, and you can run them at any time you want. You could run them at night: you could have a model where some applications run during the day and the other applications run at night, using the same nodes. That seems reasonable. With the model where you have completely separate clusters on completely separate nodes, you have just made that much harder for yourself. That's one consideration.&lt;/p&gt;

&lt;p&gt;Another consideration that people bring up a lot is operational overhead, meaning how hard it is to operate all of these clusters. If you've been in a situation like this before, maybe not even with Kubernetes, you will have noticed that all of these clusters look very similar at the beginning: maybe they run very different applications, but the masters are all at the same version of Kubernetes, and so forth. Over time, though, they tend to drift and become special snowflakes, and the more special snowflakes you have, the harder they are to operate. You get alerts all the time and don't know whether they are tied to a specific version, you have to do a bunch of work, and you end up with tens or hundreds of sets of dashboards to look at to figure out what's going on. This becomes operationally very difficult and actually ends up slowing you down.&lt;/p&gt;

&lt;p&gt;Now, with all of that being said, there is a model that is actually a very appropriate model under some circumstances. Lots of people choose this model, maybe not for hundreds or thousands, but lots of people choose this model of having completely separate clusters because it has some advantages, such as being easier to reason about and having very tight security boundaries. Let's say you're in this situation, and you have hundreds of clusters, and it's becoming just this huge pain. One thing you can consider is what we call multi-tenancy in Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OPFvrdvH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ulukqzliaj9su0qj1cbo.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OPFvrdvH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ulukqzliaj9su0qj1cbo.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Challenges of Kubernetes multi-tenancy:&lt;br&gt;
Namespace isolation&lt;br&gt;
A basic best practice for handling multiple tenants is to assign each tenant a separate namespace. Kubernetes was designed for this approach. Most of the isolation features that it provides expect you to have a separate namespace for each entity that you want to isolate.&lt;/p&gt;

&lt;p&gt;Keep in mind, too, that in some cases it may be desirable to assign multiple namespaces to the same group within your Kubernetes deployment. For example, the same team of developers might need multiple namespaces for hosting different builds of their application.&lt;/p&gt;

&lt;p&gt;Adding namespaces is relatively easy (it takes just a simple kubectl create namespace your-namespace command), and it’s always better to have the ability to separate workloads in a granular way using namespaces than to try to cram different workloads with different needs into the same namespace.&lt;/p&gt;

&lt;p&gt;Block traffic between namespaces&lt;br&gt;
By default, most Kubernetes deployments allow network communication between namespaces. If you need to support multiple tenants, you’ll want to change this in order to add isolation to each namespace.&lt;/p&gt;
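&lt;p&gt;A minimal sketch of such isolation is a default-deny NetworkPolicy in each tenant namespace; the namespace name here is hypothetical, and the cluster's CNI plugin must enforce NetworkPolicy:&lt;/p&gt;

```yaml
# Save as deny-ingress.yaml and apply with: kubectl apply -f deny-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a        # hypothetical tenant namespace
spec:
  podSelector: {}            # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
```

&lt;p&gt;In practice you would pair this with a policy that allows traffic from within the same namespace.&lt;/p&gt;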

&lt;p&gt;Resource Quotas&lt;br&gt;
When you want to ensure that all Kubernetes tenants have fair access to the resources that they need, Resource Quotas are the solution to use. As the name of this feature implies, it lets you set quotas on how much CPU, storage, memory, and other resources can be consumed by all pods within a namespace.&lt;/p&gt;
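&lt;p&gt;A ResourceQuota sketch for a tenant namespace might look like this; the namespace name and the limits are illustrative:&lt;/p&gt;

```yaml
# Save as tenant-quota.yaml and apply with: kubectl apply -f tenant-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-a        # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "4"        # total CPU requested by all pods in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # maximum number of pods
```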

&lt;p&gt;Secure your nodes&lt;br&gt;
A final multi-tenancy best practice to keep in mind is the importance of making sure that your master and worker nodes are secure at the level of the host operating system.&lt;/p&gt;

&lt;p&gt;Node security doesn’t reinforce namespace isolation in a direct way; however, since an attacker who is able to compromise a node on the operating system level can potentially use that breach to take control of any workloads that depend on the node, node security is important to keep in mind. (It would be important in a single-tenant environment too, but it’s even more important when you have multiple workloads, which makes the security stakes higher.)&lt;/p&gt;

&lt;p&gt;Multi-tenancy – all the way&lt;br&gt;
An important aspect of multi-tenancy is having multi-tenancy at a layer above the Kubernetes cluster, so that your DevOps engineers and developers can have one or more clusters belonging to different users or teams within your organization. This concept isn't built into Kubernetes itself. Platform9 supports it by adding a layer of multi-tenancy on top of Kubernetes via the concepts of 'regions' and 'tenants'. A region in Platform9 maps to a geographical location. A tenant can belong to multiple regions, and a group of users can be given access to one or more tenants. Once in a tenant, the group of users can create one or more clusters that are isolated and accessible only to the users within that tenant. This provides separation of concerns across different teams and departments.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v6cPWauR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/40agm2lgkvato52gxl52.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v6cPWauR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/40agm2lgkvato52gxl52.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recommended best practices for Multi-tenant Kubernetes clusters:&lt;br&gt;
1) Limit Tenant’s use of Shared Resources&lt;br&gt;
2) Enable Built-in Admission Controllers&lt;br&gt;
3) Isolate Tenant Namespaces using Network Policy&lt;br&gt;
4) Enable RBAC&lt;br&gt;
5) Create Cluster Personas&lt;br&gt;
6) Map Kubernetes Namespaces to Tenants&lt;br&gt;
7) Categorize Namespaces&lt;br&gt;
8) Limit Tenant’s Access to non-namespaced Resources&lt;br&gt;
9) Limit Tenant’s Access to Resources from other Tenants&lt;br&gt;
10) Limit Tenant’s Access to Multi-tenancy Resources&lt;br&gt;
11) Prevent use of HostPath Volumes&lt;br&gt;
12) Run Multi-tenancy e2e Validation Tests&lt;/p&gt;
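&lt;p&gt;As a sketch of practice 4, a namespace-scoped Role and RoleBinding confine a tenant's group to its own namespace; all names here are hypothetical:&lt;/p&gt;

```yaml
# Save as tenant-rbac.yaml and apply with: kubectl apply -f tenant-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-editor
  namespace: tenant-a            # hypothetical tenant namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "configmaps", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-editors
  namespace: tenant-a
subjects:
  - kind: Group
    name: tenant-a-devs          # hypothetical user group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-editor
  apiGroup: rbac.authorization.k8s.io
```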

</description>
      <category>github</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>The Best Kubernetes Tools For Managing Large Scale Projects and Cost optimization tool</title>
      <dc:creator>mwaghadhare</dc:creator>
      <pubDate>Sun, 02 Aug 2020 15:32:25 +0000</pubDate>
      <link>https://dev.to/mwaghadhare/the-best-kubernetes-tools-for-managing-large-scale-projects-and-cost-optimization-tool-356</link>
      <guid>https://dev.to/mwaghadhare/the-best-kubernetes-tools-for-managing-large-scale-projects-and-cost-optimization-tool-356</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
Kubernetes raised the bar on the competition. Now a mature technology, organizations across the globe are increasingly embracing a software development strategy focused on container-oriented microservices. Kubernetes is popular in the industry, and industry leaders are helping it grow further, creating robust frameworks and a Kubernetes core-based ecosystem. Because of its ability to meet the most diverse requirements and constraints an application can impose, it's firmly set as the most common open-source container orchestration framework.&lt;/p&gt;

&lt;p&gt;In this article, we'll take a look at the best tools for Kubernetes. These tools will complement K8s and boost your development operations so you can get more from Kubernetes.&lt;/p&gt;

&lt;p&gt;Kubernetes Deployment Tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Helm:&lt;br&gt;
Helm is a newer configuration management tool within the Kubernetes world. It uses packages called Charts, which play a role similar to Debian/APT packages or Yum RPMs. Charts are used to describe, install, and upgrade Kubernetes applications. They are template-based, support even the most complex Kubernetes services, and are thoughtfully built to be easily produced and maintained. Charts can be exchanged and published, and contain a package description and at least one template. Templates contain Kubernetes manifest files and can be reused several times for deployment. If more than one instance of the same chart is installed, a new release is produced each time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apollo:&lt;br&gt;
Apollo offers a Kubernetes control UI that lets you view logs and revert to a previous deployment version with a single click. It also offers a flexible permissions model and is a lightweight tool for continuous deployment. Apollo can attach to any existing build pipeline and only needs to be informed of a "ready artifact." This Kubernetes management tool enables users to control several Kubernetes clusters, and these clusters can have different namespaces. The live-querying function shows the latest deployment status and allows visualization of pod status, reading logs, and restarting pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubespray:&lt;br&gt;
Kubespray is a Kubernetes management tool that works through Ansible roles. It supports AWS, Google Cloud, Azure, and OpenStack. Kubespray benefits those already familiar with Ansible: with only a slight learning curve, both provisioning and management become possible through a single tool. Kubespray enables continuous-integration tests, and support is available for most Linux distros.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
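&lt;p&gt;A typical Helm workflow can be sketched as follows; the repository, chart, release, and namespace names are illustrative:&lt;/p&gt;

```shell
# Add a chart repository and refresh the local index.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install (or upgrade) a release named "my-web" from the nginx chart.
helm upgrade --install my-web bitnami/nginx --namespace web --create-namespace

# Each install or upgrade of the same chart produces a new release revision;
# roll back to an earlier revision if needed.
helm rollback my-web 1
```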

&lt;p&gt;The Best Kubernetes CLI Tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Kubectl:&lt;br&gt;
Kubectl is the default Kubernetes CLI tool and supports all of the Kubernetes-based operations. Clusters and contexts are read from the config file in the $HOME/.kube directory. Kubectl accepts additional kubeconfig files as well: simply point the KUBECONFIG environment variable at the appropriate location, or use the --kubeconfig flag. Docker users can communicate with the API server using kubectl; many kubectl commands are similar to Docker commands, with just a few small variations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubectx:&lt;br&gt;
Kubectx (and its companion kubens) is distributed from a shared repo and adds functions on top of kubectl. In multi-cluster environments, kubectx is a useful way to switch context between clusters. One major benefit is the ability to alias long cluster names, enabling context switching with a short "kubectx alias" command. kubectx also remembers the previous context, so "kubectx -" switches back to it (note: kubectx isn't available for Windows).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kube-shell:&lt;br&gt;
Kube-shell complements kubectl: it is built on top of kubectl and improves productivity by auto-completing commands. It suggests commands based on the values being typed and shows in-line documentation while commands are composed. Another useful feature is cycling through previous commands with the arrow keys.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
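&lt;p&gt;Context switching with plain kubectl versus kubectx can be sketched as follows (the context name is hypothetical):&lt;/p&gt;

```shell
# List the contexts in your kubeconfig and switch with plain kubectl.
kubectl config get-contexts
kubectl config use-context dev-cluster

# The same switch with kubectx, plus jumping back to the previous context.
kubectx dev-cluster
kubectx -
```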

&lt;p&gt;What Are Kubernetes Security Tools?&lt;br&gt;
Containers have special security requirements that diverge from other hosting models, such as a VPS, because more layers have to be protected: the container images, the container runtime, the orchestrator, and the application itself. Some advanced tools are set out below:&lt;/p&gt;

&lt;p&gt;Twistlock:&lt;br&gt;
Twistlock is a full-lifecycle container protection solution. It includes a vulnerability management system that checks for vulnerabilities by continuously scanning Kubernetes, and there's even an automated firewall. Another essential function of Twistlock is container image scanning, with support for Node.js components and Docker images. Twistlock focuses on two critical aspects of container protection. First, it continuously scans container images, as new threats arise every day. Second, it watches the health of running containers: it first establishes a baseline of normal behavior that can then be tracked for deviations.&lt;/p&gt;

&lt;p&gt;Aqua:&lt;br&gt;
Aqua Security scans container images before deployment and lets you make images read-only; immutable images are less vulnerable to threats, and tampering can be noticed quickly. These scans are performed across every part of the application. One of its key functions is protecting multi-tenant environments while ensuring that tenants remain isolated, for both access and data. It scans for multiple security problems, such as known risks, hidden code, and malware.&lt;/p&gt;

&lt;p&gt;Falco:&lt;br&gt;
Falco is a Kubernetes-targeted security tool that detects unusual activity in your containers. It is derived from the Sysdig project and has become a staple of the ecosystem. Falco monitors containers with a focus on system calls to the kernel, using a common set of rules to cover several layers: the container, the application, the host, and the network itself.&lt;/p&gt;

&lt;p&gt;Kubernetes Cost Allocation and Capacity Planning Tools&lt;br&gt;
First, let’s quickly go over why we always start with resource/cost allocation before helping teams optimize their resources. We do it because 1) it directly uncovers common patterns that create overspending on infrastructure assets, not to mention other undesirable issues within a Kubernetes cluster and 2) it helps teams prioritize where to focus their optimization efforts. The root cause of these negative patterns has ranged from the mundane (abandoned deployments) to the startling (bitcoin mining malware).&lt;/p&gt;

&lt;p&gt;Kubernetes Opex Analytics&lt;br&gt;
Kubernetes Opex Analytics is a tool that helps organizations track the resources consumed by their Kubernetes clusters to prevent overpaying. To do so, it generates short-, mid-, and long-term usage reports showing relevant insights into the amount of resources each project is consuming over time, the final goal being to ease cost allocation and capacity planning decisions with factual analytics.&lt;/p&gt;

&lt;p&gt;For more details on how it works and its dashboards, see: &lt;a href="https://github.com/rchakode/kube-opex-analytics"&gt;https://github.com/rchakode/kube-opex-analytics&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubecost&lt;br&gt;
Kubecost enables teams to view the following with an install that takes only minutes:&lt;br&gt;
Real-time cost allocation by all key k8s concepts, e.g. spend by namespace, deployment, service, daemonset, pod, container, job, etc.&lt;br&gt;
Cost allocation by configurable labels to measure spend by owner, team, department, product, etc.&lt;br&gt;
Dynamic asset pricing enabled by integrations with the AWS and GCP billing APIs, with estimates available for Azure.&lt;br&gt;
Cost allocation metrics for CPU, GPU, memory, and storage.&lt;br&gt;
Out-of-cluster cloud costs tied back to an owner, e.g. S3 buckets and RDS instances allocated to a pod or deployment.&lt;/p&gt;

&lt;p&gt;The core Kubecost allocation model is open source (Apache 2) and can now be found on GitHub. You can deploy it as a pod directly on your cluster if you want to run the model yourself or make modifications. You can also install the full Kubecost product (with associated dashboards) via a single Helm install from their website.&lt;/p&gt;

&lt;p&gt;Application view of the Kube Resource Report:&lt;br&gt;
&lt;a href="https://kube-resource-report.demo.j-serv.de/application-kube-resource-report.html"&gt;https://kube-resource-report.demo.j-serv.de/application-kube-resource-report.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>github</category>
    </item>
  </channel>
</rss>
