<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kyle Hunter</title>
    <description>The latest articles on DEV Community by Kyle Hunter (@kylekhunter).</description>
    <link>https://dev.to/kylekhunter</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F825025%2Fc599aa2f-3d56-414f-9aa4-46dc4e096bde.jpeg</url>
      <title>DEV Community: Kyle Hunter</title>
      <link>https://dev.to/kylekhunter</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kylekhunter"/>
    <language>en</language>
    <item>
      <title>Unlocking Four Requirements for Enterprise-Grade Kubernetes</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Wed, 10 Aug 2022 20:26:04 +0000</pubDate>
      <link>https://dev.to/kylekhunter/unlocking-four-requirements-for-enterprise-grade-kubernetes-1in3</link>
      <guid>https://dev.to/kylekhunter/unlocking-four-requirements-for-enterprise-grade-kubernetes-1in3</guid>
      <description>&lt;p&gt;With more organizations enjoying the benefits of Kubernetes, it is all the more crucial to integrate enterprise-grade Kubernetes across the business pipeline. In this article, Kyle Hunter, head of product marketing, Rafay Systems, identifies some of the critical requirements and best practices for K8s to meet those requirements.&lt;/p&gt;

&lt;p&gt;For enterprises of all shapes and sizes, Kubernetes has become a go-to choice for shipping software and improving delivery time, visibility, and control of CI/CD workflows. But integrating enterprise-grade Kubernetes management practices that cover your entire pipeline – from code to cloud – can be challenging. Meeting the critical requirements demands best practices for K8s, and those critical items span four key topic areas – source code, CI/CD integration, Kubernetes cluster lifecycle management, and workload administration. Let’s get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Source Code
&lt;/h2&gt;

&lt;p&gt;When it comes to source code, it all starts with using Git-based workflows for automated software delivery and declarative infrastructure, so that changes are tracked and rollbacks are supported when failures occur. It is a best practice to keep secrets encrypted and outside the container image. Implementing internal training and awareness programs is a relatively simple way to ensure this happens: when keeping secrets out of code becomes a routine part of the development process, you avoid exposing them during a CI/CD deployment. Similarly, it is essential to ensure that application secrets are not embedded in your Helm charts or Kubernetes YAML files.&lt;/p&gt;
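&lt;p&gt;As a quick illustration (the names below are hypothetical), a value can live in a Kubernetes Secret created out-of-band and be referenced from the pod spec, rather than hard-coded in a chart or manifest:&lt;/p&gt;

```yaml
# Hypothetical example: reference a Secret at runtime instead of
# embedding the value in the manifest or Helm chart.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api            # hypothetical workload name
spec:
  containers:
    - name: app
      image: registry.example.com/payments-api:1.4.2
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: payments-db-credentials   # created out-of-band, e.g. from a vault
              key: password
```

&lt;p&gt;The manifest itself never contains the password, so it can be committed to Git safely.&lt;/p&gt;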

&lt;h2&gt;
  
  
  CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;Moving on to your CI/CD pipeline, one critical item is to establish a sound security posture, especially before anything reaches production. It is a best practice to test and scan container images for vulnerabilities before they are uploaded to your container registry (or repo). Many tools on the market help with this and can be embedded directly into your CI/CD pipeline, ensuring this critical action is taken every time as an automated part of your development process. This also helps meet another critical requirement: a review and approval process for third-party container images. The same tools you use for your own container images can scan third-party images before they run in your cluster. For your container base OS, it’s a good idea to use a lightweight base operating system for your container image that still includes the shell and debugging tools you require. However, which OS fits best is customer-dependent and must be chosen case by case for your needs.&lt;/p&gt;
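&lt;p&gt;As one hedged sketch of what this can look like (using Trivy, one of many such scanners, in GitHub Actions syntax; the image name and job layout are illustrative, and Trivy is assumed to be installed on the runner), the scan step fails the pipeline before the push step ever runs:&lt;/p&gt;

```yaml
# Illustrative CI job: build, scan, and only push if the scan passes.
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Scan image for vulnerabilities
        run: |
          trivy image --severity HIGH,CRITICAL --exit-code 1 \
            registry.example.com/app:${{ github.sha }}
      - name: Push image            # only runs if the scan step succeeded
        run: docker push registry.example.com/app:${{ github.sha }}
```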

&lt;h2&gt;
  
  
  Kubernetes Cluster Lifecycle Management
&lt;/h2&gt;

&lt;p&gt;At the Kubernetes cluster level, there are many critical requirements for enterprise-level K8s management – so many that coming up with an exhaustive list would turn this into an eBook. With that in mind, we are going to focus on the most critical items from our customers – but keep in mind this is not a complete list.&lt;/p&gt;

&lt;p&gt;First, look at your cluster configuration as it relates to HA (high availability). To achieve enterprise-grade Kubernetes, ensure that your K8s master is architected and deployed in a multi-master HA configuration to avoid a single point of failure. Fortunately, many top managed K8s providers (like Amazon EKS) make this simple by deploying the K8s master in an HA configuration with three masters across availability zones (AZs). When it comes to upstream K8s, there are some solutions that deploy upstream K8s in a multi-master HA configuration by default.&lt;/p&gt;

&lt;p&gt;When it comes to K8s versioning and upgrades, creating an upgrade strategy that fits your availability and reliability needs is critical to minimizing disruption to your workloads. Kubernetes and its ecosystem receive frequent updates: security patches, bug fixes, and new features. It is a good idea to regularly upgrade your clusters to the version that meets your quality and stability requirements. Doing so, however, requires a reliable, repeatable, and efficient upgrade process and tooling. Built-in monitoring tools help ensure your administrators have complete visibility and insight into your Kubernetes environments and can promote upgrades in a controlled and predictable manner – another key to K8s success.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workload Administration
&lt;/h2&gt;

&lt;p&gt;Finally, let’s address some important topics around workload administration. A workload is an application running in one or more K8s pods. It’s a best practice to develop a labelling scheme that helps simplify management and consistency using parameters such as location, which could be a physical location (e.g., country, city) or cloud provider, an environment (e.g., Dev, Test, Prod), the application (e.g., Finance, CRM, HR), and the role (e.g., Web, DB). Having consistent labelling across all K8s environments makes policies more flexible and effective.&lt;/p&gt;
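&lt;p&gt;A minimal sketch of such a labelling scheme (the keys and values here are examples, not a fixed standard):&lt;/p&gt;

```yaml
# Illustrative labels applied consistently to a Deployment and its pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crm-web
  labels:
    location: us-east        # physical location or cloud provider
    environment: prod        # Dev / Test / Prod
    application: crm         # Finance / CRM / HR ...
    role: web                # Web / DB ...
spec:
  selector:
    matchLabels:
      application: crm
      role: web
  template:
    metadata:
      labels:
        location: us-east
        environment: prod
        application: crm
        role: web
    spec:
      containers:
        - name: web
          image: registry.example.com/crm-web:2.0.1
```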

&lt;p&gt;To keep clusters healthy, it is also essential to specify resource requests and limits (e.g., CPU and memory) and to apply resource quotas at the namespace level. Resource quotas, for example, help guarantee compute resources while helping to control costs. With best-in-class monitoring tools, you can see any pods that are misconfigured and address them as needed.&lt;/p&gt;
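&lt;p&gt;For illustration, a hypothetical namespace-level ResourceQuota paired with per-container requests and limits might look like this:&lt;/p&gt;

```yaml
# Cap aggregate usage for a team namespace, and set per-container
# requests/limits so the scheduler can place pods predictably.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a            # hypothetical namespace
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: api
  namespace: team-a
spec:
  containers:
    - name: api
      image: registry.example.com/api:1.0.0
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: "1"
          memory: 512Mi
```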

&lt;p&gt;Another critical topic is access control. Integrating with Kubernetes Role-Based Access Control (RBAC) and defining cluster-wide permissions is a best practice. Securing access to the Kubernetes API is also critical: controlling and limiting who can access clusters, and what actions they are allowed to perform, is the first line of defense. Identifying “who” needs “what” access to “which” resource becomes challenging, especially at scale, leading many to look for a unified way of managing access across clusters and clouds.&lt;/p&gt;
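&lt;p&gt;As an example of defining cluster-wide permissions with RBAC (the group and role names below are hypothetical), a read-only ClusterRole can be bound to an operations group:&lt;/p&gt;

```yaml
# Cluster-wide, read-only access for an SRE group; the group name
# would come from your identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-read-only
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sre-read-only
subjects:
  - kind: Group
    name: sre-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-read-only
  apiGroup: rbac.authorization.k8s.io
```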

&lt;h2&gt;
  
  
  Modernization Simplified
&lt;/h2&gt;

&lt;p&gt;Kubernetes offers the promise of modernization by simplifying the deployment and management of container-based workloads on-premises, in the public cloud and at the edge. Enterprises have widely deployed containers but are still working to realize the promised agility and, ultimately, business value, primarily due to operational challenges — it is hard to operate and manage Kubernetes clusters at scale. Organizations should consider these best practices along with a centralized SaaS platform that scales easily and fully supports the K8s technology ecosystem, allowing for an easier path to adoption without having to do it all yourself.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>programming</category>
      <category>enterprise</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Securing Access to Kubernetes Environments with Zero Trust</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Fri, 29 Jul 2022 21:51:24 +0000</pubDate>
      <link>https://dev.to/kylekhunter/securing-access-to-kubernetes-environments-with-zero-trust-60k</link>
      <guid>https://dev.to/kylekhunter/securing-access-to-kubernetes-environments-with-zero-trust-60k</guid>
      <description>&lt;p&gt;Modern IT environments are becoming more dynamic by the day. Kubernetes, for example, is pushing the boundaries of what’s possible for many IT organizations.&lt;/p&gt;

&lt;p&gt;The benefits of the open source technology to automate deployment, scalability and management of containerized applications are numerous. In particular, IT teams are taking advantage of its power, efficiency and flexibility to develop modern applications quickly and deliver them at scale.&lt;/p&gt;

&lt;p&gt;However, the process of ensuring hardened security practices in Kubernetes environments is a growing challenge. As a more significant number of development and production Kubernetes clusters spread across on-premises data centers, multiple public cloud providers and edge locations, this relatively new and dynamic operating model creates major complexity for controlling access.&lt;/p&gt;

&lt;p&gt;Since most teams have multiple clusters running in multiple locations — oftentimes with different distributions and management interfaces — enterprise IT needs to account for the teams of developers, operators, contractors and partners who require varying levels of access.&lt;/p&gt;

&lt;p&gt;Given the distributed and expansive nature of Kubernetes, IT has to do everything possible to ensure access security to avoid the &lt;a href="https://techgenix.com/5-kubernetes-security-incidents/?utm_source=thenewstack&amp;amp;utm_medium=website&amp;amp;utm_campaign=platform"&gt;mistakes that are happening&lt;/a&gt;. Below, we’ll look at how to apply Kubernetes zero-trust principles to secure an entire environment, providing zero-trust security for containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Zero-Trust Access for Kubernetes Clusters
&lt;/h2&gt;

&lt;p&gt;As a security model that automatically assumes all people, systems and services operating in and between networks cannot be trusted, zero trust is emerging as the best technique to prevent malicious attacks. Based on authentication, authorization and encryption technologies, the purpose of zero trust is to continuously validate security configurations and postures to ensure trust across an environment.&lt;/p&gt;

&lt;p&gt;Here’s a basic understanding of how Kubernetes works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The core of the Kubernetes control plane for each cluster is the Kubernetes API server.&lt;/li&gt;
&lt;li&gt;API calls are used to query and manipulate the state of all Kubernetes objects.&lt;/li&gt;
&lt;li&gt;Kubernetes objects include namespaces, pods, configuration maps and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Controlling access to the API is the critical function in managing Kubernetes access and accomplishing zero trust. The first step in securing access to Kubernetes clusters is to protect traffic to and from the API server with Transport Layer Security (TLS).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RqUXatZF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98k1nhfxhugly2xfp24i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RqUXatZF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98k1nhfxhugly2xfp24i.jpg" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;API server best practices for implementing zero trust:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable TLS everywhere.&lt;/li&gt;
&lt;li&gt;Use a private endpoint for the API server.&lt;/li&gt;
&lt;li&gt;Use third-party authentication for the API server.&lt;/li&gt;
&lt;li&gt;Close firewall inbound rules to the API server, ensuring it is cloaked and not directly accessible from the Internet.&lt;/li&gt;
&lt;/ul&gt;
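&lt;p&gt;These practices map onto kube-apiserver configuration; a brief excerpt from a static pod manifest (the certificate paths are illustrative) might include:&lt;/p&gt;

```yaml
# Excerpt only: flags that support the practices above.
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt       # TLS everywhere
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
        - --client-ca-file=/etc/kubernetes/pki/ca.crt             # verify client certificates
        - --anonymous-auth=false                                  # no unauthenticated access
        - --authorization-mode=Node,RBAC
```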

&lt;p&gt;After securing the transport layer, Kubernetes also includes the necessary hooks to implement zero-trust and control API server access for each Kubernetes cluster. These hooks represent four critical areas of a hardened security posture for Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication&lt;/li&gt;
&lt;li&gt;Authorization&lt;/li&gt;
&lt;li&gt;Admission control&lt;/li&gt;
&lt;li&gt;Logging and auditing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Authentication for Kubernetes
&lt;/h2&gt;

&lt;p&gt;With zero trust, all user-level and service-oriented accounts tied to Kubernetes clusters must be authenticated before executing an API call. Security modules and plugins are widely available for Kubernetes to ensure that the platform will operate effectively with a team’s preferred authentication system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP Basic Auth&lt;/li&gt;
&lt;li&gt;Authentication Proxy (to support LDAP, SAML, Kerberos, etc.)&lt;/li&gt;
&lt;li&gt;Client certificates&lt;/li&gt;
&lt;li&gt;Bearer tokens&lt;/li&gt;
&lt;li&gt;OpenID Connect tokens&lt;/li&gt;
&lt;li&gt;Webhook Token authorization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Common best practices for authentication include enabling at least two authentication methods (multifactor authentication, or MFA) and rotating client certificates regularly.&lt;/p&gt;
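&lt;p&gt;For example, delegating authentication to an external OpenID Connect provider is configured with kube-apiserver flags like the following (the issuer URL and client ID are placeholders):&lt;/p&gt;

```yaml
# Fragment of a kube-apiserver command list: OIDC authentication.
- --oidc-issuer-url=https://idp.example.com
- --oidc-client-id=kubernetes
- --oidc-username-claim=email
- --oidc-groups-claim=groups
```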

&lt;h2&gt;
  
  
  Authorization for Kubernetes
&lt;/h2&gt;

&lt;p&gt;The risk of letting every authenticated user or service account carry out any possible action in a Kubernetes cluster must be mitigated. With zero trust, the idea is that a request is only authorized if the authenticated user has the necessary permission to complete the requested action. For each request made, this model requires specification of the username, the action and the objects affected in the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;There are numerous methods that Kubernetes supports for authorization, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attribute-based access control, or ABAC, authorizes access dynamically based on a combination of user, environment and resource attributes.&lt;/li&gt;
&lt;li&gt;Role-based access control, or RBAC, authorizes access based on the user’s role in the organization, such as developer, admin, security, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Organizations most commonly use RBAC, as its practical nature allows for easier management controls and provides the granularity needed for most use cases.&lt;/p&gt;

&lt;p&gt;ABAC can provide additional granularity but requires more time and resources to define and configure properly, and troubleshooting an issue can be more challenging with the ABAC method. Therefore, it is common to enable RBAC with least privilege.&lt;/p&gt;
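&lt;p&gt;A minimal sketch of least-privilege RBAC (the namespace, group and role names are hypothetical): developers can manage pods in their own namespace and nothing cluster-wide:&lt;/p&gt;

```yaml
# Namespace-scoped Role plus a RoleBinding for a developer group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-editor
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: developers-pod-editor
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-editor
  apiGroup: rbac.authorization.k8s.io
```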

&lt;h2&gt;
  
  
  Admission Control for Kubernetes
&lt;/h2&gt;

&lt;p&gt;Admission controllers provide a way to implement business logic to refine a zero-trust approach to Kubernetes. The purpose of admission controllers is to enable the system to automatically act on requests that create, modify, delete or connect to Kubernetes objects. Enabling multiple admission controllers may be necessary to fit your organization’s needs, and if any one of them rejects a particular request, the system automatically rejects it as well.&lt;/p&gt;

&lt;p&gt;The variety of built-in admission controllers available today allows teams plenty of options for enforcing policies and implementing various actions. Dynamic controllers enable the rapid modification of requests to adhere to established rule sets. For example, the ResourceQuota admission controller observes incoming requests and ensures they don’t violate the constraints that have been listed in the ResourceQuota object for a namespace. See Using &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/?utm_source=thenewstack&amp;amp;utm_medium=website&amp;amp;utm_campaign=platform"&gt;Admission Controllers&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging and Auditing for Kubernetes
&lt;/h2&gt;

&lt;p&gt;Essential to a Kubernetes security posture, auditing capabilities provide a track record of the actions performed within a cluster. These capabilities can enable tracking of any action by any user, application and the control plane itself.&lt;/p&gt;

&lt;p&gt;There are four different types of audit levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;None – Don’t log this event&lt;/li&gt;
&lt;li&gt;Metadata – Log request metadata&lt;/li&gt;
&lt;li&gt;Request – Log event metadata and the request&lt;/li&gt;
&lt;li&gt;RequestResponse – Log event metadata, the request and the response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition to specifying audit levels, teams can also control where the audited events are logged. The log backend writes events to the cluster’s local filesystem, while the webhook backend sends audit events to an external logging system.&lt;/p&gt;
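&lt;p&gt;The levels above are expressed in an audit Policy object; here is a hedged sketch (the resource choices are illustrative), with the backend flags noted in comments:&lt;/p&gt;

```yaml
# kube-apiserver flags that wire this policy to a backend:
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml   (this file)
#   --audit-log-path=/var/log/kubernetes/audit.log          (log backend)
#   --audit-webhook-config-file=...                         (webhook backend)
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: None                      # don't log routine read-only traffic
    verbs: ["get", "watch", "list"]
    resources:
      - group: ""
        resources: ["endpoints"]
  - level: Metadata                  # request metadata only
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  - level: RequestResponse           # full request and response bodies
    resources:
      - group: ""
        resources: ["pods"]
```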

&lt;h2&gt;
  
  
  Scaling Zero-Trust Architecture
&lt;/h2&gt;

&lt;p&gt;While the different methods and practices described above provide the ability to create a zero-trust environment, configuring and aligning these individual elements properly becomes a more significant challenge when a Kubernetes footprint expands beyond a few clusters. Things get especially complicated when multiple workloads and Kubernetes distributions are involved. This challenge is not new, but is shared by many companies today.&lt;/p&gt;

&lt;p&gt;For example, let’s consider a scenario where a company is managing 100 Kubernetes clusters — ranging from development to QA to staging to prod — and the clusters are required to be geographically close to its global customer base for applications to work with real-time streams of video and audio data.&lt;/p&gt;

&lt;p&gt;There are three problems this company could encounter with regard to ensuring secure user access to Kubernetes clusters:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Assuming this company has a few hundred developers and a few dozen IT operations personnel, the painstaking task of manually adding and removing users from each cluster can create more problems than it solves.&lt;/li&gt;
&lt;li&gt;If, or more likely when, an incident occurs, the time it takes to remediate is critical. If access methods take those who troubleshoot the problem several minutes just to get logged into an affected cluster, problems could multiply.&lt;/li&gt;
&lt;li&gt;With log data spread across 100 clusters, the ability to have a holistic view of auditing and compliance reporting might be impossible.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Considerations for the Platform Team
&lt;/h2&gt;

&lt;p&gt;One of the many goals of an enterprise’s platform team is to help enable a globally distributed IT team that manages user access across all its clusters from a central location. The intention is to secure and govern access to a Kubernetes infrastructure effectively while making audit logging and compliance reporting much simpler.&lt;/p&gt;

&lt;p&gt;A platform team should consider implementing zero trust for Kubernetes to ensure that the best practices described earlier are applied and enforced to secure an entire Kubernetes environment. By eliminating the need to manually apply best practices on every cluster, the IT organization can operate Kubernetes at scale with far less risk.&lt;/p&gt;

&lt;p&gt;Here are three benefits for a platform team to consider when designing zero trust for Kubernetes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make RBAC ultra-flexible: If a team member changes roles, access permissions should be updated automatically so that no single person ever has too much or too little access.&lt;/li&gt;
&lt;li&gt;Make accessibility fast and streamlined: Eliminate delayed access to any cluster by providing an authorized user seamless access via secure single sign-on.&lt;/li&gt;
&lt;li&gt;Credentials for just-in-time scenarios: Service accounts for authorized users should be created on remote clusters with “just-in-time” access and removed automatically after the user logs out, thereby eliminating the chance of out-of-date credentials.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As the number of Kubernetes clusters and containerized applications expands, an organization is increasingly exposed to security risks that are not evident when operating just one or two clusters. As a result, platform teams need to enable a central, enterprise-grade level of security and control for both clusters and applications across their entire Kubernetes infrastructure.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to Bring Shadow Kubernetes IT into the Light</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Fri, 29 Jul 2022 21:28:19 +0000</pubDate>
      <link>https://dev.to/kylekhunter/how-to-bring-shadow-kubernetes-it-into-the-light-3p18</link>
      <guid>https://dev.to/kylekhunter/how-to-bring-shadow-kubernetes-it-into-the-light-3p18</guid>
      <description>&lt;p&gt;Shadow IT continues to be a challenge for IT leaders, but perhaps not in the sense that companies have seen in the past. Traditionally, shadow IT occurs within the application stack, which creates problems because the use of IT systems occurs without the approval, or even knowledge, of the corporate IT department.&lt;/p&gt;

&lt;p&gt;DevOps practices have emerged to help address these challenges and to unleash creativity and opportunity for modern software delivery teams. However, access to the cloud has made it easier for autonomous teams to set up their own tool sets. As a result, the shadow IT problem now manifests itself in a new way: in the tooling architecture.&lt;/p&gt;

&lt;p&gt;The explosion of container-based applications has made Kubernetes a vital resource for DevOps teams. But its &lt;a href="https://www.cncf.io/announcements/2022/02/10/cncf-sees-record-kubernetes-and-container-adoption-in-2021-cloud-native-survey/?utm_source=thenewstack&amp;amp;utm_medium=website&amp;amp;utm_campaign=platform"&gt;widespread adoption&lt;/a&gt; has led to the rapid creation of Kubernetes clusters with little regard for security and costs, either because users don’t understand the complex Kubernetes ecosystem or are simply moving too fast, in order to meet deadlines.&lt;/p&gt;

&lt;p&gt;This article explores the challenges associated with shadow Kubernetes admins and the benefits of centralizing with the IT department.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Shadow Kubernetes Admins?
&lt;/h2&gt;

&lt;p&gt;A shadow Kubernetes admin is a user who doesn’t wait for their IT department to provision Kubernetes clusters and instead turns to a cloud service provider to spin up Kubernetes clusters at will. Indeed, the freedom and flexibility of the cloud brings some significant business risks that IT leaders cannot ignore.&lt;/p&gt;

&lt;p&gt;A shadow Kubernetes admin account left unattended could lead to unexpected privilege grants when new roles are created. This happens because role bindings can refer to roles that no longer exist; if the same role name is later reused, the old binding silently grants the new role’s permissions. And with every new user, group, role and permission, lack of proper visibility and control increases the risk of human error, mismanagement of user privileges and malicious attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reining in Shadow Kubernetes Admins
&lt;/h2&gt;

&lt;p&gt;To gain control of shadow Kubernetes admins, we first must understand the challenges IT teams face.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limited Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, it can be very difficult to set up role-based access control (RBAC) for Kubernetes. Natively, it supports a number of different role types and assignment options, but these are hard to manage and track. As a result, Kubernetes admins must set up everything manually, cluster by cluster, and effectively provide the right level of access that each user needs within a cluster.&lt;/p&gt;

&lt;p&gt;Since Kubernetes is still a relatively new technology, there is an inherent talent gap in finding staff with the necessary skills and experience to administer and manage these environments properly. Now imagine the complexity of trying to scale and operate a distributed, multicluster, multicloud environment with that level of manual overhead. Not only is the process labor-intensive, but ripe for mistakes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Flexibility and On-Demand Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Running container-based applications in production goes well beyond Kubernetes. For example, IT operations teams often require additional services for tracing, logs, storage, security and networking. They may also require different management tools for Kubernetes distribution and compute instances across public clouds, on-premises, hybrid architectures or at the edge.&lt;/p&gt;

&lt;p&gt;Integrating these tools and services for a specific Kubernetes cluster requires that each tool or service is configured according to that cluster’s use case. The requirements and budgets for each cluster are likely to vary significantly, meaning that updating or creating a new cluster configuration will differ based on the cluster and the environment. As Kubernetes adoption matures and expands, there will be a direct conflict between admins, who want to lessen the growing complexity of cluster management, and application teams, who seek to tailor Kubernetes infrastructure to meet their specific needs.&lt;/p&gt;

&lt;p&gt;What magnifies these challenges even further is the pressure of meeting internal project deadlines — and the perceived need to use more cloud-based services to get the work done on time and within budget. If jobs are on the line, people will inevitably do whatever they feel they must do to get the work done, even if it means using tools and methods from outside the centralized IT system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Centralizing with Platform Teams
&lt;/h2&gt;

&lt;p&gt;While shadow Kubernetes IT is a growing challenge that IT must control, hindering the productivity of development and operations teams is a nonstarter. Through a centralized platform team model, however, IT can manage and enforce its own standards and policies for Kubernetes environments and prevent shadow admins altogether. IT can allow multiple teams to run applications on a common, shared infrastructure that is managed, secured and governed by the enterprise platform team. Doing so can provide the following benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standardization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Define and maintain preapproved cluster and application configurations that can be reused across infrastructure and tooling architecture. Doing so not only reduces the complexity of manual cluster management, but by centrally standardizing these configurations, it enables development and operations teams to automate workflows and accelerate delivery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repeatability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create and provide multiple pipelines with predefined workflows and approvals across Kubernetes clusters and application deployments to create consistency from a self-service model throughout the organization. Clearly defined and repeatable processes help to scale Kubernetes environments and optimize resources for project deliverables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enable downstream configurable access control to clusters and workloads so developers and operations can connect users or groups appropriately. Integrating preexisting security practices and other centralized systems with cluster and application lifecycle management operations becomes the norm. RBAC techniques and user-based audit logs across clusters and environments help manage authorization and prevent errors that lead to attacks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Define and maintain preapproved cluster policies, along with backup and recovery policies, to be enforced across the rest of the organization. Enforce best practices and reject noncompliant requests to your Kubernetes infrastructure and applications in order to comply with corporate policies and industry regulations such as HIPAA and PCI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Considerations
&lt;/h2&gt;

&lt;p&gt;Enabling a centralized view into all clusters across any environment, including on-premises and public cloud environments such as AWS, Azure, GCP and OCI, can also help address issues faster. With one unified source of truth, individuals and teams can collectively pinpoint the cause of any health or performance issues related to clusters. Gaining real-time insights into cluster health and performance helps teams optimize Kubernetes and stay within budget.&lt;/p&gt;

&lt;p&gt;Due to the open nature of Kubernetes, it is very easy to make mistakes on clusters that can lead to security risks and deployment issues, and security breaches are likely to happen over time. There is a constant need for maintenance, patching and upgrades in any type of environment. Recently, &lt;a href="https://apiiro.com/blog/malicious-kubernetes-helm-charts-can-be-used-to-steal-sensitive-information-from-argo-cd-deployments/?utm_source=thenewstack&amp;amp;utm_medium=website&amp;amp;utm_campaign=platform"&gt;Apiiro uncovered a serious vulnerability&lt;/a&gt; that gave attackers the opportunity to access sensitive information, such as secrets, passwords and API keys. A centralized platform can help the organization prepare for incidents proactively by maintaining a high level of control, security posture, policy compliance and auditability across the organization.&lt;/p&gt;

&lt;p&gt;Kubernetes, while a powerful technology, can bring many operational challenges to enterprise platform teams. With the menace of shadow Kubernetes IT growing, platform teams have a challenging road ahead to deliver a solution that enables developer productivity, centralizes governance and policy management, and reduces operational overhead.&lt;/p&gt;

&lt;p&gt;Thankfully, there are SaaS platform solutions available that allow platform teams to focus primarily on delivering modern applications, not on managing and operating Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rafay.co/platform/kubernetes-operations-platform/?utm_source=thenewstack&amp;amp;utm_medium=website&amp;amp;utm_campaign=platform"&gt;Rafay’s Kubernetes Operations Platform&lt;/a&gt;, for example, works with any infrastructure and provides deep integrations with Kubernetes distributions to accelerate the operational readiness of platform teams to manage, secure and govern Kubernetes at scale within hours. With Rafay, enterprises take advantage of the numerous platform services, such as multicluster management, GitOps, zero-trust access service, Kubernetes policy management, backup and restore, and visibility and monitoring.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>kubernetes</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Shared Kubernetes Clusters for Hybrid and Multicloud</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Tue, 26 Jul 2022 19:36:00 +0000</pubDate>
      <link>https://dev.to/kylekhunter/shared-kubernetes-clusters-for-hybrid-and-multicloud-2o97</link>
      <guid>https://dev.to/kylekhunter/shared-kubernetes-clusters-for-hybrid-and-multicloud-2o97</guid>
      <description>&lt;p&gt;Now more than ever, hybrid and multicloud deployments are quickly becoming key enterprise requirements. As Kubernetes adoption in an enterprise grows, effectively managing multicluster deployments becomes increasingly critical to application delivery. To bring Kubernetes usage and hybrid/multicloud infrastructure together, IT organizations need a modern operating model for shared K8s clusters in hybrid and multicloud architectures.&lt;/p&gt;

&lt;p&gt;The impetus for choosing enterprise hybrid and multicloud deployment varies, but the challenges and opportunities remain regardless of an organization’s infrastructure journey. Whether purposefully undertaken as an IT strategy or as the result of prior infrastructure investment, many IT leaders are discovering the benefits of using more than one infrastructure approach simultaneously. Container orchestration, in many respects, is the next logical step. Managing Kubernetes in a hybrid and multicloud context, however, comes with unique challenges.&lt;/p&gt;

&lt;p&gt;This article outlines the different hybrid/multicloud approaches and the different kinds of workloads across clouds and data center environments. It explains how K8s is used for hybrid and multicloud environments, enabling operations across private and public clouds, and the challenges to consider when managing single-tenant environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Realities of the Hybrid and Multicloud Approaches
&lt;/h2&gt;

&lt;p&gt;Often, enterprise Kubernetes environments expand over time as new workloads and clusters are added, typically with different cloud services and Kubernetes distributions. On-premises workloads may already be running to maintain full compliance and regulatory control, while some customers may leverage past infrastructure in order to realize the financial benefits of depreciation. Managed Kubernetes services, such as Microsoft AKS and Amazon EKS, could be used to extend computing resources or take advantage of deeper integration with public cloud services.&lt;/p&gt;

&lt;p&gt;These requirements come together into a hybrid situation, where you may be using Kubernetes both on premises and in the cloud. To provision and manage clusters in different environments, many IT teams juggle siloed environments and multiple consoles, undercutting the agility that cloud adoption promised.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shared K8s Clusters in the Enterprise
&lt;/h2&gt;

&lt;p&gt;Enterprise Kubernetes environments need a multicluster management strategy that can grow and scale while also addressing the challenges posed by hybrid and multicloud infrastructure. A shared services platform (SSP) is an old concept, but one that can be applied to Kubernetes. Doing so provides your organization with practical benefits — notably, a single management console that gives the IT organization greater visibility into its clusters. The enterprise platform team can pretest, blueprint and standardize platform services, security and policies, ensuring consistent configuration across environments. This in turn improves developer productivity throughout the organization and enables faster go-to-market through self-service and by reducing time lost to errors, extra troubleshooting and downtime.&lt;/p&gt;

&lt;p&gt;In bringing Kubernetes into a hybrid cloud environment through an SSP for Kubernetes model, platform teams should consider these best practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Centralize control over Kubernetes clusters and workload configurations.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Manage your clusters and workloads all in one place. Centralized deployment and management empower IT admins and standardize configurations across the platform. A central location is also helpful for regaining control over &lt;a href="https://thenewstack.io/how-to-bring-shadow-kubernetes-it-into-the-light/"&gt;shadow IT&lt;/a&gt; by returning management, security and governance back to the IT organization.&lt;/p&gt;

&lt;p&gt;With full visibility, the platform team can govern, isolate and monitor usage at any time from one console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provide Kubernetes self-service clusters and workload configurations.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enable self-service access with preapproved configurations so developers can scale Kubernetes deployments. With downstream access to pipelines and defined workloads, DevOps can readily use self-service infrastructure and tooling for cluster and app deployment. This accelerates delivery and optimizes access to resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make ongoing security and compliance challenges easy to manage.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With a secured Kubernetes environment, IT organizations monitor and control identity and access with ease from a centralized location. Additionally, implementing &lt;a href="https://thenewstack.io/securing-access-to-kubernetes-environments-with-zero-trust/"&gt;zero-trust security&lt;/a&gt; simplifies access control.&lt;/p&gt;

&lt;p&gt;Taking the right approach to K8s management empowers DevOps and the IT organization to get the most out of bringing Kubernetes and hybrid/multicloud together. Over time, this strategy delivers greater business value through reduced operational overhead, centralized governance and policy management, and greater developer productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Bring Shared K8s Clusters Together with Hybrid and Multicloud
&lt;/h2&gt;

&lt;p&gt;Managing and operating Kubernetes in a hybrid environment can shift attention away from applications unless a robust shared services platform strategy is in place. To build this platform, IT leaders should focus on self-service, unified cluster lifecycle management, repeatable workflows and centralized, automated cluster and application provisioning.&lt;/p&gt;

&lt;p&gt;To realize the benefits of this transition:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leverage Team Expertise&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many organizations face a Kubernetes skills gap, but whatever expertise they do have in building and maintaining custom software supply chains should be leveraged to shape the broader internal Kubernetes environment. The team should be equipped to roll out unified management across multiple clusters, clouds and infrastructures effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Establish Flexibility and Control&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For scalable operations, flexibility and control are a must — centralizing the delivery of Kubernetes-related services makes standardized workflows, increased automation and optimized application delivery and support for multiple teams more feasible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enable Developer Self-Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Increasing development and operations team collaboration through efficient and repeatable DevOps workflows enables developers to focus on their code, not on underlying infrastructure. Multicluster, continuous deployment capabilities make it possible to increase efficiencies, implement best practices and protect against cluster inconsistencies.&lt;/p&gt;

&lt;p&gt;To do so, organizations are adopting a GitOps methodology — using Git tooling and workflows through ArgoCD, Flux, Rafay Systems or another tool or service. This reduces human error and allows developers to manage more clusters at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintain Centralized Security, Networking, Compliance and Cost Control
&lt;/h2&gt;

&lt;p&gt;Unless organizations can protect against shadow Kubernetes admins making divergent management, policy and operational decisions, IT teams will lose the benefits of a single platform. The platform team should maintain visibility and use centralization best practices like these to strengthen the SSP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplified identity and access control: Use a zero-trust environment with role-based access control (RBAC) to streamline secure access.&lt;/li&gt;
&lt;li&gt;Centralized monitoring and aggregation of metrics: Review cluster and app health, usage and metrics via a unified platform.&lt;/li&gt;
&lt;li&gt;Governance and fleet-wide policy management: Apply the same policies across clusters, workloads and resources.&lt;/li&gt;
&lt;/ul&gt;
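&lt;p&gt;As a toy illustration of fleet-wide policy management, a platform team might validate every cluster against a shared baseline and surface violations centrally. The sketch below is hypothetical: the policy fields, cluster attributes and names are invented for the example and do not correspond to any particular product's schema.&lt;/p&gt;

```python
# Minimal sketch: validate cluster configs against a fleet-wide policy baseline.
# Policy keys and cluster data are hypothetical, for illustration only.

FLEET_POLICY = {
    "rbac_enabled": True,       # zero-trust: RBAC must be on everywhere
    "audit_logging": True,      # every action must be auditable
    "max_kubernetes_skew": 1,   # clusters may lag the fleet by at most 1 minor version
}

def policy_violations(cluster: dict, newest_minor: int) -> list[str]:
    """Return a list of human-readable violations for one cluster."""
    violations = []
    if not cluster.get("rbac_enabled"):
        violations.append("RBAC disabled")
    if not cluster.get("audit_logging"):
        violations.append("audit logging disabled")
    if newest_minor - cluster.get("minor_version", 0) > FLEET_POLICY["max_kubernetes_skew"]:
        violations.append("Kubernetes version too far behind fleet")
    return violations

fleet = [
    {"name": "prod-us", "rbac_enabled": True, "audit_logging": True, "minor_version": 27},
    {"name": "dev-eu", "rbac_enabled": False, "audit_logging": True, "minor_version": 25},
]

newest = max(c["minor_version"] for c in fleet)
report = {c["name"]: policy_violations(c, newest) for c in fleet}
```

&lt;p&gt;The point of the sketch is the shape of the workflow: one baseline, applied identically to every cluster, producing a single report the platform team can act on.&lt;/p&gt;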

&lt;p&gt;Though a Kubernetes and hybrid/multicloud approach brings unique challenges from an operational and security perspective, centralized management and deployment can mitigate much of the risk this presents for your organization. An SSP for Kubernetes is an essential component of IT strategy, allowing platform teams to manage clusters and applications across all cloud and data center environments.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>kubernetes</category>
      <category>multicloud</category>
      <category>programming</category>
    </item>
    <item>
      <title>5 Practices for Kubernetes Operations with Amazon EKS</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Tue, 26 Jul 2022 17:20:24 +0000</pubDate>
      <link>https://dev.to/kylekhunter/5-practices-for-kubernetes-operations-with-amazon-eks-4456</link>
      <guid>https://dev.to/kylekhunter/5-practices-for-kubernetes-operations-with-amazon-eks-4456</guid>
      <description>&lt;p&gt;In the past several years, organizations of all sizes and verticals have helped to accelerate their IT development pipelines using containerized applications orchestrated by Kubernetes (K8s) and the cloud. But to achieve optimum efficiency, many of these organizations are looking to add other management services.&lt;/p&gt;

&lt;p&gt;One of the most popular choices for managed Kubernetes is &lt;a href="https://aws.amazon.com/eks/?utm_source=thenewstack&amp;amp;utm_medium=website&amp;amp;utm_campaign=platform"&gt;Amazon Elastic Kubernetes Service (EKS)&lt;/a&gt;. But as organizations expand adoption of Amazon EKS, the number of K8s clusters and apps can lead to significant operational challenges, including observability, upgrade management, security and developer productivity.&lt;/p&gt;

&lt;p&gt;To address these challenges, a platform/site reliability engineering (SRE) team must look for scalable ways to securely manage all their EKS clusters across all accounts and regions.&lt;/p&gt;

&lt;p&gt;Be it spot-based worker node provisioning, &lt;a href="https://aws.amazon.com/eks/eks-distro/?utm_source=thenewstack&amp;amp;utm_medium=website&amp;amp;utm_campaign=platform"&gt;Amazon EKS Distro (EKS-D)&lt;/a&gt; or secure application deployment across multiple EKS clusters configured with private endpoints, platform teams need to centralize management to create a holistic approach to operating Kubernetes clusters on AWS.&lt;/p&gt;

&lt;p&gt;This article covers ways teams can streamline the use of Amazon EKS and maximize the benefits of this robust Kubernetes management solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Filling the Operational Gap for Kubernetes
&lt;/h2&gt;

&lt;p&gt;Enterprises trying to scale modern applications often encounter an “operational gap” between what their Kubernetes strategy enables and what their organization needs to thrive.&lt;/p&gt;

&lt;p&gt;This operational gap is largely driven by three common factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster scale: standardization becomes increasingly complicated and challenging as the number of clusters grows.&lt;/li&gt;
&lt;li&gt;Cluster geography: a growing number of availability zones and AWS regions makes managing applications and infrastructure increasingly difficult.&lt;/li&gt;
&lt;li&gt;Access control: as more people in the organization see the benefits of K8s and want to use it, configuring and maintaining access control cluster by cluster becomes unscalable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For large enterprises using Kubernetes or a managed service like EKS, it is essential to enable the following capabilities to get the most out of the platform and help bridge these operational gaps. Let’s start by exploring these core areas and the important questions that come with them:&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation
&lt;/h2&gt;

&lt;p&gt;The key question to ask early on is this: “How can we streamline all the cluster and application deployments to keep up with the demands of the business?”&lt;/p&gt;

&lt;p&gt;Enterprises operating multiple clusters often run into a common challenge: managing the life cycle of entire cluster fleets in parallel. The key is to create operational practices for automating cluster and application deployments, Kubernetes upgrades and administrative tasks. This will reduce errors, increase productivity and deliver faster time to market for modern applications.&lt;/p&gt;

&lt;p&gt;First, enable continuous deployment through a &lt;a href="https://rafay.co/platform/gitops-pipelines/?utm_source=thenewstack&amp;amp;utm_medium=website&amp;amp;utm_campaign=platform"&gt;GitOps operating model&lt;/a&gt; (with a version control system as the source of truth) to automatically deploy changes to Kubernetes clusters. The ability to create any number of pipelines, each consisting of multiple stages executed sequentially, helps centralize every aspect of the process for managing both operations and development.&lt;/p&gt;
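&lt;p&gt;Conceptually, such a pipeline is just an ordered list of stages that run one after another, halting on the first failure so later stages (like a production deploy) never run against a broken build. A minimal sketch, where the stage names and their logic are hypothetical placeholders rather than any real pipeline definition:&lt;/p&gt;

```python
# Minimal sketch of a sequential GitOps-style pipeline: stages run in order,
# and a failure stops the pipeline before later stages execute.

def run_pipeline(stages):
    """Run (name, callable) pairs in order; return (completed names, success)."""
    completed = []
    for name, stage in stages:
        try:
            stage()
        except Exception:
            return completed, False   # halt on first failure
        completed.append(name)
    return completed, True

# Hypothetical stages; in practice these would apply manifests from Git.
def lint_manifests(): pass
def deploy_staging(): pass
def smoke_tests(): raise RuntimeError("tests failed")
def deploy_production(): pass

stages = [
    ("lint-manifests", lint_manifests),
    ("deploy-staging", deploy_staging),
    ("smoke-tests", smoke_tests),
    ("deploy-production", deploy_production),
]

completed, ok = run_pipeline(stages)
```

&lt;p&gt;Here the failed smoke test means production is never touched, which is exactly the safety property sequential stages buy you.&lt;/p&gt;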

&lt;p&gt;Second, establish the simplest possible process for upgrading Kubernetes versions on Amazon EKS or Amazon EKS-D clusters, whether that means in-place upgrades or migration to a new cluster. Focus on automating preflight checks, cluster upgrades and validation of the changes to simplify and standardize application lifecycle management. By automating mundane tasks, admins can lower the likelihood of human error, increase overall productivity and allow their teams to focus on innovation.&lt;/p&gt;
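&lt;p&gt;To make the preflight idea concrete, here is a minimal, hypothetical sketch: a gate that refuses to start an upgrade until every check passes. The specific checks (node readiness, deprecated API usage, backup age) and the cluster-state fields are assumptions chosen for the example, not an exhaustive or product-specific list.&lt;/p&gt;

```python
# Minimal sketch: run preflight checks before a cluster upgrade and only
# proceed when all pass. Check names and cluster state are hypothetical.

def nodes_ready(cluster): return all(cluster["nodes_ready"])
def no_deprecated_apis(cluster): return not cluster["deprecated_apis_in_use"]
def backup_recent(cluster): return cluster["hours_since_backup"] < 24

PREFLIGHT_CHECKS = [nodes_ready, no_deprecated_apis, backup_recent]

def can_upgrade(cluster: dict) -> tuple[bool, list[str]]:
    """Return (ok, names of failed checks)."""
    failed = [check.__name__ for check in PREFLIGHT_CHECKS if not check(cluster)]
    return not failed, failed

cluster = {
    "nodes_ready": [True, True, True],
    "deprecated_apis_in_use": ["extensions/v1beta1/Ingress"],
    "hours_since_backup": 6,
}

ok, failed = can_upgrade(cluster)
```

&lt;p&gt;Because the gate returns the names of failing checks, the same automation that blocks the upgrade can also tell the operator exactly what to fix first.&lt;/p&gt;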

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;The next important question is: “How can we secure all our clusters and applications across multiple AWS zones and regions, restrict usage to the right people and make sure all actions can be audited?” Most large IT organizations use identity management and access control for business applications, but creating and maintaining roles becomes crucial in multicluster environments where, for the sake of efficiency, a single AWS admin may be assigned to a group of clusters. This creates an inherent security risk: an attacker who breaches that one account gains access to all the clusters within.&lt;/p&gt;

&lt;p&gt;Consider strengthening your security posture with role-based access control (RBAC) and zero-trust access, governed by policies and integrated with your corporate single sign-on solution. This helps ensure that all applications require strong authentication and secure credentials, and treats every network connection as untrustworthy unless proven otherwise.&lt;/p&gt;

&lt;p&gt;The goal is to allow the right users to access clusters from anywhere — even from behind firewalls — while maintaining a full audit trail by user and commands executed.&lt;/p&gt;
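&lt;p&gt;The deny-by-default idea behind zero-trust access, combined with a per-command audit trail, can be sketched in miniature. The roles, permissions and commands below are hypothetical, purely to show the shape of the check: nothing is allowed unless a role explicitly grants it, and every attempt is recorded either way.&lt;/p&gt;

```python
# Minimal sketch of zero-trust access plus an audit trail: every command is
# denied unless the user's role explicitly allows it, and every attempt is
# logged. Roles and commands are hypothetical.

ROLE_PERMISSIONS = {
    "viewer": {"get", "list"},
    "operator": {"get", "list", "apply", "delete"},
}

audit_log = []

def execute(user: str, role: str, command: str) -> bool:
    """Allow the command only if the role permits it; audit either way."""
    allowed = command in ROLE_PERMISSIONS.get(role, set())  # deny by default
    audit_log.append({"user": user, "command": command, "allowed": allowed})
    return allowed

execute("alice", "viewer", "get")      # permitted, still audited
execute("alice", "viewer", "delete")   # denied and audited
```

&lt;p&gt;The important design choice is that the audit entry is written before the permission result is returned, so denied attempts are just as visible as successful ones.&lt;/p&gt;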

&lt;h2&gt;
  
  
  Visibility
&lt;/h2&gt;

&lt;p&gt;One of the great things about Kubernetes is that it allows you to run applications across multiple regions, availability zones and clouds. To ensure resources are used effectively and managed across multiple accounts, clusters and AWS regions, platform/SRE teams need full visibility across their entire infrastructure, including on-premises and remote/edge locations, no matter which K8s distribution is employed.&lt;/p&gt;

&lt;p&gt;Understanding the status and health for every Amazon EKS and Amazon EKS-D cluster through a detailed, at-a-glance dashboard view is critical for production workloads. Having a single view of all clusters and apps makes it easier for cluster admins to visualize, diagnose and resolve incidents proactively and get the most out of Amazon EKS, especially as internal Kubernetes adoption increases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Governance
&lt;/h2&gt;

&lt;p&gt;Ensuring compliance with internal policies and industry regulations such as HIPAA, PCI DSS or GDPR is a fundamental requirement for a newly operational Kubernetes infrastructure. Generating automated workflows with standardized and approved templates for clusters and applications is critical.&lt;/p&gt;

&lt;p&gt;Consistency is key when governing the use of Kubernetes through policies, especially for elements such as security, storage and visibility across your entire K8s infrastructure. Ideally, different internal groups can use multiple sets of preapproved cluster configurations at different development stages. Doing so not only simplifies administration but helps to minimize the risk of mismanagement and vulnerabilities. This includes the ability to quickly detect, block and notify enterprise administrators of any changes within cluster and application configurations to eliminate out-of-bounds clusters and potential security and support issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Conceptually, Kubernetes allows for easier, faster and even disposable clusters that internal and external clients can use cost-effectively. However, to benefit from these fast and flexible clusters, enterprises need to implement new processes to build, integrate, access, maintain and upgrade K8s clusters.&lt;/p&gt;

&lt;p&gt;This requires hiring K8s experts, who are hard to find and keep because demand for the talent is high and supply is low. A centralized platform that reduces complexity and allows for streamlined operations becomes a key component of any successful large-scale Kubernetes deployment.&lt;/p&gt;

&lt;p&gt;Kubernetes is becoming an increasingly popular choice for enterprises that want to empower their IT organizations to operate at velocity and scale. But as organizations scale their Kubernetes practice to thrive in the cloud with tools like Amazon EKS, deeper integration can successfully fill the operational gaps with Kubernetes and help you get the most out of your cloud investment.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Innate Risk of VMware Tanzu with Broadcom Acquisition</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Mon, 25 Jul 2022 20:02:32 +0000</pubDate>
      <link>https://dev.to/kylekhunter/the-innate-risk-of-vmware-tanzu-with-broadcom-acquisition-28i0</link>
      <guid>https://dev.to/kylekhunter/the-innate-risk-of-vmware-tanzu-with-broadcom-acquisition-28i0</guid>
      <description>&lt;p&gt;Have you heard the news of &lt;a href="https://investors.broadcom.com/news-releases/news-release-details/broadcom-acquire-vmware-approximately-61-billion-cash-and-stock"&gt;Broadcom’s acquisition of VMware&lt;/a&gt; for $61B? I personally have had flashbacks of &lt;a href="https://www.dell.com/en-us/dt/corporate/newsroom/announcements/2016/09/20160907-01.htm"&gt;Dell’s acquisition of EMC&lt;/a&gt; (which included VMware). Both of these were groundbreaking in terms of being one of the largest tech deals of all-time. Both were seemingly hardware companies attempting to become a software player.&lt;/p&gt;

&lt;p&gt;As Broadcom navigates another acquisition, it raises the question: can the company integrate all of these disparate technologies successfully? History suggests Broadcom has its work cut out for it to stay competitive after this merger.&lt;/p&gt;

&lt;p&gt;This blog will cover what analysts and survey respondents are saying about the acquisition, the market shift towards Kubernetes and cloud computing, and a path to successfully navigate enterprise modernization.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Analysts &amp;amp; Users are Saying&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://www.forrester.com/blogs/vmware-customers-get-ready-for-broadcom-disruption/"&gt;Forrester&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“After all, VMware customers should be concerned especially if Broadcom follows the same playbook it used for its CA and Symantec acquisitions. Following these purchases, CA and Symantec customers saw massive price hikes, worsening support, and stalled development.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Broadcom does stand to benefit from the VMware acquisition as it looks to transform into a Software as a Service (SaaS) company. But this is potentially where many concerns are coming from. It is not an easy transition, and evolving from perpetual to as-a-service licensing takes time. In my experience, the friction of shifting both existing customers and employees, particularly sales, is the most challenging part of the transition.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://sso.451research.com/module.php/core/loginuserpass.php?AuthState=_55acd827cf6e19f06afe68aa435218945ebab523b1%3Ahttps%3A%2F%2Fsso.451research.com%2Fsaml2%2Fidp%2FSSOService.php%3Fspentityid%3Dpi_middleware.production%26RelayState%3Dhttps%253A%252F%252Faccess.451research.com%252Flogin-success%253Freturn_url%253D%252Freportaction%252F200630%252FToc%26cookieTime%3D1658779116"&gt;451 Research’s Voice of the Enterprise: Digital Pulse, Broadcom/VMware Acquisition Flash Survey 2022&lt;/a&gt; listed the following concerns by respondents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T9sgsJJI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mc3p8mgr70dw9rag5m7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T9sgsJJI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mc3p8mgr70dw9rag5m7.png" alt="Image description" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://www.theregister.com/2022/06/09/gartner_broadcom_vmware_advice/"&gt;The Register&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Are we at the final frontier for VMware? The truth is, no one knows what Broadcom’s portfolio will look like in the years to come, but one thing seems evident: both analysts and VMware customers have expressed concern, and sometimes outright negative sentiment, about the acquisition, listing slowed product innovation, talent exodus, rising licensing costs and degraded support as the biggest risks.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;VMware and Kubernetes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;VMware changed the way we leverage compute infrastructure by abstracting the concept of a server away from the limitations of physical server hardware – optimizing the efficiency and utilization of each server and driving down the cost of compute. With VMware vCenter, virtual machines (VMs) could be deployed, scaled, moved, and managed.&lt;/p&gt;

&lt;p&gt;As adoption of cloud computing continues to steadily rise, containers have become the way cloud-native applications are packaged, distributed, deployed, and managed. While many organizations are migrating VM-based applications to containers, VMs will be pervasive for some time.&lt;/p&gt;

&lt;p&gt;VMware’s solution to Kubernetes is its Tanzu portfolio of products and services. According to VMware, “Tanzu helps you build new apps, modernize existing ones, and evolve your software development process around cloud native technologies, patterns, and architectures.”&lt;/p&gt;

&lt;p&gt;The combination of Broadcom and VMware could end up being a very powerful force in the industry. But today’s sentiment is more doubtful than optimistic. Using the same Forrester reference above, “A combined Broadcom and VMware could create a behemoth that holistically tackles any workload modernization challenge, thereby delivering the greater goal for many enterprise customers: to embrace cloud-native without overdependency on any cloud provider. Will it? Given its track record, it does not seem likely. Ultimately, if you’re a VMware shop, you’ve got to make the call in the near future.”&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A New Path to Follow&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Many organizations are diversifying and extending their investment from VMs to containers. By centralizing management and operations for both VMs and Kubernetes, Rafay provides the ability to accelerate the migration of legacy apps to a cloud-native architecture – without being required to update legacy apps that use VMs. Learn more about &lt;a href="https://rafay.co/press-release/rafay-systems-launches-unified-kubernetes-operations-support-for-converged-infrastructure/"&gt;how Rafay supports converged infrastructure&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://www.esg-global.com/blog/developer-impact-of-the-broadcom-merger-with-vmware"&gt;Paul Nashawaty from ESG&lt;/a&gt;, a recent Enterprise Technology survey showed VMware Tanzu’s share of VMware’s business went from 49% in April 2021 to 32% in January 2022. And by April 2022, that number fell to just 20%. With the uncertainty and risk in mind, now is the time to start rethinking your VMware Tanzu strategy.&lt;/p&gt;

&lt;p&gt;If you’re considering moving away from VMware altogether, keep in mind that you don’t have to use another hypervisor. Rafay eliminates the VMware hypervisor and orchestrates both VMs and containers. Learn more about how &lt;a href="https://rafay.co/press-release/rafay-systems-announces-streamlined-operations-for-virtual-machine-based-applications-on-kubernetes/"&gt;Rafay streamlines VM and containerized applications&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ready to find out why so many enterprises and platform teams have partnered with Rafay for their cloud modernization initiatives? Learn why &lt;a href="https://rafay.co/why-rafay/why-vmware-tanzu-users-are-switching-to-rafay/"&gt;Tanzu customers are switching to Rafay&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>news</category>
    </item>
    <item>
      <title>Kubernetes Monitoring with Prometheus</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Wed, 29 Jun 2022 19:02:32 +0000</pubDate>
      <link>https://dev.to/kylekhunter/kubernetes-monitoring-with-prometheus-3jdn</link>
      <guid>https://dev.to/kylekhunter/kubernetes-monitoring-with-prometheus-3jdn</guid>
      <description>&lt;p&gt;Kubernetes monitoring is the process of gathering metrics from the Kubernetes clusters you operate to identify critical events and ensure that all hardware, software, and applications are operating as expected. Monitoring is essential to provide insight into cluster health, resource consumption, and workload performance. With the right monitoring, errors that occur in any layer of the stack can be quickly identified and corrected.&lt;/p&gt;

&lt;p&gt;There are many Kubernetes monitoring tools, including open-source tools like Prometheus and the &lt;a href="https://www.elastic.co/what-is/elk-stack"&gt;ELK Stack&lt;/a&gt; as well as commercial tools such as &lt;a href="https://www.datadoghq.com/"&gt;Datadog&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cloudwatch/"&gt;Amazon CloudWatch&lt;/a&gt;, and &lt;a href="https://newrelic.com/"&gt;New Relic&lt;/a&gt;. (You can learn more about other Kubernetes monitoring tools in this &lt;a href="https://rafay.co/the-kubernetes-current/best-practices-tools-and-approaches-for-kubernetes-monitoring/"&gt;recent Rafay blog&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;Of the open-source Kubernetes monitoring tools, Prometheus is among the most popular and widely used. This blog discusses the use of Prometheus to monitor Kubernetes and Kubernetes applications. It also describes how Rafay incorporates Prometheus to address the monitoring challenges that emerge as you move from managing a handful of Kubernetes clusters to managing a Kubernetes fleet.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Prometheus?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://prometheus.io/docs/introduction/overview/"&gt;Prometheus&lt;/a&gt; is an open-source event monitoring and alerting tool that was originally developed at SoundCloud starting in 2012, inspired by the Borgmon tool used at Google. Prometheus has been a Cloud Native Computing Foundation (CNCF) project since 2016; it was the second hosted project after Kubernetes. While this blog discusses Prometheus in the context of Kubernetes monitoring, it can satisfy a wide variety of monitoring needs.&lt;/p&gt;

&lt;p&gt;Prometheus collects and stores the metrics you specify as time series data. Metrics can be analyzed to understand the operational state of your cluster and its components.&lt;/p&gt;

&lt;p&gt;An important focus of Prometheus is reliability, which helps ensure that Prometheus remains accessible even when other parts of your environment are misbehaving. Each Prometheus server is standalone: a local time series database makes it independent of remote storage and other remote services. This makes it useful for rapidly identifying issues and receiving real-time feedback on system performance for the clusters and apps being monitored.&lt;/p&gt;

&lt;p&gt;The main components of Prometheus, including the Prometheus server and the Alertmanager, are shown in the figure below. Prometheus also provides a Pushgateway, which allows short-lived and batch jobs to be monitored. The Prometheus client library supports instrumenting application code. A powerful query language (PromQL) makes it possible to easily query Prometheus and drill down to understand what’s happening. While Prometheus offers a web UI, it is often used in combination with &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt; for more flexible visualization.&lt;/p&gt;
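&lt;p&gt;Under the hood, each scrape target serves its current metric values in Prometheus’ plain-text exposition format, which the server pulls on a schedule. The toy sketch below renders a counter in that format by hand to show what the server actually ingests; real applications would use an official client library rather than formatting metrics themselves.&lt;/p&gt;

```python
# Toy sketch: render a counter in Prometheus' plain-text exposition format.
# Real applications use an official client library instead of doing this
# by hand; this just shows what a scrape target returns.

def render_counter(name: str, help_text: str, samples: dict) -> str:
    """samples maps a tuple of (label, value) pairs to the counter value."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

text = render_counter(
    "http_requests_total",
    "Total HTTP requests handled.",
    {(("method", "get"), ("code", "200")): 1027,
     (("method", "post"), ("code", "500")): 3},
)
```

&lt;p&gt;Each sample line is one labeled time series; labels like &lt;code&gt;method&lt;/code&gt; and &lt;code&gt;code&lt;/code&gt; are what PromQL later filters and aggregates on.&lt;/p&gt;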

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XA4pqggV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4dstwhs8lb6463sipvr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XA4pqggV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4dstwhs8lb6463sipvr.jpg" alt="Image description" width="880" height="461"&gt;&lt;/a&gt;&lt;br&gt;
Source: &lt;a href="https://prometheus.io/docs/introduction/overview/"&gt;Prometheus.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the things that contributes to the popularity of Prometheus is that many integrations exist, including integrations with various languages, databases, and other monitoring and logging tools. This gives you the flexibility to continue to use the tools and skills you already have.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Planning a Prometheus Deployment&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A successful Prometheus deployment requires some up-front planning. First, it’s critical to keep track of who is accessing your clusters and what they are doing so changes can be monitored and rolled back if necessary. You also need to carefully consider what cluster and application metrics you need to collect to help you identify and remediate issues, and what additional visualization tools (if any) you will use to make sense of the data you collect.&lt;/p&gt;

&lt;p&gt;Prometheus uses storage efficiently, but gathering metrics that don’t add value still consumes storage and costs money. As your deployments become multicluster and multicloud, it becomes important to balance the value of retained metrics against storage costs. As noted above, Prometheus stores metrics locally by default, so consider and budget for remote storage if you need longer-term retention.&lt;/p&gt;

&lt;p&gt;If you’re going to use Prometheus to monitor in-house Kubernetes applications, you will likely need to develop one or more agents to provide the proper instrumentation. Make sure the output from the agent makes sense to the people who will receive the alerts.&lt;/p&gt;
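&lt;p&gt;Whatever language your agent is written in, what Prometheus ultimately scrapes is the plain-text exposition format. As a minimal sketch (a real agent would use an official client library, and the metric and label names here are hypothetical), rendering one counter by hand looks like this:&lt;/p&gt;

```python
def render_metric(name, labels, value, help_text=None, metric_type="counter"):
    """Render one sample in the Prometheus text exposition format.

    A real agent would use a client library (e.g. prometheus_client for
    Python); this hand-rolled version just shows what a scrape returns.
    """
    lines = []
    if help_text:
        lines.append(f"# HELP {name} {help_text}")
    lines.append(f"# TYPE {name} {metric_type}")
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

print(render_metric(
    "orders_processed_total",
    {"service": "checkout", "status": "ok"},
    1027,
    help_text="Orders processed by the checkout service.",
))
```

&lt;p&gt;In practice the client library also handles escaping, timestamps, and metric registries for you; the point is that alert-relevant meaning should be encoded in names and labels your on-call staff will recognize.&lt;/p&gt;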

&lt;h2&gt;
  
  
  &lt;strong&gt;Prometheus Challenges with Large Kubernetes Fleets&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The standalone design of Prometheus introduces a certain amount of complexity, especially as your Kubernetes fleet grows to include many clusters—potentially running different Kubernetes distributions in different cloud environments. A large operation with many clusters can easily exceed the capabilities of a single Prometheus server and its associated storage. That means you must either reduce the number of metrics you’re collecting or scale the number of Prometheus servers.&lt;/p&gt;

&lt;p&gt;There are several ways to scale your Prometheus backend. Because Prometheus servers can scrape time series from other Prometheus servers, you can federate them, using either a hierarchical or a cross-service federation model. These approaches require careful planning and add complexity, especially as your operations continue to scale.&lt;/p&gt;
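&lt;p&gt;In the hierarchical model, a higher-level Prometheus server scrapes the &lt;code&gt;/federate&lt;/code&gt; endpoint of each lower-level server, passing one &lt;code&gt;match[]&lt;/code&gt; instant-vector selector per set of series it wants to pull. A sketch of constructing such a scrape URL (the host name and selectors are hypothetical):&lt;/p&gt;

```python
from urllib.parse import urlencode

def federate_url(prometheus_host: str, matchers: list) -> str:
    """Build the /federate URL a higher-level Prometheus would scrape.

    Each "match[]" parameter is an instant-vector selector naming the
    series to pull from the lower-level server.
    """
    query = urlencode([("match[]", m) for m in matchers])
    return f"http://{prometheus_host}/federate?{query}"

url = federate_url("cluster-a-prom:9090", ['{job="kubernetes-nodes"}', 'up'])
print(url)
```

&lt;p&gt;In a real deployment you would express this as a &lt;code&gt;federate&lt;/code&gt; scrape job in the global server’s configuration rather than building URLs by hand, but the shape of the request is the same.&lt;/p&gt;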

&lt;p&gt;Prometheus also provides a way to integrate with remote storage locations through an API that allows writing and reading metrics using a remote URL. This enables you to get all your data in one place, but you’ll need additional tooling to take advantage of that aggregated data. Many organizations add Thanos or Cortex to their toolsets to aggregate data and provide long-term storage and a global view.&lt;/p&gt;
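&lt;p&gt;In &lt;code&gt;prometheus.yml&lt;/code&gt;, this integration is configured through the &lt;code&gt;remote_write&lt;/code&gt; and &lt;code&gt;remote_read&lt;/code&gt; sections (the endpoint URLs below are hypothetical):&lt;/p&gt;

```yaml
# Ship samples to, and query them back from, a remote metrics store.
remote_write:
  - url: "https://metrics-store.example.com/api/v1/write"
remote_read:
  - url: "https://metrics-store.example.com/api/v1/read"
```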

&lt;p&gt;While these hurdles aren’t insurmountable, it’s important to think about the additional planning and ongoing management that will be required. Because of the complexity of monitoring large Kubernetes environments, many organizations prefer monitoring as a service.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Visibility and Monitoring at Rafay&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://rafay.co/platform/visibility-monitoring-service/"&gt;Rafay’s Visibility and Monitoring Service&lt;/a&gt; is a cloud-based service that unifies monitoring, alerting, and visualization for all your Kubernetes clusters and applications, reducing mean time to recovery (MTTR) by up to 60%.&lt;/p&gt;

&lt;p&gt;The service works by deploying Prometheus automatically to each of your clusters via the Rafay controller. Metrics from each cluster are cached locally and automatically scraped into a centralized time-series database that aggregates data across all your clusters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--otC93U1g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfgs66yw5te6wn3m93l8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--otC93U1g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfgs66yw5te6wn3m93l8.jpg" alt="Image description" width="880" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rafay dashboards let you visualize Kubernetes metrics and events gathered, including resources consumed, user and access activity, critical alerts, and the overall health of every cluster and application deployed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L8zo6X0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9whmh166vwxowggfcjud.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L8zo6X0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9whmh166vwxowggfcjud.jpg" alt="Image description" width="880" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Customers already operating a custom Prometheus monitoring stack can use Rafay to standardize the configuration, deployment, and lifecycle management of a Prometheus-Operator-based monitoring stack across their entire fleet of clusters, and that stack can be used independently of Rafay monitoring.&lt;/p&gt;

&lt;p&gt;Rafay also integrates with a variety of popular management tools and services including &lt;a href="https://docs.rafay.co/recipes/monitoring/amazon-prometheus/overview/"&gt;Amazon Prometheus&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/cloudwatch/cloudwatch/"&gt;CloudWatch&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/datadog/"&gt;Datadog&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/grafana/"&gt;Grafana&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/newrelic/"&gt;New Relic&lt;/a&gt;, and &lt;a href="https://docs.rafay.co/recipes/monitoring/splunk/"&gt;Splunk&lt;/a&gt;. If you utilize or plan to use these tools, Rafay can standardize the deployment and configuration of the necessary components across all your clusters.&lt;/p&gt;

&lt;p&gt;Rafay’s &lt;a href="https://rafay.co/platform/kubernetes-operations-platform/"&gt;Kubernetes Operations Platform&lt;/a&gt; delivers the visibility, monitoring, and other capabilities you need to ensure the success of your multi-cloud, multi-cluster Kubernetes environment. To discover how Rafay can help you standardize visibility and monitoring across your entire fleet of Kubernetes clusters, take a closer look at Rafay’s &lt;a href="https://rafay.co/platform/visibility-monitoring-service/"&gt;Visibility and Monitoring Service&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ready to find out why so many enterprises and platform teams have partnered with Rafay to streamline Kubernetes monitoring and operations? &lt;a href="https://rafay.co/start/"&gt;Sign up for a free trial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>programming</category>
    </item>
    <item>
      <title>Choosing the Best Kubernetes Cluster and Application Deployment Strategies</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Wed, 08 Jun 2022 17:40:28 +0000</pubDate>
      <link>https://dev.to/kylekhunter/choosing-the-best-kubernetes-cluster-and-application-deployment-strategies-28n8</link>
      <guid>https://dev.to/kylekhunter/choosing-the-best-kubernetes-cluster-and-application-deployment-strategies-28n8</guid>
      <description>&lt;p&gt;As your Kubernetes environment grows into a multi-cluster, multi-cloud fleet, cluster and workload deployment challenges increase exponentially. It becomes critical to streamline, automate, and standardize operations to avoid having to revisit decisions or perform the same, error-prone manual tasks over and over again.&lt;/p&gt;

&lt;p&gt;Using the right deployment tools to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy cluster infrastructure&lt;/li&gt;
&lt;li&gt;Install and configure Kubernetes and associated add-on software&lt;/li&gt;
&lt;li&gt;Deploy and update application workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;will reduce manual effort and the need for specific expertise, while delivering more consistent results across environments and greater stability. The right tools are essential for creating a &lt;a href="https://rafay.co/the-kubernetes-current/what-is-a-shared-services-platform-for-kubernetes/"&gt;shared services platform&lt;/a&gt; in which Dev, QA, Ops, and other teams are able to consume and release infrastructure, cluster resources, and apps quickly and easily.&lt;/p&gt;

&lt;p&gt;This blog explores the challenges at the infrastructure, Kubernetes, and application workload levels along with guidelines for choosing tools that will streamline your operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Configuring Infrastructure for Kubernetes Deployments&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The expertise required to build a Kubernetes cluster is in short supply in many organizations. You may have the skills to build clusters in your data center or in Amazon Web Services (AWS), for example, but what happens when you expand your K8s operations into GCP or Microsoft Azure? Your Kubernetes deployment tools should enable you to easily deploy infrastructure and apps anywhere — from the datacenter to public clouds to the edge — with standardized configurations that meet all your requirements.&lt;/p&gt;

&lt;p&gt;It is especially important to choose a deployment strategy that keeps your infrastructure reliable during application deployments and updates. A variety of “cluster template” approaches for Kubernetes help solve infrastructure challenges: a template defines what your cluster infrastructure looks like and automatically provisions that infrastructure. Some solutions use proprietary technologies, but cluster templates are often based on open-source tools such as Helm charts, Terraform, or Ansible.&lt;/p&gt;

&lt;p&gt;If you’re looking at Kubernetes management solutions that support cluster templates, there are several guidelines to keep in mind. Make sure the solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Works in the environments you plan to operate in&lt;/li&gt;
&lt;li&gt;Enables you to enforce specific guidelines and policies&lt;/li&gt;
&lt;li&gt;Enables templates to be easily created by your Platform team&lt;/li&gt;
&lt;li&gt;Enables templates to be easily consumed by your Dev, QA, and Ops users&lt;/li&gt;
&lt;li&gt;Detects and notifies you of cluster configuration drift across infrastructures&lt;/li&gt;
&lt;li&gt;Is compatible with any infrastructure automation tools you already use&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Installing and Configuring Kubernetes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Kubernetes has a reputation for being complex to deploy and operate. As your K8s environment grows, automation can help you simplify and standardize K8s deployments and maintenance so that users can configure new clusters on demand while still adhering to important policies and other guidelines. For instance, you may want all Kubernetes clusters to include a specific service mesh, ingress controller, or a monitoring tool such as Prometheus. With the right automation, add-ons like these can be consistently deployed.&lt;/p&gt;

&lt;p&gt;There’s no lack of tools for deploying and configuring Kubernetes. Every packaged Kubernetes distribution includes some form of installer. The same goes for popular managed Kubernetes services from AWS, Microsoft Azure, Google Cloud, and others.&lt;/p&gt;

&lt;p&gt;But you probably already see the problem — assuming you haven’t experienced it first-hand. Having different tools with different capabilities and interfaces for each environment quickly becomes unsustainable from an operational standpoint. Many organizations end up with siloed teams for each infrastructure or environment.&lt;/p&gt;

&lt;p&gt;A variety of management services and open-source tools are emerging that address these problems. Well-known open source tools include &lt;a href="https://github.com/kubernetes/kops"&gt;kOps&lt;/a&gt; and &lt;a href="https://github.com/kubernetes-sigs/kubespray"&gt;kubespray&lt;/a&gt;, both developed under the auspices of Kubernetes special interest groups (SIGs). There are also a number of SaaS and hosted services. (See the blog, &lt;a href="https://rafay.co/the-kubernetes-current/how-a-hosted-software-delivery-model-differs-from-saas-for-kubernetes-management-and-operations/"&gt;How a Hosted Software Delivery Model Differs from SaaS for Kubernetes Management and Operations&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;If you’re evaluating tools or services to address Kubernetes installation and lifecycle management needs, there are several guidelines to keep in mind. Make sure the solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Works in the environments you use (clouds, virtual, physical)&lt;/li&gt;
&lt;li&gt;Enables you to specify uniform security policies&lt;/li&gt;
&lt;li&gt;Lets you automatically install Kubernetes add-ons&lt;/li&gt;
&lt;li&gt;Provides flexibility to accommodate unique requirements on a per-environment, per-location, or per-cluster basis&lt;/li&gt;
&lt;li&gt;Offers compatibility with any automation tools you already use&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Deploying Kubernetes Applications&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The whole purpose of building clusters and deploying Kubernetes is to allow application workloads to be developed, tested, and deployed into production efficiently. However, Kubernetes only provides the foundation.&lt;/p&gt;

&lt;p&gt;A lot of additional time and effort is required to create and maintain continuous integration/continuous delivery (CI/CD) pipelines to support software creation and deployment. CI tools such as Jenkins, CircleCI, GitLab, and Azure DevOps, and GitOps-driven CD tools such as Argo CD and Flux, are commonly used in Kubernetes environments. Your organization may be using several of these tools already.&lt;/p&gt;

&lt;p&gt;(To learn more about GitOps, read the blog &lt;a href="https://rafay.co/the-kubernetes-current/gitops-principles-and-workflows-every-team-should-know/"&gt;GitOps Principles and Workflows Every Team Should Know&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;Each application workload typically needs to be deployed for dev, staging, and production — often with specific customizations for each environment. Even with the best tools, that requires separate pipelines — and unique application configuration files for each pipeline — adding complexity and manual effort. While it may be possible to write a script to generate custom configuration for each case, that’s one more unique solution to be managed and maintained. For production deployment, you may also need to deploy on dozens of clusters in different environments using blue-green, canary, or some other deployment strategy.&lt;/p&gt;
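&lt;p&gt;The overlay idea behind tools such as Kustomize can be sketched in a few lines: keep one base configuration per workload and merge a small per-environment overlay on top of it. The field names below are hypothetical, purely to illustrate the pattern:&lt;/p&gt;

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    """Overlay environment-specific values onto a shared base config."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"image": "myapp:1.4.2", "replicas": 2, "resources": {"cpu": "250m"}}
prod_overlay = {"replicas": 6, "resources": {"cpu": "500m"}}
print(deep_merge(base, prod_overlay))
# {'image': 'myapp:1.4.2', 'replicas': 6, 'resources': {'cpu': '500m'}}
```

&lt;p&gt;Maintaining one base plus small overlays is far less error-prone than maintaining a full, hand-edited manifest per environment.&lt;/p&gt;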

&lt;h2&gt;
  
  
  &lt;strong&gt;Kubernetes Cluster and Application Deployment at Rafay&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Rafay’s &lt;a href="https://rafay.co/platform/kubernetes-operations-platform/"&gt;Kubernetes Operations Platform&lt;/a&gt; (KOP) includes capabilities to streamline infrastructure, Kubernetes, and application deployments, addressing all the challenges discussed in this blog.&lt;/p&gt;

&lt;p&gt;At Rafay we’ve codified Kubernetes best practices in order to &lt;a href="https://rafay.co/the-kubernetes-current/four-pillars-of-kubernetes-fleet-management/"&gt;streamline management of large K8s fleets&lt;/a&gt;. KOP simplifies cluster and workload deployments in data centers, public clouds, and at the edge with easy-to-use tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rafay Cluster Templates enable your team to quickly define infrastructure specifications that can be consumed by users to create clusters for testing or other purposes, enabling many routine cluster operations to be handled via self-service, while ensuring users don’t need a lot of detailed knowledge about the target environment. New clusters automatically adhere to the rules and restrictions you specify.&lt;/li&gt;
&lt;li&gt;Rafay Cluster Blueprints allow you to define and standardize key elements of a Kubernetes configuration — including add-ons and security policies — to ensure consistency and repeatability. Blueprints also notify you of, and can optionally block, changes in production.&lt;/li&gt;
&lt;li&gt;Rafay Workloads, Workload templates, and GitOps pipelines take the complexity out of application deployments. Pipelines support dev, staging, and production deployment needs, using workload templates to avoid the need for custom application manifests for each environment. You can quickly implement canary, blue-green, and other deployment models.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep an eye out for an upcoming Rafay white paper that will explore all of these capabilities in more detail.&lt;/p&gt;

&lt;p&gt;Ready to find out why so many enterprises and platform teams have partnered with Rafay for Kubernetes fleet management? &lt;a href="https://rafay.co/start/"&gt;Sign up for a free trial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Top Six Kubernetes Best Practices for Fleet Management</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Tue, 17 May 2022 23:30:54 +0000</pubDate>
      <link>https://dev.to/kylekhunter/top-six-kubernetes-best-practices-for-fleet-management-1i0k</link>
      <guid>https://dev.to/kylekhunter/top-six-kubernetes-best-practices-for-fleet-management-1i0k</guid>
      <description>&lt;p&gt;Provisioning a Kubernetes cluster is relatively easy. However, each new cluster is the beginning of a very long journey, and every cluster you add to your &lt;a href="https://rafay.co/the-kubernetes-current/four-pillars-of-kubernetes-fleet-management/"&gt;Kubernetes fleet&lt;/a&gt; increases management complexity. In addition, many enterprises struggle to keep up with a rapidly growing number of Kubernetes clusters spread across on-prem, cloud, and edge locations — often with diverse Kubernetes configs and using different tools in different environments.&lt;/p&gt;

&lt;p&gt;Fortunately, there are a number of K8s best practices that will help you rein in the chaos, increase your Kubernetes success, and prepare you to cope with fast-growing and dynamic Kubernetes requirements. This blog describes six strategic Kubernetes best practices that will put you on the path to successfully managing a fleet.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practice 1: Think Hybrid and Multi-Cloud&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The pandemic has been a reality check for organizations of all sizes, establishing the value of cloud computing and cloud-native development and accelerating their adoption.&lt;/p&gt;

&lt;p&gt;Because Kubernetes workloads can be highly portable, you have the option to deploy workloads in any cloud to deliver an optimal experience for your customers and employees — where optimal may mean the best performance with the lowest network latency or the ability to leverage a differentiated service native to a particular cloud.&lt;/p&gt;

&lt;p&gt;While hybrid and multi-cloud Kubernetes cluster deployments provide advantages to your business and have become a best practice, they increase the complexity of your Kubernetes fleet. However, the right SaaS tools offer significant advantages in hybrid and multi-cloud environments, enabling you to operate across multiple public clouds and data center environments with less friction while allowing a greater level of standardization.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practice 2: Emphasize Automation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Managing Kubernetes with kubectl commands and a few scripts is not too difficult when you only have a few clusters, but it simply doesn’t scale. Automating and standardizing common cluster and application operations allows you to manage more clusters with less effort while avoiding misconfigurations due to human errors. For this reason, automation is considered a best practice for gaining control of your Kubernetes fleet.&lt;/p&gt;

&lt;p&gt;Many organizations are adopting GitOps, bringing the familiar capabilities of Git tools to infrastructure management and continuous deployment (CD). In the &lt;a href="https://aws.amazon.com/blogs/containers/results-of-the-2020-aws-container-security-survey/"&gt;2020 AWS Container Security Survey&lt;/a&gt;, 64.5% of respondents indicated they were already using GitOps.&lt;/p&gt;

&lt;p&gt;With GitOps, when changes are made to a Git repository, code is pushed to (or rolled back from) the production infrastructure, thus automating deployments quickly and reliably. GitOps was the subject of the recent Rafay blog, &lt;a href="https://rafay.co/the-kubernetes-current/gitops-principles-and-workflows-every-team-should-know/"&gt;GitOps Principles and Workflows Every Team Should Know&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But GitOps is not the only tool to increase Kubernetes automation. Some Kubernetes management tools utilize &lt;a href="https://www.openpolicyagent.org/"&gt;Open Policy Agent&lt;/a&gt; (OPA), a general-purpose policy engine used to enforce policies in microservices, Kubernetes, CI/CD pipelines, API gateways, etc. OPA can be used to enable policy-based management across your entire K8s fleet. See &lt;a href="https://rafay.co/the-kubernetes-current/managing-policies-on-kubernetes-using-opa-gatekeeper/"&gt;Managing Policies on Kubernetes using OPA Gatekeeper&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practice 3: “Zero” In on Security&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Security for your Kubernetes fleet should never be an afterthought. Mission-critical clusters and applications running in production require the highest level of security and control. In addition, as your fleet grows, your enterprise may be exposed to new security risks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rafay.co/the-kubernetes-current/securing-kubernetes-applying-zero-trust-principles-to-your-kubernetes-environment/"&gt;Applying zero-trust principles&lt;/a&gt; is the best practice for securing your K8s environment. Kubernetes includes all the hooks necessary for zero-trust. Unfortunately, keeping all the individual elements correctly configured and aligned across dozens of clusters is a big challenge.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practice 4: Maximize Monitoring&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are a variety of open source solutions for Kubernetes monitoring. However, just like with everything else, monitoring and visibility become more challenging as the number of clusters and different cloud environments increase.&lt;/p&gt;

&lt;p&gt;The best practice is to provide centralized logging with a base level of monitoring, alerting, and visualization across your Kubernetes fleet. Many organizations implement this on their own using open source tools such as Prometheus and Grafana.&lt;/p&gt;

&lt;p&gt;However, in keeping with the previous best practice, there are SaaS services that will do the heavy lifting for you, providing everything you need in one place with uniform tools across diverse environments. You can learn more about visibility and monitoring in the recent blog, &lt;a href="https://rafay.co/the-kubernetes-current/best-practices-tools-and-approaches-for-kubernetes-monitoring/"&gt;Best Practices, Tools, and Approaches for Kubernetes Monitoring&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practice 5: Opt for Software as a Service&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A complete Kubernetes environment has a lot of moving parts. Once you add developer tools, management tools, monitoring, security services, etc., it’s a significant and ongoing investment in time and energy; substantial skill may be needed to keep all the tools up to date and operating as expected. Therefore, as your environment scales, it’s best to let services take the place of software you have to install and manage yourself wherever possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managed Kubernetes&lt;/strong&gt;&lt;br&gt;
For example, most enterprises make use of managed Kubernetes services from public cloud providers such as AWS (EKS), Azure (AKS), and Google Cloud (GKE) to simplify cluster deployment, position applications closer to customers, and provide the ability to dynamically scale up to address peak loads without requiring a lot of CapEx. &lt;a href="https://www.cncf.io/reports/cncf-annual-survey-2021/"&gt;A recently released study from the CNCF&lt;/a&gt; found that 79% of respondents use public cloud Kubernetes services. Most public clouds also offer various related services that are easy to consume, complement your Kubernetes efforts, and accelerate development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SaaS tools&lt;/strong&gt;&lt;br&gt;
In addition, there are an increasing number of software-as-a-service (SaaS) and hosted solutions that provide Kubernetes management, monitoring, security, and other capabilities. The SaaS model, in particular, provides fast time to value, robustness and reliability, flexible pricing, and ease of use.&lt;/p&gt;

&lt;p&gt;Choosing SaaS tools to address business and operational needs can enable you to reduce reliance on hard-to-find technical experts. It’s also worth noting that, although many of us use the terms “hosted” and “SaaS” somewhat interchangeably, &lt;a href="https://rafay.co/the-kubernetes-current/how-a-hosted-software-delivery-model-differs-from-saas-for-kubernetes-management-and-operations/"&gt;they are not the same thing&lt;/a&gt;. Choose wisely.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practice 6: DI-Why?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Kubernetes has a reputation for being challenging to deploy and operate. When your organization was just getting started with Kubernetes, it may have made sense to do it yourself (DIY) — building dedicated infrastructure for Kubernetes, compiling upstream code, and developing your internal tools — but as Kubernetes becomes more and more critical to your production operations, it can no longer be treated as a science project. So why make Kubernetes harder than it has to be?&lt;/p&gt;

&lt;p&gt;An entire ecosystem of services, support, and tools has grown up around Kubernetes to help simplify everything from deployment to development to operations. The right services, tools, and partners will allow you to accomplish more — with much less toil. Continuing to roll your own Kubernetes is a waste of developer and operations time and talent that could be spent adding value elsewhere in your business.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Kubernetes Best Practices at Rafay&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At Rafay we’ve built our entire platform around codifying Kubernetes best practices in order to &lt;a href="https://rafay.co/the-kubernetes-current/four-pillars-of-kubernetes-fleet-management/"&gt;streamline management of large K8s fleets&lt;/a&gt;. Our &lt;a href="https://rafay.co/platform/kubernetes-operations-platform/"&gt;Kubernetes Operations Platform&lt;/a&gt; (KOP) unifies lifecycle management for both clusters and containerized applications, incorporating all of the best practices discussed:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software as a Service:&lt;/strong&gt; Rafay’s SaaS-first approach enables your organization to gain efficiencies from Kubernetes almost immediately, speeding digital transformation initiatives while keeping operating costs low.&lt;br&gt;
&lt;strong&gt;Hybrid and multi-cloud:&lt;/strong&gt; Rafay KOP works across cloud, data center, and edge environments, allowing you to easily deploy and operate workloads wherever needed.&lt;br&gt;
&lt;strong&gt;Monitoring and Visibility:&lt;/strong&gt; &lt;a href="https://rafay.co/platform/visibility-monitoring-service/"&gt;Rafay’s Visibility and Monitoring Service&lt;/a&gt; makes it simple to visualize, monitor, and manage the health of your clusters and applications.&lt;br&gt;
&lt;strong&gt;Automation:&lt;/strong&gt; Rafay simplifies automation with our &lt;a href="https://rafay.co/platform/gitops-service/"&gt;GitOps Service&lt;/a&gt;. Our &lt;a href="https://rafay.co/platform/kubernetes-multi-cluster-management-service/"&gt;Multi-Cluster Management Service&lt;/a&gt; incorporates cluster templates, cluster blueprints, and workload templates — ensuring adherence to Kubernetes deployment best practices — as well as our Kubernetes Policy Management Service utilizing OPA.&lt;br&gt;
&lt;strong&gt;Security:&lt;/strong&gt; &lt;a href="https://rafay.co/platform/zero-trust-access-service/"&gt;Rafay’s Zero-Trust Access Service&lt;/a&gt; centralizes access control for your entire fleet with automated RBAC. It ensures that Kubernetes security best practices are applied and maintained in multi-cluster, multi-cloud deployments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--svOR_xlU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/stim9sn6hz9sr5r9k29v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--svOR_xlU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/stim9sn6hz9sr5r9k29v.png" alt="Image description" width="880" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rafay delivers the fleet management capabilities you need to ensure the success of your Kubernetes environment, helping you rationalize and standardize management across your entire fleet of K8s clusters and applications.&lt;/p&gt;

&lt;p&gt;Ready to find out why so many enterprises and platform teams have partnered with Rafay for Kubernetes fleet management? &lt;a href="https://rafay.co/start/"&gt;Sign up for a free trial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Multi-cluster Kubernetes Management and Access</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Mon, 18 Apr 2022 20:27:42 +0000</pubDate>
      <link>https://dev.to/kylekhunter/multi-cluster-kubernetes-management-and-access-53kf</link>
      <guid>https://dev.to/kylekhunter/multi-cluster-kubernetes-management-and-access-53kf</guid>
      <description>&lt;p&gt;As cloud and Kubernetes have become a standard, security remains one of the top inhibitors to modern application development. To reduce security risks, organizations can’t manage access control on a cluster-by-cluster basis. And not finding a scalable approach leads to misconfigurations, vulnerabilities, and failed compliance audits.&lt;/p&gt;

&lt;p&gt;Let us travel back in time and picture a fort. Forts were huge, with massively thick walls, doors, watchtowers, and a moat to protect them from attack. There were several layers of defense to keep attackers at bay. An attacker might swim across the moat but still had to climb the high walls before entering the fort. Thus, an attacker might compromise a single layer, but having several layers makes it difficult to breach the fort.&lt;/p&gt;

&lt;p&gt;If you observe closely, all the layers of defense did one thing – prevented access to attackers. That’s exactly what you need to protect your applications – several layers of defense that prevent unauthorized access. When it comes to Kubernetes access control, there are many different components to manage. Kubernetes clusters are complex and dynamic in nature which makes them vulnerable and prone to attacks.&lt;/p&gt;

&lt;p&gt;This blog explores fundamental considerations when managing access to multiple Kubernetes clusters, which should help you plan better for overall Kubernetes security.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Isolating Your Kubernetes API Server&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In a Kubernetes cluster, the control plane controls nodes, nodes control pods, pods control containers, and containers control applications. But what controls the control plane? Kubernetes exposes APIs that let you configure the entire Kubernetes cluster, so securing access to the Kubernetes API is one of the most critical considerations when it comes to Kubernetes security. With Kubernetes being entirely API-driven, controlling and limiting who can access clusters and what actions they are allowed to perform is the first line of defense.&lt;/p&gt;

&lt;p&gt;Let’s examine the three steps of Kubernetes access control. Ensuring that network access control and TLS connections are appropriately configured should be your first priority before the authentication process starts.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. API Authentication&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The first step in access control is authenticating a request. Using an external authentication service is recommended whenever possible. For example, your organization may already manage user accounts with corporate Identity Providers (IdPs) such as Okta, Google Workspace, or Azure AD; reuse those to authenticate Kubernetes users. The Kubernetes API server does not guarantee the order in which authenticators run, so it’s important to ensure that each user is tied to a single authentication method. It’s also important to periodically review previously used auth methods and tokens and decommission any that are no longer in use.&lt;/p&gt;
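&lt;p&gt;As a rough sketch of what delegating authentication to an external IdP can look like, the kube-apiserver can be pointed at any OIDC-compliant provider via a few flags in its static Pod spec (the issuer URL and client ID below are placeholders, not real values):&lt;/p&gt;

```yaml
# Sketch: kube-apiserver flags (in the API server's static Pod manifest)
# that delegate user authentication to an external OIDC identity provider.
# The issuer URL and client ID are hypothetical placeholders for your IdP.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --oidc-issuer-url=https://idp.example.com   # hypothetical IdP endpoint
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=email                 # which token claim becomes the username
    - --oidc-groups-claim=groups                  # which claim maps to Kubernetes groups
```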

&lt;h3&gt;
  
  
  &lt;strong&gt;2. API Authorization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once a request is authenticated, Kubernetes checks whether it is authorized. Role-based access control (RBAC) is the preferred way to authorize API access. Kubernetes ships with four default user-facing ClusterRoles you should be aware of – cluster-admin, admin, edit, and view. ClusterRoles can be used to set permissions for cluster-scoped resources (e.g., nodes), whereas Roles are used for namespaced resources (e.g., pods). RBAC in Kubernetes comes with a certain amount of complexity and manual effort. More on RBAC in the next section.&lt;/p&gt;
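&lt;p&gt;For illustration, a minimal namespace-scoped Role and RoleBinding might look like the following sketch (the namespace and the IdP group name are hypothetical):&lt;/p&gt;

```yaml
# Sketch: a namespace-scoped Role granting read-only access to Pods,
# bound to a hypothetical "dev-viewers" group supplied by your IdP.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev          # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]         # "" = core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: pod-reader-binding
subjects:
- kind: Group
  name: dev-viewers       # hypothetical group from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```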

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Admission Control&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After a request is successfully authenticated and authorized, the final step is admission control, which can modify or validate the request. Kubernetes ships with several admission controller modules that help you define and customize what is allowed to run on your cluster, such as enforcing resource request limits and pod security policies. Admission controllers can also extend the Kubernetes API server via webhooks for advanced security measures such as image scanning.&lt;/p&gt;
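&lt;p&gt;As one concrete example, the built-in LimitRanger admission controller enforces LimitRange objects like the sketch below, applying default CPU and memory requests and limits to containers in a hypothetical namespace:&lt;/p&gt;

```yaml
# Sketch: a LimitRange enforced by the built-in LimitRanger admission
# controller; containers in the "dev" namespace that omit resource
# requests/limits receive these defaults at admission time.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: dev           # hypothetical namespace
spec:
  limits:
  - type: Container
    default:               # default limits applied if none are specified
      cpu: 500m
      memory: 256Mi
    defaultRequest:        # default requests applied if none are specified
      cpu: 100m
      memory: 128Mi
```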

&lt;h2&gt;
  
  
  &lt;strong&gt;Role-based Access Control (RBAC)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of the reasons Kubernetes is adopted at such a large scale is its thriving community and regular updates. A key feature introduced in Kubernetes 1.6 was role-based access control, or RBAC. While RBAC takes care of basic authorization, creating and maintaining roles becomes crucial in multi-cluster environments. If you grant the built-in cluster-admin role to any user, they can do virtually anything in the cluster. Managing and keeping track of roles and access is a challenge.&lt;/p&gt;

&lt;p&gt;For organizations with large, multi-cluster environments, resources are constantly created and deleted, increasing the risk of unused or dangling roles being left unattended. Inactive role bindings can unexpectedly grant privileges in the future because role bindings can refer to roles that no longer exist. If a new role is later created with the same name, those stale role bindings grant privileges that were never intended in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Complex and Dynamic Nature of Clusters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As the number of clusters, roles, and users increases, maintaining control requires proper visibility into users, groups, roles, and permissions. Every new role brings additional rules to configure. In large organizations, this can mean hundreds or even thousands of rules to manage, and without a centralized system to manage roles across clusters, it becomes an administrator’s worst nightmare.&lt;/p&gt;

&lt;p&gt;One of the reasons Kubernetes is popular is that it is inherently scalable. It comes with tools out of the box that allow both applications and infrastructure to scale based on demand. This means Kubernetes clusters can be short-lived, created and destroyed on demand. Every time a cluster is created or destroyed, access must be configured for specific users. If access to clusters is not managed properly, this can give rise to security vulnerabilities, potentially granting unauthorized access to your entire cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most teams today are spread across different business units within an organization. Oftentimes developers, testers, business analysts, and consultants are all working on the same application – each requiring access either to different clusters or to different components of the same cluster. It’s important to provide the right level of access to your users and to revoke that access when necessary.&lt;/p&gt;

&lt;p&gt;Kubernetes is a well-coordinated system of many components – nodes, clusters, pods, containers, volumes, and much more. At scale, you could have hundreds of these components spread over multiple clusters across the world. Identifying “who” needs “what” access to “which” resource becomes challenging. It’s only then that you realize the need for a Kubernetes security tool that not only seamlessly integrates with your infrastructure but also gives you a secure, unified way of managing access to multiple clusters.&lt;/p&gt;

&lt;p&gt;Rafay’s Zero-Trust Access Service provides federated role-based access control with support for third-party identity management systems (e.g., corporate IdPs and cloud identities) and auditing of all actions, eliminating the need to manually apply security best practices on a cluster-by-cluster basis. To find out more about Rafay’s approach to Kubernetes security, visit our &lt;a href="https://rafay.co/platform/kubernetes-cluster-application-security/"&gt;Kubernetes Cluster &amp;amp; Application Security&lt;/a&gt; page.&lt;/p&gt;

&lt;p&gt;Ready to find out why so many enterprises and platform teams have partnered with Rafay for fleet-wide security and governance for Kubernetes? &lt;a href="https://rafay.co/start/"&gt;Sign up for a free trial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>cluster</category>
    </item>
    <item>
      <title>Run Containers and VMs Together with KubeVirt</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Thu, 07 Apr 2022 15:48:12 +0000</pubDate>
      <link>https://dev.to/kylekhunter/run-containers-and-vms-together-with-kubevirt-56ab</link>
      <guid>https://dev.to/kylekhunter/run-containers-and-vms-together-with-kubevirt-56ab</guid>
      <description>&lt;p&gt;Although many enterprises have deployed Kubernetes and containers, most also operate virtual machines. The two environments will likely co-exist for years, creating operational complexity and adding cost in terms of time and infrastructure.&lt;/p&gt;

&lt;p&gt;Without going into the pros and cons of one versus the other, it’s useful to remember that each virtual machine or VM contains its own instance of a full operating system and is intended to operate as if it were a standalone server—hence the name. In a containerized environment, by contrast, multiple containers share one instance of an operating system, almost always some flavor of Linux.&lt;/p&gt;

&lt;p&gt;Not all application services run well in containers, resulting in a need to run both.&lt;/p&gt;

&lt;p&gt;For example, a VM is better than a container for LDAP/Active Directory applications, tokenization applications, and applications with intensive GPU requirements. You may also have a legacy application that for some reason (no source code, licensing, deprecated language, etc.) can’t be modernized and therefore has to run in a VM, possibly against a specific OS like Windows.&lt;/p&gt;

&lt;p&gt;Whatever the reason your application requires VMs or containers, running and managing multiple environments increases the complexity of your operations, requiring separate control planes and possibly separate infrastructure stacks. That may not seem like a big deal if you just need to run one or a small set of VMs to support a single instance of an otherwise containerized application. But what if you have many such applications? And what if you need to run multiple instances of those apps across different cloud environments? Your operations can become very complicated very quickly. &lt;/p&gt;

&lt;p&gt;Wouldn’t it be great if you could run VMs as part of your Kubernetes environment?&lt;/p&gt;

&lt;p&gt;This is exactly what &lt;a href="https://kubevirt.io/"&gt;KubeVirt&lt;/a&gt; enables you to do. In this blog, I’ll dig into what KubeVirt is, the benefits of using it, and how Rafay integrates this technology so that you can get started using it right away.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is KubeVirt?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;KubeVirt is a Kubernetes add-on that enables Kubernetes to provision, manage, and control VMs on the same infrastructure as containers. An open source project under the auspices of &lt;a href="https://www.cncf.io/"&gt;the Cloud Native Computing Foundation (CNCF)&lt;/a&gt;, KubeVirt currently is in the &lt;a href="https://www.cncf.io/projects/kubevirt/"&gt;incubation phase&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;This technology enables Kubernetes to schedule, deploy, and manage VMs using the same tools as containerized workloads, eliminating the need for a separate environment with different monitoring and management tools. This gives you the best of both worlds: VMs and Kubernetes working together.&lt;/p&gt;

&lt;p&gt;With KubeVirt, you can declaratively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a VM&lt;/li&gt;
&lt;li&gt;Schedule a VM on a Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Launch a VM&lt;/li&gt;
&lt;li&gt;Stop a VM&lt;/li&gt;
&lt;li&gt;Delete a VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your VMs run inside Kubernetes pods and utilize standard Kubernetes networking and storage.&lt;/p&gt;
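&lt;p&gt;As a rough sketch (assuming KubeVirt is already installed on the cluster), a minimal declarative VirtualMachine definition looks like this; the demo container disk image is a public image from the KubeVirt project:&lt;/p&gt;

```yaml
# Sketch: a minimal KubeVirt VirtualMachine. Assumes the KubeVirt
# operator and CRDs are installed; the containerdisk is a tiny demo image.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm             # hypothetical VM name
spec:
  running: false           # start it later, e.g. with: virtctl start testvm
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/kubevirt/cirros-container-disk-demo
```

&lt;p&gt;Applying this manifest creates the VM object; KubeVirt then runs the guest inside a pod, so standard tooling like kubectl can inspect and manage it.&lt;/p&gt;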

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HSGiUCjR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ucp7egio9v1ugo9ymor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HSGiUCjR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ucp7egio9v1ugo9ymor.png" alt="Image description" width="880" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://kubevirt.io/user-guide/architecture/"&gt;https://kubevirt.io/user-guide/architecture/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a deeper discussion of how KubeVirt works and the components involved, take a look at the blog &lt;a href="https://kubernetes.io/blog/2018/05/22/getting-to-know-kubevirt/"&gt;Getting to Know KubeVirt&lt;/a&gt; on &lt;a href="https://kubernetes.io/"&gt;kubernetes.io&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What are the benefits of KubeVirt?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;KubeVirt integrates with existing Kubernetes tools and practices such as monitoring, logging, alerting, and auditing, providing significant benefits including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized management: Manage VMs and containers using a single set of tools.&lt;/li&gt;
&lt;li&gt;No hypervisor tax: Eliminate the need to license and run a hypervisor to run the VMs associated with your application.&lt;/li&gt;
&lt;li&gt;Predictable performance: For workloads that require predictable latency and performance, KubeVirt uses the Kubernetes CPU manager to pin vCPUs and RAM to a VM.&lt;/li&gt;
&lt;li&gt;CI/CD for VMs: Develop application services that run in VMs and integrate and deliver them using the same CI/CD tools that you use for containers.&lt;/li&gt;
&lt;li&gt;Authorization: KubeVirt comes with a set of predefined RBAC ClusterRoles that can be used to grant users permissions to access KubeVirt Resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Centralizing management of VMs and containers simplifies your infrastructure stack and offers a variety of less obvious benefits as well. For instance, adopting KubeVirt reduces the load on your DevOps teams by eliminating the need for separate VM and container pipelines, speeding daily operations. As you migrate more VMs to Kubernetes, you can see savings in software and utility costs, not to mention the hypervisor tax. In the long term you can decrease your infrastructure footprint just by leveraging Kubernetes’ ability to package and schedule your virtual applications.&lt;/p&gt;

&lt;p&gt;Kubernetes with KubeVirt provides faster time to market, reduced cost, and simplified management. Automating the lifecycle management of VMs using Kubernetes helps consolidate the CI/CD pipeline of your virtualized and containerized applications. With Kubernetes as an orchestrator, changes in either type of application can be similarly tested, and safely deployed, reducing the risk of manual errors and enabling faster iteration. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;KubeVirt: Challenges and Best Practices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are a few things to keep in mind if you’re deploying KubeVirt. As I mentioned above, one of the reasons you may want to run a VM instead of a container is for specialized hardware like GPUs. If this applies to your workload, you’ll need to make sure that at least one node in your cluster contains the necessary hardware and then pin the pod containing the VM to the node(s) with that hardware.&lt;/p&gt;

&lt;p&gt;As with any Kubernetes add-on, managing KubeVirt when you have a fleet of clusters—possibly running in multiple, different environments—becomes more challenging. It’s important to ensure the technology is deployed the same way in each cluster, possibly tailored to the hardware available.&lt;/p&gt;

&lt;p&gt;Finally, Kubernetes skills are in short supply. Running a VM on KubeVirt generally requires the ability to understand and edit YAML configuration files. You’ll need to make sure that everyone who needs to deploy VMs on KubeVirt – from developers to operators – has the skills and tools to do so.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;KubeVirt at Rafay&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Rafay’s &lt;a href="https://rafay.co/platform/kubernetes-operations-platform/"&gt;Kubernetes Operations Platform&lt;/a&gt; (KOP) is the ideal solution for companies that want to deploy and manage KubeVirt across a fleet of Kubernetes clusters. With Rafay, you are able to deploy KubeVirt with the right configuration everywhere you need it, with tools to make your team productive right away.&lt;/p&gt;

&lt;p&gt;Rafay’s support for VMs includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streamlined Admin Experience: Add the VM Operator to a &lt;a href="https://rafay.co/platform/cluster-blueprints/"&gt;cluster blueprint&lt;/a&gt; and apply it to a fleet of clusters. Rafay automatically deploys the necessary virtualization components on the target clusters.&lt;/li&gt;
&lt;li&gt;Standardization and Consistency: Using the VM Operator as part of a cluster blueprint enables you to achieve standardization and consistency across a fleet of clusters.&lt;/li&gt;
&lt;li&gt;VM Wizard: There is no Kubernetes learning curve. Simply provide the ISO image for your VMs and use the Rafay-provided VM Wizard to configure, deploy, and operate VMs on Kubernetes.&lt;/li&gt;
&lt;li&gt;Multi-Cluster Deployments: Use Rafay’s sophisticated, multi-cluster placement policies to deploy and operate VMs across a fleet of remote Kubernetes clusters in a single operation.&lt;/li&gt;
&lt;li&gt;Integrated Monitoring and Secure Remote Diagnostics: Centrally monitor the status and health of VMs deployed across your environment. Receive alerts and notifications if there are operational issues. Remotely diagnose and repair operational issues, even on remote clusters behind firewalls.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The market demand for operating VMs and containers on a single unified operations platform is growing rapidly. With Rafay KOP, you are able to run legacy applications with the same underlying orchestration as cloud-native applications across a fleet of clusters distributed across cloud, data center, and remote/edge environments, eliminating the complexity of separate VM and container environments. Rigorous QA and certification testing processes ensure that Rafay’s KubeVirt implementation is stable and performs as expected, even as the underlying code evolves.&lt;/p&gt;

&lt;p&gt;To get started deploying VMs on Kubernetes with Rafay, all you have to do is create a custom blueprint and select the VM Operator from the Managed System Add-Ons drop-down. KubeVirt components are then automatically deployed on any cluster this blueprint is applied to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--en9_Yj-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svyqyff8wowiqkfpblcm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--en9_Yj-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svyqyff8wowiqkfpblcm.png" alt="Image description" width="880" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ready to find out why so many enterprises and platform teams have partnered with Rafay to streamline Kubernetes operations?  &lt;a href="https://rafay.co/start/"&gt;Sign up for a free trial&lt;/a&gt; today and learn more about running virtualized workloads on Rafay.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>containers</category>
      <category>vms</category>
    </item>
    <item>
      <title>Best Practices, Tools, and Approaches for Kubernetes Monitoring</title>
      <dc:creator>Kyle Hunter</dc:creator>
      <pubDate>Mon, 28 Mar 2022 22:18:23 +0000</pubDate>
      <link>https://dev.to/kylekhunter/best-practices-tools-and-approaches-for-kubernetes-monitoring-4acp</link>
      <guid>https://dev.to/kylekhunter/best-practices-tools-and-approaches-for-kubernetes-monitoring-4acp</guid>
      <description>&lt;p&gt;In a Kubernetes environment, applications operate across multiple nodes within a cluster, and application services can be distributed across multiple clusters and multiple clouds, making tracking the health of an application and the infrastructure it depends on quite challenging.&lt;/p&gt;

&lt;p&gt;Kubernetes monitoring is the process of gathering metrics from the Kubernetes clusters you operate to identify critical events and ensure that all hardware, software, and applications are operating as expected. Aggregating metrics in a central location will help you understand and protect the health of your entire Kubernetes fleet and the applications and services running on it.&lt;/p&gt;

&lt;p&gt;Between the layers of abstraction created by containerization and Kubernetes, and the dynamic nature of applications running in a K8s environment, monitoring everything can be a challenge. Fortunately, a number of open source Kubernetes monitoring tools—as well as popular commercial tools—exist to make monitoring easier.&lt;/p&gt;

&lt;p&gt;This blog examines some of the available Kubernetes monitoring and Kubernetes logging tools, including Prometheus for monitoring and Grafana for visualization and dashboards. It also explains how Rafay’s Visibility and Monitoring Service enhances your teams’ Kubernetes monitoring ability.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Kubernetes Ecosystem Tools for Logging and Monitoring&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are a variety of popular tools that can enhance your Kubernetes container monitoring efforts. Some of the most common ones include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://prometheus.io/docs/introduction/overview/"&gt;Prometheus&lt;/a&gt;&lt;/strong&gt;: An open-source event monitoring and alerting tool that collects and stores metrics as time series data. Prometheus joined the &lt;a href="https://www.cncf.io/"&gt;Cloud Native Computing Foundation&lt;/a&gt; in 2016 as the second hosted project after Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt;&lt;/strong&gt;: A fully managed visualization platform for applications and infrastructure that works with monitoring software such as Prometheus. Grafana provides capabilities to collect, store, visualize, and alert on data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://thanos.io/"&gt;Thanos&lt;/a&gt;&lt;/strong&gt;: A metric system that provides a simple and cost-effective way to centralize and scale Prometheus-based monitoring systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.elastic.co/elasticsearch/"&gt;Elasticsearch&lt;/a&gt;&lt;/strong&gt;: A distributed, JSON-based search and analytics engine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.elastic.co/logstash/"&gt;Logstash&lt;/a&gt;&lt;/strong&gt;: An open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite stash.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.elastic.co/kibana/"&gt;Kibana&lt;/a&gt;&lt;/strong&gt;: A data visualization and exploration tool used for log and time-series analytics, application monitoring, and operational intelligence use cases.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Which Kubernetes Monitoring Tools Should You Choose?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Many teams use these monitoring and logging tools alone or in combination to create their own solutions and address specific container monitoring and Kubernetes application monitoring needs. One of the most commonly used combinations is Prometheus plus Grafana. Prometheus enables you to gather time-series data from both hardware and software sources, while Grafana lets you visualize the data that Prometheus collects. &lt;/p&gt;
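&lt;p&gt;As a small illustration of how Prometheus discovers workloads in Kubernetes, the scrape config sketch below uses pod-based service discovery and keeps only pods carrying the widely used (but not built-in) prometheus.io/scrape annotation:&lt;/p&gt;

```yaml
# Sketch: a Prometheus scrape job using Kubernetes pod service discovery.
# Only pods annotated with prometheus.io/scrape: "true" (a common
# convention, not a Kubernetes built-in) are kept as targets.
scrape_configs:
- job_name: kubernetes-pods
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  - source_labels: [__meta_kubernetes_namespace]   # attach namespace label
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]    # attach pod name label
    target_label: pod
```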

&lt;p&gt;Another popular combination is Elasticsearch plus Logstash plus Kibana, often referred to as the ELK stack or &lt;a href="https://www.elastic.co/elastic-stack/"&gt;Elastic Stack&lt;/a&gt;, all available through &lt;a href="https://www.elastic.co/"&gt;Elastic&lt;/a&gt;. While Elastic is itself a for-profit company, these components are free and open source.&lt;/p&gt;

&lt;p&gt;Implementing any of the above tools, whether singly or in combination, necessarily creates a certain amount of complexity, especially as your Kubernetes fleet grows to include many clusters—potentially running different K8s distributions in different cloud environments. &lt;/p&gt;

&lt;p&gt;Managing a Prometheus config at scale may become a challenge due to &lt;a href="https://thenewstack.io/3-key-configuration-challenges-for-kubernetes-monitoring-with-prometheus/"&gt;app onboarding issues, manual configuration requirements, and configuration drift&lt;/a&gt;. While Prometheus and Grafana work well together for individual clusters, in multi-cluster environments you may have to add &lt;a href="https://thanos.io/"&gt;Thanos&lt;/a&gt; to your toolset to aggregate data and provide long-term storage and a global view. Still, you may face limitations around data retention and high availability that lead some teams to prefer the ELK stack.&lt;/p&gt;

&lt;p&gt;Because of this complexity, many organizations prefer monitoring as a service using commercial solutions such as Datadog, CloudWatch, and New Relic.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How Rafay Simplifies Kubernetes Monitoring&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Rafay Visibility and Monitoring Service is a cloud-based service that unifies monitoring, alerting, and visualization for all your Kubernetes clusters and applications, reducing mean time to recovery (MTTR) by up to 60%.&lt;/p&gt;

&lt;p&gt;Rafay’s service provides a single pane of glass (SPOG), enabling centralized Kubernetes logging and management for your entire K8s fleet, spanning multi-cluster, multi-cloud, and edge deployments. Contextual, role-based dashboards let your team drill deeper into your K8s environment, providing enterprise-wide insights at a project, cluster, node, application, pod, or container level. From Rafay dashboards, you can see a wide range of Kubernetes metrics and events, including resources consumed, user and access activity, critical alerts, and the overall health of every cluster and application deployed. You can instantly visualize, diagnose, and resolve incidents by interactively drilling down and identifying issues quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ajxDjxAB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20n93qnbhstrmrmcmabd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ajxDjxAB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20n93qnbhstrmrmcmabd.png" alt="Image description" width="880" height="627"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Rafay Controller provides a web-based view of the entire fleet of Kubernetes clusters under management. When the Visibility and Monitoring Service is enabled, Prometheus and related addons are automatically deployed on your clusters, and metrics are automatically scraped and aggregated to a centralized time series database for all clusters. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--20r2JRiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/57soxloz8e10s5yjmwc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--20r2JRiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/57soxloz8e10s5yjmwc5.png" alt="Image description" width="880" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Integrate with the Tools You Rely On&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Rafay integrates with a variety of popular management tools and services including &lt;a href="https://docs.rafay.co/recipes/monitoring/amazon-prometheus/overview/"&gt;Amazon Prometheus&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/cloudwatch/cloudwatch/"&gt;CloudWatch&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/datadog/"&gt;Datadog&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/grafana/"&gt;Grafana&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/newrelic/"&gt;New Relic&lt;/a&gt;, &lt;a href="https://docs.rafay.co/recipes/monitoring/splunk/"&gt;Splunk&lt;/a&gt;, and the &lt;a href="https://docs.rafay.co/recipes/monitoring/prometheus_operator/"&gt;Prometheus Operator&lt;/a&gt; (for custom Prometheus). If you utilize or plan to use these tools, Rafay can standardize the deployment and config of the necessary components. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Streamline Visibility and Monitoring with Rafay&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To discover how Rafay can help you standardize visibility and monitoring across your entire fleet of K8s clusters, take a closer look at Rafay’s &lt;a href="https://rafay.co/platform/visibility-monitoring-service/"&gt;Visibility and Monitoring Service&lt;/a&gt;. Rafay’s &lt;a href="https://rafay.co/platform/kubernetes-operations-platform/"&gt;Kubernetes Operations Platform&lt;/a&gt; delivers the visibility, monitoring, and other capabilities you need to ensure the success of your multi-cloud, multi-cluster Kubernetes environment.&lt;/p&gt;

&lt;p&gt;Ready to find out why so many enterprises and platform teams have partnered with Rafay to streamline Kubernetes operations?  &lt;a href="https://rafay.co/start/"&gt;Sign up for a free trial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>cloud</category>
      <category>bestpractices</category>
    </item>
  </channel>
</rss>
