<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rich Burroughs</title>
    <description>The latest articles on DEV Community by Rich Burroughs (@richburroughs).</description>
    <link>https://dev.to/richburroughs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F12358%2Fcc108e63-e8ce-4326-90f6-f12a535d6be8.jpeg</url>
      <title>DEV Community: Rich Burroughs</title>
      <link>https://dev.to/richburroughs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/richburroughs"/>
    <language>en</language>
    <item>
      <title>Cyclops: Platform Engineering for the Rest of Us</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Mon, 03 Feb 2025 17:52:36 +0000</pubDate>
      <link>https://dev.to/cyclops-ui/cyclops-platform-engineering-for-the-rest-of-us-57gf</link>
      <guid>https://dev.to/cyclops-ui/cyclops-platform-engineering-for-the-rest-of-us-57gf</guid>
      <description>&lt;p&gt;Platform engineering is possibly the biggest concept to take hold in infrastructure over the last 5+ years, and there’s a big reason why. For decades, application engineers have dealt with systems that have constantly thrown roadblocks and delays in their way. Platform engineers address this problem by building systems that enable self-service and provide useful abstractions that help those other engineers build and run their applications. As we know, enabling self-service helps both productivity and developer happiness.&lt;/p&gt;

&lt;p&gt;However, building and maintaining a platform can be expensive, and not every organization has the budget for a dedicated platform team. For organizations that do have platform teams, there’s an ongoing tension between building custom tools, adopting open source tools, and buying commercial solutions. Each option requires some kind of investment, but for many teams, building platforms largely composed of open source tools is a strong choice: the team doesn’t spend much on software licenses and isn’t stuck maintaining all of the code.&lt;/p&gt;

&lt;p&gt;In this post, we’ll look at &lt;a href="https://cyclops-ui.com/" rel="noopener noreferrer"&gt;Cyclops&lt;/a&gt;, an &lt;a href="https://github.com/cyclops-ui/cyclops" rel="noopener noreferrer"&gt;open source tool&lt;/a&gt; for building developer platforms. Cyclops lets teams deploy and manage applications using Helm charts as templates. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;The main requirement for this tutorial is a Kubernetes cluster. In this example, we’ll use &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt; to provision a cluster, but feel free to use a different method if you prefer. If you’re not already using kind, you can install it using the &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installation" rel="noopener noreferrer"&gt;instructions in the docs&lt;/a&gt;. You also need a local Docker-compatible daemon to use kind, like Docker, Podman, or Colima.&lt;/p&gt;

&lt;p&gt;You will also need kubectl. You can find instructions for installing kubectl in the &lt;a href="https://kubernetes.io/docs/tasks/tools/#kubectl" rel="noopener noreferrer"&gt;Kubernetes docs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provision a cluster and install Cyclops
&lt;/h2&gt;

&lt;p&gt;First, provision a cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --name cyclops-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your kube context should already be set to point to the kind cluster. You can verify that you can connect by listing the namespaces in the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, install Cyclops into your cluster with kubectl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.15.4/install/cyclops-install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you’ll see from the output, Cyclops is composed of Kubernetes-native objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;customresourcedefinition.apiextensions.k8s.io/modules.cyclops-ui.com created
customresourcedefinition.apiextensions.k8s.io/templateauthrules.cyclops-ui.com created
customresourcedefinition.apiextensions.k8s.io/templatestores.cyclops-ui.com created
namespace/cyclops created
serviceaccount/cyclops-ctrl created
clusterrole.rbac.authorization.k8s.io/cyclops-ctrl created
clusterrolebinding.rbac.authorization.k8s.io/cyclops-ctrl created
deployment.apps/cyclops-ui created
service/cyclops-ui created
networkpolicy.networking.k8s.io/cyclops-ui created
deployment.apps/cyclops-ctrl created
service/cyclops-ctrl created
networkpolicy.networking.k8s.io/cyclops-ctrl created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There should now be two pods running in a new namespace called &lt;code&gt;cyclops&lt;/code&gt;. We can view them with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n cyclops
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                            READY   STATUS    RESTARTS   AGE
cyclops-ctrl-7984df7589-wv4dw   1/1     Running   0          36s
cyclops-ui-64c4cdd7f7-fxnb7     1/1     Running   0          36s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;cyclops-ctrl&lt;/code&gt; pod handles the Cyclops API, manages the CRDs, and communicates with the Kubernetes API server. The &lt;code&gt;cyclops-ui&lt;/code&gt; pod runs the Cyclops web UI.&lt;/p&gt;

&lt;p&gt;Next, we will install a set of example templates that we will use to see how Cyclops works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.15.4/install/demo-templates.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, let’s connect to the Cyclops web UI. First, forward a port to it using kubectl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/cyclops-ui 3000:3000 -n cyclops
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, connect to the UI with your browser at &lt;a href="http://localhost:3000/" rel="noopener noreferrer"&gt;http://localhost:3000/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy Nginx with a generic app template
&lt;/h2&gt;

&lt;p&gt;For the first portion of the demo, we’ll deploy Nginx using Cyclops.&lt;/p&gt;

&lt;p&gt;Open a new terminal window/tab and create a namespace called nginx.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, go back to the Cyclops UI tab in your browser. Click &lt;em&gt;Add module&lt;/em&gt; in the upper right corner of the Cyclops screen. Think of a module in Cyclops as an application and all of the other Kubernetes resources required to run it.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulpv0j2qksghr1gi3ik6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulpv0j2qksghr1gi3ik6.png" alt="the add module screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we’ll select a template. Cyclops templates are Helm charts, and Cyclops can use charts from GitHub repositories, Helm chart repositories, or OCI repositories.&lt;/p&gt;
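
&lt;p&gt;As a rough illustration of how a template source is registered declaratively, the demo templates we installed create &lt;code&gt;TemplateStore&lt;/code&gt; resources along these lines. The field names below are an assumption based on the CRDs created during installation, so check the manifests in the Cyclops repo for the exact schema.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: cyclops-ui.com/v1alpha1
kind: TemplateStore
metadata:
  name: app-template
spec:
  # Where Cyclops fetches the Helm chart from (hypothetical values)
  repo: https://github.com/cyclops-ui/templates
  path: app-template
  version: main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;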

&lt;p&gt;Select &lt;code&gt;app-template&lt;/code&gt; from the &lt;em&gt;Template&lt;/em&gt; pull-down list. This generic template creates a Kubernetes deployment and service for an application.&lt;/p&gt;

&lt;p&gt;For the module name, enter &lt;code&gt;nginx&lt;/code&gt;. Click on &lt;em&gt;Advanced&lt;/em&gt;. Select the &lt;code&gt;nginx&lt;/code&gt; namespace from the &lt;em&gt;Target namespace&lt;/em&gt; pull-down menu.   &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farc8a8yy9n35u7j6hafk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Farc8a8yy9n35u7j6hafk.png" alt="the define module screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;em&gt;General&lt;/em&gt;. You can see that some defaults have been populated, like the image name and version. Leave them set to the defaults. Those values are pulled from the values.yaml file &lt;a href="https://github.com/cyclops-ui/templates/blob/main/app-template/values.yaml" rel="noopener noreferrer"&gt;in the Helm chart&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You’ll see some other configurable options under &lt;em&gt;Scaling&lt;/em&gt; and &lt;em&gt;Networking&lt;/em&gt;. You can also leave those set to the defaults. Scroll down and hit the &lt;em&gt;Deploy&lt;/em&gt; button in the bottom right of the window.&lt;/p&gt;

&lt;p&gt;On the next screen, you’ll see more information about the module and its status as it’s deployed. There’s a link to the template it was deployed from and information about the Kubernetes resources that have been created.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw17qy9s648thhxmz0yhf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw17qy9s648thhxmz0yhf.png" alt="the module status screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;em&gt;nginx Deployment&lt;/em&gt; to see that one pod is running. We can confirm that info in the terminal by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That output should match what you saw in Cyclops.&lt;/p&gt;

&lt;p&gt;Go back to the Cyclops browser tab. Under &lt;em&gt;Actions&lt;/em&gt;, you can see several buttons for making changes to a running module. &lt;em&gt;Edit&lt;/em&gt; lets you change the configuration. &lt;em&gt;Reconcile&lt;/em&gt; re-creates the resources. &lt;em&gt;Rollback&lt;/em&gt; reverts to the previous version of the module if you have made changes. You can view the code for the module using the &lt;em&gt;Module manifest&lt;/em&gt; button, and &lt;em&gt;Rendered manifest&lt;/em&gt; lets you view the Kubernetes YAML for the running resources.&lt;/p&gt;

&lt;p&gt;Click &lt;em&gt;Edit&lt;/em&gt;, and then change the number of replicas to 2 under &lt;em&gt;Scaling&lt;/em&gt;. Then, hit the &lt;em&gt;Deploy&lt;/em&gt; button.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqx7ejxuph90o18d1yo7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqx7ejxuph90o18d1yo7.png" alt="The edit module screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the status goes back to green, hit the &lt;em&gt;Rollback&lt;/em&gt; button. You’ll see a list of the module's previous generations (versions). There should just be one. Click the Rollback button for that generation, and you’ll see a diff of the changes that were made.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgub37webjnkxnhup5ib0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgub37webjnkxnhup5ib0.png" alt="The diff on the rollback screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;em&gt;OK&lt;/em&gt; button to perform the rollback.&lt;/p&gt;

&lt;p&gt;Some teams won’t want to manage changes through the Cyclops UI, of course. You can instead use Cyclops with GitOps tools like Argo CD. There’s a GitHub repo with examples &lt;a href="https://github.com/cyclops-ui/gitops-starter" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
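
&lt;p&gt;In a GitOps workflow, the module definition lives in Git as a &lt;code&gt;Module&lt;/code&gt; custom resource, and a tool like Argo CD applies it to the cluster. A minimal sketch might look like the following; the exact spec fields here are an assumption, so consult the starter repo for the real schema.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: cyclops-ui.com/v1alpha1
kind: Module
metadata:
  name: nginx
  namespace: cyclops
spec:
  # Which template to render (hypothetical values)
  template:
    repo: https://github.com/cyclops-ui/templates
    path: app-template
    version: main
  # Values passed through to the Helm chart
  values:
    replicas: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;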

&lt;p&gt;Finally, let’s clean up. Click &lt;em&gt;Delete&lt;/em&gt;, and type in the module name to confirm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf1e7zdlivlnnkprpscs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf1e7zdlivlnnkprpscs.png" alt="The delete module screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy Redis with Cyclops
&lt;/h2&gt;

&lt;p&gt;We’ve seen how to deploy an app with a custom Helm chart, but Cyclops can also deploy applications using existing Helm charts. Let’s look at how that works by deploying Redis using the Bitnami chart, which is included in the demo templates we installed earlier.&lt;/p&gt;

&lt;p&gt;At the terminal, create a namespace called &lt;code&gt;redis&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the Cyclops browser tab, click &lt;em&gt;Templates&lt;/em&gt; in the left nav bar. Type “redis” in the search box to see the info for the installed template. The source for it is the Bitnami GitHub repo, and it’s coming from the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtglm4uymshduets05v6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtglm4uymshduets05v6.png" alt="The template search results" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, click on &lt;em&gt;Modules&lt;/em&gt; in the left nav and then &lt;em&gt;Add module&lt;/em&gt; again. Select &lt;code&gt;redis&lt;/code&gt; from the &lt;em&gt;Template&lt;/em&gt; pull-down list, and type in “redis-cache” for the module name. Click on &lt;em&gt;Advanced&lt;/em&gt; and select &lt;code&gt;redis&lt;/code&gt; from the list of namespaces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgip3b0ukxsjws8typ42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgip3b0ukxsjws8typ42.png" alt="The add module screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down. You will see many other options that can be customized for the Redis install. Feel free to expand and view any that interest you, but leave them set to the defaults. Then, click the &lt;em&gt;Deploy&lt;/em&gt; button in the bottom right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46nz3d0sok5tly8vit2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46nz3d0sok5tly8vit2q.png" alt="The deploy module screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It may take a bit for Kubernetes to spin up all the resources, but eventually, they should all go green.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfejsai7yilj81dri26o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfejsai7yilj81dri26o.png" alt="the module status screen" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can confirm the pods are running with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the primary (&lt;code&gt;redis-master&lt;/code&gt;) pod and three replicas, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          4m6s
redis-replicas-0   1/1     Running   0          2m21s
redis-replicas-1   1/1     Running   0          3m10s
redis-replicas-2   1/1     Running   0          2m46s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use any Helm chart with Cyclops in this way to let users deploy the applications they need to run. For a complex chart like this one, you can set default values that suit your team or even create a custom Helm chart that exposes fewer options.&lt;/p&gt;
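
&lt;p&gt;For example, if your team always runs Redis with replication and authentication enabled, you could bake overrides like these into the chart’s values.yaml. The keys shown are a sketch based on common Bitnami chart conventions; verify them against the chart’s own values.yaml before using them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative overrides for the Bitnami Redis chart
architecture: replication
replica:
  replicaCount: 3
auth:
  enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;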

&lt;p&gt;That’s it for the tutorial. To clean up, you can delete the kind cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind delete cluster -n cyclops-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We’ve learned how to deploy applications with Cyclops using pre-existing Helm charts and custom ones, and we’ve seen how Cyclops allows teams to easily expose the abstractions they need for developers to deploy and manage their apps.&lt;/p&gt;

&lt;p&gt;Whether you’re at an organization that’s not staffed for a dedicated platform team or your platform team would like an easy way to provide self-service for developers, Cyclops could be a great fit for your workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn more
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;a href="https://cyclops-ui.com/docs/about/" rel="noopener noreferrer"&gt;Cyclops docs&lt;/a&gt; are a great place to start.
&lt;/li&gt;
&lt;li&gt;Check out their &lt;a href="https://github.com/cyclops-ui/cyclops" rel="noopener noreferrer"&gt;open-source repository&lt;/a&gt; and support them with a star.
&lt;/li&gt;
&lt;li&gt;There’s a &lt;a href="https://discord.com/invite/8ErnK3qDb3" rel="noopener noreferrer"&gt;community Discord&lt;/a&gt; if you’d like help or to give your feedback.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>KubeCon Chicago Wrapup</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Tue, 21 Nov 2023 22:29:53 +0000</pubDate>
      <link>https://dev.to/richburroughs/kubecon-chicago-wrapup-ebm</link>
      <guid>https://dev.to/richburroughs/kubecon-chicago-wrapup-ebm</guid>
      <description>&lt;p&gt;&lt;a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/"&gt;KubeCon + CloudNativeCon North America 2023&lt;/a&gt; was held in Chicago from November 6-9. This was the third North American KubeCon since the start of the COVID-19 pandemic.&lt;/p&gt;

&lt;p&gt;I wrote my first KubeCon wrapup post for KubeCon San Diego in 2019. If you've read the past wrapups, you'll know that I developed a specific style for them. I live-tweeted about the talks I attended and other happenings and then pulled in my tweets and others for the posts.&lt;/p&gt;

&lt;p&gt;Given the weird place that X/Twitter is at and the engagement problems many people have noticed, I decided not to live tweet this time. I took notes and wrote up my thoughts afterward.&lt;/p&gt;

&lt;p&gt;This was also my last KubeCon with Loft Labs. I've had a lot of fun talking about the company's tools (especially vCluster) with folks in the Kubernetes community, but it's time for a new challenge for me. I also really need a break. If you know someone in the cloud native space looking for a Developer Relations person, feel free to have them reach out to me as long as they can be a little patient with the start date. The best way to reach me is &lt;a href="https://linkedin.com/in/richburroughs"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Also, if you'd like to view the videos for the talks I recommend, they should be posted within a couple of weeks on the &lt;a href="https://www.youtube.com/c/cloudnativefdn"&gt;CNCF's YouTube channel&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-Conference Activities
&lt;/h2&gt;

&lt;p&gt;I spoke at KubeCon Detroit last year and at KubeCon Amsterdam in April, but I didn't have any talks accepted this time. I did present at two events before the conference proper, though.&lt;/p&gt;

&lt;p&gt;My first talk was at &lt;a href="https://cloud-native.rejekts.io/"&gt;Cloud Native Rejekts&lt;/a&gt;, one of my favorite community conferences. If you're unfamiliar with Rejekts, the idea is to give a space for people to present their ideas that weren't accepted for KubeCon. The speakers and talks are always very high quality, and a lot of my favorite people in the community show up. My talk was called Open Source Dev Containers with DevPod, and I had a lot of fun presenting it. I talked through the struggles involved with providing easy-to-use and repeatable dev environments and did a demo of DevPod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0c3bYahn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2b4prha160y3bweno1yx.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0c3bYahn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2b4prha160y3bweno1yx.jpeg" alt="Adrian Mouat from Chainguard speaking at Cloud Native Rejekts" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Adrian Mouat from Chainguard speaking at Cloud Native Rejekts&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I also did a lightning talk about vCluster at Multi-TenancyCon, one of the KubeCon co-located events. I didn't have much time to attend the rest of the event, but multi-tenancy is a topic that really interests me. It was fun to speak at co-located events, and at some point the CNCF started giving free KubeCon registrations to the co-located event speakers, so that was nice.&lt;/p&gt;

&lt;p&gt;The one downside of attending these events is that they make the KubeCon week much longer. In my case, the trip went from five days for the conference proper to eight days. It was worth it for me, but it's something to consider.&lt;/p&gt;

&lt;p&gt;I also attended the Lightning talks on Monday evening, which were a lot of fun. One of my favorites was the talk about the CNCF's &lt;a href="https://contribute.cncf.io/about/deaf-and-hard-of-hearing/"&gt;Deaf and Hard of Hearing Working Group&lt;/a&gt;. I wasn't aware of their work around making events and meetings more accessible, and it's fantastic. I also loved Tim Hockin's talk about the problems with the Service primitive in Kubernetes and the Gateway API.&lt;/p&gt;

&lt;p&gt;This was my first time making it to the Lightning Talks, and I will be back in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day One - Tuesday
&lt;/h2&gt;

&lt;p&gt;This time around, the KubeCon schedule changed from Wednesday-Friday to Tuesday-Thursday. I liked that change, as it meant traveling home on Friday instead of on the weekend. Thank you, CNCF.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keynotes
&lt;/h3&gt;

&lt;p&gt;The opening morning's keynotes focused a lot on running AI/ML workloads. I didn't sense as strong a theme at this KubeCon compared to some in the past, but if there was a central theme, that was it. My friend Joseph Sandoval did a panel on AI/ML, and Taylor Dolezal did a panel with end users.&lt;/p&gt;

&lt;p&gt;My favorite part of these keynotes was the panel about sustainability called Environmental Sustainability in the Cloud Is Not a Mythical Creature, hosted by Frederick Kautz. This is a super important topic, and I'm always happy to hear about the advances in this area. The talk mentioned Kepler, a tool I've wanted to look at.&lt;/p&gt;

&lt;p&gt;There were also updates from the CNCF's graduated projects, which now include Cilium and Istio.&lt;/p&gt;

&lt;p&gt;The CNCF also put together a very nice In Memoriam video with tributes to folks in the cloud native community who died this year. I was happy to see my friends Kris Nova and Carolyn Van Slyck included. They were both fantastic people who contributed to the community, and their losses were huge blows. There is a &lt;a href="https://github.com/cncf/memorials"&gt;GitHub repo&lt;/a&gt; where you can add your memories of those folks.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Practical Guide to eBPF Licensing: Or How I Learned to Stop Worrying and Love the GPL - Jef Spaleta &amp;amp; Bill Mulligan, Isovalent
&lt;/h3&gt;

&lt;p&gt;I don't focus much on open source licensing, but I wanted to learn how it impacts projects using eBPF. Jef and Bill both work at Isovalent, and they did a thorough job explaining the situation. The portions of eBPF projects that run in the kernel are required to use the GPL, but CNCF projects must use the Apache 2.0 license. The recommendation was to use the GPL for the kernel bits and Apache 2.0 for the parts of the software that run in userspace; the CNCF has made an exception allowing eBPF projects to use the GPL for the code that runs in the kernel.&lt;/p&gt;

&lt;p&gt;This was a situation that I wasn't aware of at all. If you are working with eBPF projects, this talk is worth watching.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Is a Secure Connection Not Encrypted? and Other Stories - Liz Rice, Isovalent
&lt;/h3&gt;

&lt;p&gt;Next, I went to another eBPF talk. If you have read my last few KubeCon wrapups, you know already that it's an interest of mine. I'm also a big fan of Liz's. We had &lt;a href="https://share.transistor.fm/s/4eb55f1f"&gt;a great conversation&lt;/a&gt; on my podcast Kube Cuddle last year.&lt;/p&gt;

&lt;p&gt;This talk covered how Cilium and other service meshes handle encryption and identity. It's a bit hard for me to sum up because it was pretty technical. Networking also isn't my strongest area. But if you are interested in networking and Cilium, check this one out. Liz is a great speaker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demystifying Service Mesh: Separating Hype from Practicality - Brian Redmond &amp;amp; Ally Ford, Microsoft
&lt;/h3&gt;

&lt;p&gt;This was an engaging introduction to service mesh. It focused on the features most meshes provide, like observability, tracing, traffic management, blue/green deploys, canary testing, A/B testing, and even fault injection.&lt;/p&gt;

&lt;p&gt;You may have heard the term Progressive Delivery (&lt;a href="https://redmonk.com/jgovernor/2018/08/06/towards-progressive-delivery/"&gt;coined by James Governor&lt;/a&gt;) used to refer to some of these practices, as well as things like feature flags. Progressive delivery practices allow teams to deploy more safely, and deployment safety is a huge factor in developing high-performing teams. If you've ever been on call, you understand. Service meshes make things like using canaries to test and having rollbacks easy to implement.&lt;/p&gt;

&lt;p&gt;If you are new to service mesh and want to see what it can offer your team, check out this talk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Mesh Battle Scars: Technology, Timing, and Tradeoffs - Keith Mattix, Microsoft; John Howard, Google; Lin Sun, &lt;a href="http://solo.io"&gt;solo.io&lt;/a&gt;; Thomas Graf, Isovalent; Flynn, Buoyant
&lt;/h3&gt;

&lt;p&gt;Could I go to yet another talk about service mesh? Yes, I could. This one was very different, though. It was a panel hosted by Keith Mattix, and the panelists represented Cilium (Thomas), Istio (Lin and John), and Linkerd (Flynn).&lt;/p&gt;

&lt;p&gt;The focus was on areas where the approaches of the tools differ, and things got a bit spicy at times (which was intended). Keith was a very entertaining moderator, and the panelists were all experts in their fields. If you are interested in the differences between these projects or like panels that get a bit contentious, this presentation is for you. I did enjoy it, and it was great to have something entertaining at the end of the day.&lt;/p&gt;

&lt;p&gt;I decided to take it easy in the evening after the talks. This was already day five of my trip, and I was feeling it. These big conferences like KubeCon can be very draining, so I focus on pacing myself. If you are new to this kind of conference, it's important not to feel like you have to do everything possible. It's okay to take a break and recharge or to spend time on the hallway track instead of seeing a talk in every slot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day Two - Wednesday
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Keynotes
&lt;/h3&gt;

&lt;p&gt;The day two keynotes began with a talk from Hemanth Malla and Laurent Bernaille of Datadog, who talked through an incident that caused an almost 24-hour outage for Datadog. That outage would be rough for any application, but I'm sure many customers were caught without a backup method to observe their systems. I have a lot of respect for folks who will talk openly about outages like this and share learnings with the community.&lt;/p&gt;

&lt;p&gt;Other highlights for me were a panel on inclusion and Jeremy Rickard from Microsoft talking about Long Term Support (LTS) for Kubernetes. A Kubernetes Enhancement Proposal (KEP) is open to change the supported period from 9 months to a year, which is a great idea.&lt;/p&gt;

&lt;p&gt;The Community Awards have always been a favorite part of KubeCon for me. Those folks put a lot of time and energy into improving the community, and it's great to see them get recognized for it. You can see a list of the award winners in this &lt;a href="https://www.cncf.io/announcements/2023/11/08/cloud-native-computing-foundation-announces-2023-community-awards-winners/"&gt;CNCF blog post&lt;/a&gt;. Congratulations to all of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zcML9eXe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vphasqnn1oog7jz3t33.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zcML9eXe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vphasqnn1oog7jz3t33.jpeg" alt="Winner of the 2023 Top Documentarian award, Divya Mohan" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Winner of the 2023 Top Documentarian award, Divya Mohan&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Learning Kubernetes by Chaos – Breaking a Kubernetes Cluster to Understand the Components - Ricardo Katz, VMware &amp;amp; Anderson Duboc, Google Cloud
&lt;/h3&gt;

&lt;p&gt;This was one of my favorite talks of the conference. The premise was to fix a broken kind cluster bit by bit, and the speakers explained the different components of the cluster as they fixed them (apiserver, controller manager, scheduler, etc.). I don't want to spoil any of the jokes, so I will just say that this was a brilliant combination of humor and education. I highly recommend watching it, especially for beginners.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dungeons and Deployments: Leveling up in Kubernetes - Noah Abrahams, Oracle; Natali Vlatko, Cisco; Kat Cosgrove, Dell; Seth McCombs, AcuityMD
&lt;/h3&gt;

&lt;p&gt;The other talk I saw on Wednesday was another one filled with jokes. In this one, the speakers explained some parts of Kubernetes by playing a tabletop role-playing game. There were a lot of jokes and plenty of puns that left people in the audience groaning. This one was heavier on the humor than the education, but it was a lot of fun.&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentary Film - eBPF: Unlocking the Kernel
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UQlU1G0t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1sht47dog3m52nti6dau.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UQlU1G0t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1sht47dog3m52nti6dau.jpeg" alt="The poster for the film" width="414" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was very much looking forward to the premiere of the new documentary about the creation and growth of eBPF. I loved the &lt;a href="https://www.youtube.com/watch?v=BE77h7dmoQU"&gt;Kubernetes Documentary&lt;/a&gt; from the same filmmakers, but I knew the Kubernetes story better going in than I knew this one.&lt;/p&gt;

&lt;p&gt;I was initially introduced to eBPF through Brendan Gregg, who was at Netflix back then. Brendan was posting on Twitter about the Linux performance flame graphs he generated with eBPF, and I saw him speak at SRECon about that topic. But it was several years later before I understood that eBPF can do much more, including networking and other observability.&lt;/p&gt;

&lt;p&gt;I could go on a lot more about the film, but I think I will write a separate review of it soon. So, for now, I will leave off by saying that I recommend it for folks interested in eBPF and that you can watch it for free &lt;a href="https://www.youtube.com/watch?v=Wb_vD3XZYOA"&gt;on YouTube&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After the film, I headed over with some of the Isovalent folks to their post-event party. I saw a lot of friends and had a great time. I feel fortunate that I've been able to connect with so many people in this community and learn from them, whether it's about open source tools or other aspects of what we do, like community and inclusion. Thanks to Isovalent for throwing such a fun party.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day Three - Thursday
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Keynotes
&lt;/h3&gt;

&lt;p&gt;I was dragging by Thursday, day seven of my trip, so I missed the keynotes. I heard from multiple people that Tim Hockin's keynote was great, though, so I watched it afterward on the conference platform.&lt;/p&gt;

&lt;p&gt;Tim's talk was called Kubernetes in the Second Decade. It was a very interesting look from a Kubernetes expert at what directions the project should take in the next ten years and what the challenges are. Tim covered topics like running AI/ML workloads, multi-cluster, complexity, and reliability. He introduced the concept of a complexity budget, which I loved. He said that there's a finite amount of complexity we can add to Kubernetes and that we need to say no to some things now so we can do other cool things later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3Vd7ZVD6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/alliqyi8mbhgp3ibgfrw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3Vd7ZVD6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/alliqyi8mbhgp3ibgfrw.jpeg" alt="A screenshot from the conference streaming platform. There's a cartoon of Tim's face with the quote, &amp;quot;The impact of AI/ML will be on the same scale as the impact of the Internet itself." width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I strongly recommend watching Tim's talk when the videos are released.&lt;/p&gt;

&lt;p&gt;Despite my final-day fatigue, I made it to a couple of sessions on Thursday.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sidecar Containers Are Built-in to Kubernetes: What, How, and Why Now? - Todd Neal, Amazon &amp;amp; Sergey Kanzhelev, Google
&lt;/h3&gt;

&lt;p&gt;Making sidecar containers first-class citizens is a change I wasn't aware of before I saw the KubeCon schedule. This talk explained how the new sidecar containers work, and it should be helpful for many people, as loads of us use sidecars.&lt;/p&gt;

&lt;p&gt;The new sidecars are basically init containers that continue to run, with their restart policy set to Always. They start before the "main" container and end after it so that they will capture data like logs and metrics for the primary container’s entire lifecycle.&lt;/p&gt;

&lt;p&gt;The feature is alpha in Kubernetes 1.28 and will be beta in 1.29. It’s super useful work from the SIG Node team. Big thanks to them.&lt;/p&gt;
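&lt;p&gt;As a sketch of what this looks like in a Pod spec (assuming Kubernetes 1.28+ with the SidecarContainers feature gate enabled; the image and names below are illustrative), the sidecar is declared as an init container with its restart policy set to Always:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    # restartPolicy: Always is what marks this init container as a sidecar:
    # it keeps running alongside the main container instead of exiting first.
    - name: log-shipper
      image: busybox:1.36
      restartPolicy: Always
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
  containers:
    - name: main-app
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date | tee -a /var/log/app/app.log; sleep 5; done"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
  volumes:
    - name: app-logs
      emptyDir: {}
```

&lt;p&gt;Because the sidecar starts first and terminates last, the log shipper here sees every line the main container writes.&lt;/p&gt;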

&lt;h3&gt;
  
  
  &lt;strong&gt;Releasing Kubernetes and Beyond: Flexible and Fast Delivery of Packages - Grace Nguyen, University of Waterloo; Adolfo Garcia Veytia, Chainguard; John Anderson, Ditto&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The final talk I saw at KubeCon Chicago was in the last slot of the day, and it was a presentation by members of the SIG Release team.&lt;/p&gt;

&lt;p&gt;If you've read my past wrapup posts, you know that I have a lot of love for SIG Release. They do a lot for the Kubernetes project, and it's all very much in the "chop wood, carry water" vein. Release engineering tends not to get much attention until something goes wrong, and it's very challenging work. So I really appreciate the folks on this team.&lt;/p&gt;

&lt;p&gt;The talk covered a lot of things that are happening with SIG Release. The team is still moving from the original Google infrastructure to the new infra for the project. It's great that Google donated so much, but the project doesn't want to be too dependent on one company. The team has also been working with SIG Docs on a release checklist.&lt;/p&gt;

&lt;p&gt;If you want to get involved with SIG Release, they have a program where people can shadow the current members to learn. You can find &lt;a href="https://github.com/kubernetes/sig-release/blob/master/release-team/shadows.md"&gt;more info here&lt;/a&gt;. From what I've heard, it's a great program.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;That was it for my KubeCon Chicago. Overall, I had a great time and am glad I could attend. I would have loved to see more of Chicago (the only pizza I had was Detroit-style), but hopefully I can make it back there.&lt;/p&gt;

&lt;p&gt;I think I enjoyed the April event in Amsterdam a bit more. It feels to me like the US events are not bouncing back from the pandemic as well as the European ones. San Diego, the last KubeCon before the pandemic, had something like 15,000 attendees. I heard that Chicago was more like 8,000 to 9,000 registered, but the crowd didn’t feel that big.&lt;/p&gt;

&lt;p&gt;There are additional reasons for this besides COVID-19, like companies cutting travel budgets. I know some people who had to travel to Detroit last year on their own dimes, and that may still have been the case.&lt;/p&gt;

&lt;p&gt;But even a KubeCon that's a bit smaller is a fantastic time for me. I got to see so many friends from the community and learn some things, too.&lt;/p&gt;

&lt;p&gt;I don't know if I'll be in Paris next spring. It will depend a lot on where I'm working. But I should be able to attend the next North American event in Salt Lake City.&lt;/p&gt;

</description>
      <category>kubecon</category>
      <category>kubernetes</category>
      <category>ebpf</category>
      <category>cilium</category>
    </item>
    <item>
      <title>Managing Access to Kubernetes Clusters for Engineering Teams</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Mon, 07 Feb 2022 17:33:50 +0000</pubDate>
      <link>https://dev.to/loft/managing-access-to-kubernetes-clusters-for-engineering-teams-1dai</link>
      <guid>https://dev.to/loft/managing-access-to-kubernetes-clusters-for-engineering-teams-1dai</guid>
      <description>&lt;p&gt;by Daniel Olaogun&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; is a container orchestration tool for managing, deploying, and scaling containerized applications. It helps engineering teams deploy and manage applications across multiple servers with fewer complexities. Some of its best-known features include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#how-a-replicationcontroller-works" rel="noopener noreferrer"&gt;Self-healing&lt;/a&gt;, which automatically restarts your application container if it crashes&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noopener noreferrer"&gt;Horizontal scaling&lt;/a&gt; your application up or down as the traffic load increases or decreases&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Automatic rollouts and rollbacks&lt;/a&gt; for gradually deploying an updated version of your application or quickly rolling back to the previous version if you detect an issue &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Kubernetes, your containerized application is abstracted by a pod that can be replicated in a node or across many nodes. Nodes run your containerized applications and can contain one or multiple pods depending on the node resources. A set of nodes is called a cluster. &lt;/p&gt;

&lt;p&gt;As your Kubernetes cluster grows, you may need help managing it. This means you’ll need the ability to add users to your cluster and provide the required permissions, among other tasks. In this article, you’ll learn how to manage access to your Kubernetes cluster as well as how to manage the users who are given this access.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why You Need Kubernetes Clusters
&lt;/h2&gt;

&lt;p&gt;A Kubernetes cluster contains control plane and worker nodes that work together to handle and distribute traffic from within and outside the cluster. Worker nodes are a set of virtual or physical machines that run your containerized applications, while the control plane node controls the worker nodes. The control plane node manages and maintains the desired state of the cluster, scheduling pods to worker nodes based on available resources. It also provides the API endpoint that users interact with.&lt;/p&gt;

&lt;p&gt;The following are some important use cases for Kubernetes clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Provisioning of Multiple Environments
&lt;/h3&gt;

&lt;p&gt;During the development and release of your application, you need environments for development and testing as well as a separate production environment for the application release. A Kubernetes cluster provides these environments using &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="noopener noreferrer"&gt;namespaces&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Previously, managing multiple environments for a single application meant spinning up separate virtual machines for each environment. This was tedious, and managing the differences between the environments could be difficult. Kubernetes clusters simplify the process. &lt;/p&gt;
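&lt;p&gt;As a minimal sketch (the namespace names and labels here are illustrative), the per-environment namespaces can be declared in a single manifest:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    environment: development
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: production
```

&lt;p&gt;After applying the manifest with &lt;code&gt;kubectl apply&lt;/code&gt;, you target an environment by passing the &lt;code&gt;--namespace&lt;/code&gt; flag to your kubectl commands.&lt;/p&gt;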

&lt;h3&gt;
  
  
  Running Multiple Deployments
&lt;/h3&gt;

&lt;p&gt;Kubernetes clusters allow you to run multiple deployments of your applications, for example, development, testing, and production deployments of the same application, in one cluster. You can also deploy multiple applications with different functionalities. However, these deployments will share the resources provided in the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Easy Scaling of Deployments
&lt;/h3&gt;

&lt;p&gt;As your application traffic increases, the application consumes more resources. Increasing its resources ensures that the increased traffic won’t cause downtime. With Kubernetes, you can scale your application deployments by replicating them on multiple nodes in your cluster. This distributes the incoming traffic load across all the nodes running your application in the cluster.&lt;/p&gt;
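&lt;p&gt;As a hedged sketch of this in practice (the Deployment name &lt;code&gt;web&lt;/code&gt; and the thresholds are illustrative), a HorizontalPodAutoscaler can adjust the replica count automatically based on CPU utilization:&lt;/p&gt;

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    # Add replicas when average CPU utilization across pods exceeds 70%.
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

&lt;p&gt;You can also scale manually at any time with &lt;code&gt;kubectl scale deployment web --replicas=5&lt;/code&gt;.&lt;/p&gt;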

&lt;h2&gt;
  
  
  Managing Access in Your Kubernetes Cluster
&lt;/h2&gt;

&lt;p&gt;As your organization grows and the number of deployments in your Kubernetes cluster skyrockets, you need more users to help manage the cluster. However, you should also ensure that you can effectively manage the access of those users.&lt;/p&gt;

&lt;p&gt;Access to the Kubernetes API is managed through authentication, authorization, and admission control. When a user makes a request through the API using a client such as kubectl, Kubernetes first checks the user’s identity; if the user can’t be verified, the request is rejected. If authentication succeeds, the request moves to authorization, which confirms that the user has the permissions required for the request. If they don’t, the request is rejected. Finally, the request goes through admission control, which validates the request itself (such as verifying that the container image you want to deploy is secure).&lt;/p&gt;

&lt;h3&gt;
  
  
  Granting Users Access
&lt;/h3&gt;

&lt;p&gt;Users must be recognized by Kubernetes before they can connect to your cluster. However, Kubernetes does not provide the ability to manage users out of the box. Users are generally managed outside of Kubernetes using services like Microsoft Active Directory, Okta OpenID Connect, AWS Identity and Access Management, and &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;Loft&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Managed Kubernetes services such as &lt;a href="https://docs.microsoft.com/en-us/azure/aks/managed-aad" rel="noopener noreferrer"&gt;Azure Kubernetes Service&lt;/a&gt; (AKS), &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="noopener noreferrer"&gt;Amazon Elastic Kubernetes Service&lt;/a&gt; (Amazon EKS), and &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="noopener noreferrer"&gt;Google Kubernetes Engine&lt;/a&gt; (GKE) incorporate their identity management system with Kubernetes services to manage and authenticate users.&lt;/p&gt;

&lt;p&gt;If you have a self-managed Kubernetes cluster, there are multiple services available to manage user access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://loft.sh/docs/getting-started/single-sign-on" rel="noopener noreferrer"&gt;Single sign-on&lt;/a&gt; with &lt;a href="https://loft.sh" rel="noopener noreferrer"&gt;Loft&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.okta.com/blog/2021/11/08/k8s-api-server-oidc" rel="noopener noreferrer"&gt;OpenID Connect&lt;/a&gt; with &lt;a href="https://www.okta.com/" rel="noopener noreferrer"&gt;Okta&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dexidp.io/docs/kubernetes/" rel="noopener noreferrer"&gt;Kubernetes authentication&lt;/a&gt; through &lt;a href="https://dexidp.io/" rel="noopener noreferrer"&gt;Dex&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/swlh/how-we-effectively-managed-access-to-our-kubernetes-cluster-38821cf24d57" rel="noopener noreferrer"&gt;Access management&lt;/a&gt; using &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl" rel="noopener noreferrer"&gt;OpenSSL&lt;/a&gt;; note that you should send the &lt;code&gt;kube-config&lt;/code&gt; file in a secure and encrypted manner to your authenticated users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also use the above options to handle user access for managed Kubernetes services. The Kubernetes documentation provides other &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="noopener noreferrer"&gt;authentication strategies&lt;/a&gt; for authenticating users in your cluster.&lt;/p&gt;
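&lt;p&gt;As a minimal sketch of the OpenSSL-based approach (all names here are hypothetical, and the throwaway CA only stands in for the cluster CA; in a real cluster, the CSR would be signed by the cluster's own CA, for example through the certificates API):&lt;/p&gt;

```shell
# Create a throwaway CA to stand in for the cluster CA.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=demo-ca" -days 1 -out ca.crt

# Generate a private key and a certificate signing request for the user
# "jane" in the group "dev-team" (Kubernetes reads the username from CN
# and the group from O).
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane/O=dev-team" -out jane.csr

# Sign the CSR with the CA to produce jane's client certificate.
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out jane.crt -days 1
```

&lt;p&gt;The resulting certificate and key would then go into the user's kubeconfig, for example with &lt;code&gt;kubectl config set-credentials&lt;/code&gt;.&lt;/p&gt;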

&lt;h3&gt;
  
  
  Managing Access
&lt;/h3&gt;

&lt;p&gt;Once users have been granted access to your Kubernetes cluster, there are several strategies to best manage that access.&lt;/p&gt;

&lt;h4&gt;
  
  
  Providing Only Needed Permissions
&lt;/h4&gt;

&lt;p&gt;Once you have successfully authenticated the users required in your cluster, give them just enough permissions to perform their duties. Depending on your team structure, it is not always a good idea for all users to have the same high-level access. Otherwise, a user might perform an operation without understanding its consequences, or a malicious user might change your privileges and lock you out of your own admin rights.&lt;/p&gt;

&lt;p&gt;There are different &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules" rel="noopener noreferrer"&gt;authorization modes&lt;/a&gt; in Kubernetes used for access control, including role-based access control (RBAC), attribute-based access control (ABAC), node, and webhook. However, RBAC is commonly used to implement user roles and permissions. For more information on how to implement RBAC in your Kubernetes cluster, check out the &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;
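&lt;p&gt;As a brief illustration of RBAC (the namespace, role, and user names are hypothetical), a Role granting read-only access to Pods and a RoleBinding attaching it to one user look like this:&lt;/p&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  # Allow read-only operations on Pods in the dev namespace only.
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-jane
  namespace: dev
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

&lt;p&gt;Binding narrow roles like this, rather than handing out &lt;code&gt;cluster-admin&lt;/code&gt;, is how the principle of least privilege is applied in practice.&lt;/p&gt;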

&lt;h4&gt;
  
  
  Decommissioning Users as Needed
&lt;/h4&gt;

&lt;p&gt;When a user is no longer a part of your cluster team, you should delete the user from the cluster. This prevents the user from continuing to access the cluster and performing unauthorized activities.&lt;/p&gt;

&lt;p&gt;Most of the user management platforms noted above allow you to remove users with ease. &lt;/p&gt;

&lt;h4&gt;
  
  
  Enabling Auditing
&lt;/h4&gt;

&lt;p&gt;Kubernetes auditing logs all actions performed in your cluster sequentially. Auditing your cluster gives you this information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happened?&lt;/li&gt;
&lt;li&gt;When did it happen?&lt;/li&gt;
&lt;li&gt;Who initiated it?&lt;/li&gt;
&lt;li&gt;On what did it happen?&lt;/li&gt;
&lt;li&gt;Where was it observed?&lt;/li&gt;
&lt;li&gt;From where was it initiated?&lt;/li&gt;
&lt;li&gt;Where was it going?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When something unexpected happens in your Kubernetes cluster, the logs generated by the audit will guide you in getting to the root cause.&lt;/p&gt;

&lt;p&gt;Kubernetes does not enable auditing by default; you must enable it yourself. Whether to do so is up to you and your team, but for the security of your Kubernetes cluster, &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="noopener noreferrer"&gt;enabling auditing&lt;/a&gt; is recommended.&lt;/p&gt;
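&lt;p&gt;Auditing is configured with a policy file passed to the API server via the &lt;code&gt;--audit-policy-file&lt;/code&gt; flag. As a sketch (the specific rules below are illustrative, not a recommended baseline), a policy might look like:&lt;/p&gt;

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record who read or changed Secrets, but only at the metadata level
  # so that secret values never end up in the audit log.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Record full request bodies for writes to Deployments.
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: "apps"
        resources: ["deployments"]
  # Ignore everything else.
  - level: None
```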

&lt;h2&gt;
  
  
  Using Loft for Access Control
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;Loft&lt;/a&gt; is a platform built on top of Kubernetes that adds multitenancy and self-service capabilities, enabling you to control and manage access in your Kubernetes cluster. As previously noted, you can integrate Loft into your cluster to handle authentication and access control. Loft also integrates with &lt;a href="https://loft.sh/docs/getting-started/single-sign-on" rel="noopener noreferrer"&gt;many single sign-on (SSO) providers&lt;/a&gt; that you can use with your cluster.&lt;/p&gt;

&lt;p&gt;In Kubernetes, non-admin users don’t have the privileges to list, create, or delete namespaces in a shared cluster. However, Loft offers a feature called &lt;a href="https://loft.sh/docs/spaces/spaces" rel="noopener noreferrer"&gt;spaces&lt;/a&gt;, a virtual abstraction of a Kubernetes namespace. Once a space is created, a corresponding namespace is created, and if the space is deleted, the namespace is deleted as well.&lt;/p&gt;

&lt;p&gt;Loft also offers an auditing section similar to Kubernetes auditing, which records all operations and actions performed by users and applications using the Loft API in your Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;p&gt;Remember that when configuring access in your Kubernetes cluster, there are some best practices you should follow: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Follow the principle of least privilege&lt;/li&gt;
&lt;li&gt;Enable auditing in your cluster&lt;/li&gt;
&lt;li&gt;Routinely check roles and permissions assigned to users&lt;/li&gt;
&lt;li&gt;Remove users that are no longer relevant in your cluster&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As you’ve learned, you have multiple options for granting and managing user access in your Kubernetes cluster, whether your cluster is self-managed or managed by a cloud provider. It’s important that you provide the right level of access to your different users and revoke that access when necessary. This way, you can ensure that your cluster is safe from misuse as you scale up your Kubernetes workflow.&lt;/p&gt;

&lt;p&gt;If you need a third-party solution for managing access control and self-service in your Kubernetes cluster, consider &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;Loft&lt;/a&gt;. It integrates well with cloud-native tools and can be used with kubectl or GitOps. Loft is easy to implement and offers several cost optimization features. You can &lt;a href="https://loft.sh/demo/request" rel="noopener noreferrer"&gt;request a demo&lt;/a&gt; to see what Loft can do for you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@mattseymour?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Matt Seymour&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/walls?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes Cost Monitoring with Prometheus &amp; Grafana</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Wed, 06 Oct 2021 18:37:07 +0000</pubDate>
      <link>https://dev.to/richburroughs/kubernetes-crds-custom-resource-definitions-4e76</link>
      <guid>https://dev.to/richburroughs/kubernetes-crds-custom-resource-definitions-4e76</guid>
      <description>&lt;p&gt;by Cameron Pavey&lt;/p&gt;

&lt;p&gt;Regardless of the infrastructure you are running, it is always important to keep an eye on your costs. There have been enough horror stories of cloud billing getting out of control that teams should have measures in place to monitor the usage of these resources and avoid surprises. Beyond this, tracking the usage and cost of your infrastructure has other benefits: the extra data can inform later decisions about upgrading or scaling.&lt;/p&gt;

&lt;p&gt;In this article, you will learn how to set up Grafana and Prometheus to monitor your Kubernetes cluster. Like Kubernetes, Prometheus is a graduate of the Cloud Native Computing Foundation. It is an open-source monitoring system that integrates with many other tools and systems to collect data. Grafana, another open-source project, acts as a dashboard and visualizer for various data sources, including Prometheus, for which it boasts first-class support. With these two tools, you should be able to glean some helpful insights about the usage of your cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Prometheus Operator
&lt;/h2&gt;

&lt;p&gt;While it is possible to install and manage Prometheus and Grafana independently as standalone applications and connect them after the fact, there is quite a lot of configuration boilerplate involved, which can all be abstracted away by using Prometheus Operator. Specifically, for this guide, you can use &lt;code&gt;kube-prometheus-stack&lt;/code&gt;, a Helm chart that handles setting up Prometheus Operator, as well as Grafana. This Helm chart will give you a functional monitoring stack with minimal configuration required, making it an excellent way to experiment with these tools. It is also suitable for setting them up for a long-term deployment if you do not have any existing components for your monitoring stack.&lt;/p&gt;

&lt;p&gt;Before getting started, make sure you have the prerequisites installed. If you are using &lt;code&gt;kube-prometheus-stack&lt;/code&gt;, all you will need is a Kubernetes cluster and &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Helm 3&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It’s recommended to use a non-default namespace for these resources to make things easier to manage further down the line. You can create a new namespace like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The chart you are going to install exists within the &lt;code&gt;prometheus-community&lt;/code&gt; repo, so you’ll need to add that before you can install the chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, to see what charts are available in this newly added repo, you can use the &lt;code&gt;search&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm search repo prometheus-community
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will give some output like so:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvisus3khwkhwaav3s1wg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvisus3khwkhwaav3s1wg.png" alt="Prometheus Community repo results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The one that you will want to install is &lt;code&gt;kube-prometheus-stack&lt;/code&gt;. You can install that in the namespace you just created with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus-stack &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring prometheus-community/kube-prometheus-stack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will run for a bit while it sets up all the required resources, but once the command completes, you can verify everything is present by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pod &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should produce some output like so:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2tk88kg13jucy0rtg8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2tk88kg13jucy0rtg8g.png" alt="Prometheus Pods running"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you have a monitoring stack running on your cluster. As you can see, there are multiple components required for everything to work, and it would take quite a bit of manual configuration to get to this same point without using the Helm chart. From this output, the two Pods you will want to take a look at are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;prometheus-stack-grafana-*&lt;/code&gt; - the Grafana dashboard&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prometheus-prometheus-kube-prometheus-prometheus-0&lt;/code&gt; - the Prometheus instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, there is no ingress configured for these Pods, but to get going quickly, you can port-forward traffic from the machine you are running &lt;code&gt;kubectl&lt;/code&gt; on, allowing you to access the Pods. For Grafana, you will need to port-forward port 3000, and for Prometheus, it will be 9090. You can do this with the following commands (which are long-lived, so you will need to run them in separate terminal panes/windows):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward &amp;lt;prometheus-pod&amp;gt; 9090 &lt;span class="nt"&gt;--address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0
kubectl port-forward &amp;lt;grafana-pod&amp;gt; 3000 &lt;span class="nt"&gt;--address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With these two commands running, you can access Prometheus and Grafana by navigating to the associated port on the kubectl client machine. This is fine for exploratory purposes, but it would be a good idea to use proper ingresses to expose these services for a long-term deployment.&lt;/p&gt;

&lt;p&gt;You should see the Prometheus web interface if you navigate to port 9090 on your kubectl client machine. From the nav menu at the top, select &lt;strong&gt;Status &amp;gt; TSDB Status&lt;/strong&gt;. This page shows an overview of the data stored in Prometheus’ DB. It is a good way to quickly check that things are working as expected: if Prometheus is misconfigured, there will likely be no data here. If it is working, it will quickly be populated with quite a lot of data, as you can see here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohoqcry9im6h5jxfxkx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohoqcry9im6h5jxfxkx3.png" alt="TSDB status"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, navigate to port 3000, and your browser should present you with the Grafana login screen. When installed as described above, the default username and password should be “admin” and “prom-operator”.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kube-prometheus-stack&lt;/code&gt; includes several pre-configured dashboards in its Grafana instance, which is a nice bonus, as you can see quite a lot of detailed information about your cluster straight out of the box. Once you are logged in, from the left-hand menu, select &lt;strong&gt;Dashboards &amp;gt; Manage&lt;/strong&gt; to see the preconfigured dashboards.&lt;/p&gt;

&lt;p&gt;Now that your Grafana instance has access to your cluster’s metrics, you can use this data to gather insights about your Kubernetes usage, and subsequently, its cost. None of the preconfigured dashboards are specifically about cost, but you can create your own with some effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Your Cost
&lt;/h2&gt;

&lt;p&gt;This step will likely take some trial and error to find the right combination of metrics to track and the right formula to get the insights you are after. To get some quick and easy results, you can start by aggregating basic resource usage metrics for CPU, RAM, and storage. These values can then be multiplied by your hourly rate for the resource in question. For example, if you are using Google Cloud Platform’s GKE service, pricing information can be &lt;a href="https://cloud.google.com/kubernetes-engine/pricing" rel="noopener noreferrer"&gt;found here&lt;/a&gt;.&lt;/p&gt;
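&lt;p&gt;As a sanity check on the arithmetic, here is a small shell sketch that converts aggregated CPU-seconds into a dollar figure. The $0.0445/vCPU-hour rate is only an illustrative number, not a quoted GCP price.&lt;/p&gt;

```shell
# Convert aggregated CPU-seconds (e.g. the value of
# sum(container_cpu_usage_seconds_total)) into dollars at an assumed hourly rate.
cpu_seconds=360000           # illustrative value: 100 CPU-hours
rate_per_cpu_hour=0.0445     # illustrative rate; substitute your provider's price
awk -v s="$cpu_seconds" -v r="$rate_per_cpu_hour" \
  'BEGIN { printf "%.2f\n", s / 3600 * r }'   # prints 4.45
```

The PromQL panel shown later in this section performs the same conversion directly in the query.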

&lt;p&gt;The preconfigured dashboards that come with &lt;code&gt;kube-prometheus-stack&lt;/code&gt; are worth looking at because they will give you some context for getting meaningful insights out of your data. For example, by looking at the included “Kubernetes / Compute Resources / Cluster” dashboard, you can see how CPU usage is calculated using a metric called &lt;code&gt;container_cpu_usage_seconds_total&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create a new dashboard by selecting the &lt;strong&gt;Create &amp;gt; Dashboard&lt;/strong&gt; option in the left-hand menu. Next, you can add a new panel and enter the following PromQL in the provided field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(sum(container_cpu_usage_seconds_total) / 60 / 60 ) * 0.0445
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should aggregate the CPU usage of your containers and convert it to hours, which can then be multiplied by the CPU/hr cost from GCP to get something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow1bz323q5vutjkd829q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow1bz323q5vutjkd829q.png" alt="CPU usage chart"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because this is a new cluster with a minimal workload, the cost is still low, but you can see that it increases at a steady rate. Using this approach, you can experiment with the data available to you and build a collection of charts to monitor your cluster costs.&lt;/p&gt;
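&lt;p&gt;You can extend the same idea to memory with a query along these lines. The $/GB-hour rate is a placeholder, and note that unlike the cumulative CPU counter, &lt;code&gt;container_memory_working_set_bytes&lt;/code&gt; is an instantaneous gauge, so this yields an hourly run-rate rather than a running total.&lt;/p&gt;

```promql
# Working-set memory in GB, multiplied by an assumed (placeholder) $/GB-hour rate
(sum(container_memory_working_set_bytes) / 1024 / 1024 / 1024) * 0.0049
```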

&lt;p&gt;This approach has some benefits and limitations. On the plus side, all the tools required are free and easy enough to set up. You retain complete control of your data and can extend it however you like to create your ideal cost-monitoring system. The obvious drawback is that a fair bit of manual effort is involved (and even more so if you opt to set up Prometheus and Grafana manually instead of with the Helm chart). Furthermore, the quality of the insights you get will be directly linked to how much time and effort you invest in configuring and optimizing your dashboards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternative Options
&lt;/h2&gt;

&lt;p&gt;There are some alternatives and additions which you should consider when setting all this up. Just as the Helm chart saves a lot of effort when setting up Prometheus, there are some pre-built solutions for monitoring resource usage costs with these tools. One excellent option is &lt;a href="https://medium.com/kubecost/effectively-managing-kubernetes-with-cost-monitoring-96b54464e419" rel="noopener noreferrer"&gt;Kubecost&lt;/a&gt;, which offers Grafana dashboards built specifically for this purpose. The dashboards come preconfigured with an opinionated setup for monitoring cluster cost with GCP, so some tweaking may be required to get it working with your specific setup, but it is certainly worth taking a look at to see how they set things up.&lt;/p&gt;

&lt;p&gt;When it comes to managing your resource usage cost, monitoring is only half the puzzle. To actually get some benefit out of it, you need to have an actionable plan based on your data. This is where services like &lt;a href="https://loft.sh" rel="noopener noreferrer"&gt;Loft.sh&lt;/a&gt; come in. Loft’s Kubernetes platform has features to help manage your resource costs. Of particular note is &lt;a href="https://loft.sh/docs/self-service/sleep-mode" rel="noopener noreferrer"&gt;sleep mode&lt;/a&gt;, which can scale down your non-prod virtual clusters when not in use to save resources and, therefore, money.&lt;/p&gt;

&lt;p&gt;There are countless options available for setting up cost monitoring for your cluster. There is generally something for everyone, depending on how much time and effort you want to invest in your monitoring solution up-front. One of the nice things about software is that it is generally easy to change, so it very well could be worth your while to implement a low-effort solution with preconfigured tools like Prometheus Operator and Kubecost to determine if there is value there for you. From this evaluation, you will be in a better position to make long-term decisions about the direction you want to take for your cost monitoring and Kubernetes resources.&lt;/p&gt;

</description>
      <category>kubernetesinsights</category>
      <category>guides</category>
      <category>platformengineering</category>
      <category>usecases</category>
    </item>
    <item>
      <title>Kubernetes Network Policies: A Practitioner's Guide</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Thu, 09 Sep 2021 19:12:27 +0000</pubDate>
      <link>https://dev.to/loft/kubernetes-network-policies-a-practitioner-s-guide-1bfg</link>
      <guid>https://dev.to/loft/kubernetes-network-policies-a-practitioner-s-guide-1bfg</guid>
      <description>&lt;p&gt;by Levent Ogut&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn05b5ybq1wus8p2fyn2b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn05b5ybq1wus8p2fyn2b.jpg" alt="Picture of a set of stairs leading up through a series of Japanese gates" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Providing security for our infrastructure and applications is a continuous, never-ending process. This article will talk about security in Kubernetes clusters: traffic incoming to and outgoing from the cluster, and traffic within the cluster. Some organizations behave as if their own workloads could be malicious and design their security policies accordingly. In addition, in today's world, we all use third-party plugins, libraries, and pieces of code from external resources. While this increases productivity, it also brings many security concerns. Isolating the traffic to and from our applications to only what’s absolutely necessary is one of the best approaches there is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why We Need Network Policies
&lt;/h2&gt;

&lt;p&gt;It is of paramount importance to secure the traffic in our clusters. By default, all pods can talk to all pods with no restriction. The NetworkPolicy resource allows us to restrict the ingress and egress traffic to/from pods. For example, it provides the means to restrict the ingress traffic of a database pod to only backend pods, and to restrict the ingress traffic of a backend pod to the frontend application only. This way, only legitimate traffic is allowed to/from our applications: if our frontend pods can only connect to the backend, an attacker who compromises the frontend can’t directly access the database or any other pods.&lt;/p&gt;
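&lt;p&gt;As a concrete sketch of the frontend-to-backend example above (the policy name and labels are assumptions for illustration):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      component: backend         # assumed label on the backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          component: frontend    # assumed label on the frontend pods
```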

&lt;p&gt;The functionality of controlling traffic is typically achieved in networks by using firewalls (software or hardware). Here in Kubernetes that functionality is implemented by the network plugins and controlled by network policies. Note that network policies are not a replacement for firewalls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fk6iyacikkm5panp197.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fk6iyacikkm5panp197.png" alt="Network policy example" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements for Implementing Network Policies
&lt;/h2&gt;

&lt;p&gt;Kubernetes provides networking functionality by using network plugins. Unless you have a network plugin that can implement network policies, you will not be able to use this functionality. Please note that even if the API server accepts a network policy configuration, the policy will not be enforced unless a controller understands and implements it. Several network plugins support network policies and much more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network Plugins
&lt;/h2&gt;

&lt;p&gt;There are two types of network plugins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CNI&lt;/li&gt;
&lt;li&gt;Kubenet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CNI-type plugins follow the &lt;a href="https://github.com/containernetworking/cni" rel="noopener noreferrer"&gt;Container Network Interface&lt;/a&gt; spec and are used by the community to create plugins with advanced features. On the other hand, &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet" rel="noopener noreferrer"&gt;Kubenet&lt;/a&gt; utilizes the bridge and host-local CNI plugins and has only basic features.&lt;/p&gt;

&lt;p&gt;Several network plugins have been developed by various organizations, including but not limited to &lt;a href="https://www.tigera.io/project-calico/" rel="noopener noreferrer"&gt;Calico&lt;/a&gt;, &lt;a href="https://cilium.io/" rel="noopener noreferrer"&gt;Cilium&lt;/a&gt;, and &lt;a href="https://github.com/cloudnativelabs/kube-router" rel="noopener noreferrer"&gt;Kube-Router&lt;/a&gt;. A complete list can be found in the &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="noopener noreferrer"&gt;Cluster Networking&lt;/a&gt; documentation. These network plugins provide a network policy implementation and more, such as advanced monitoring, L7 filtering, and integration with cloud networks.&lt;/p&gt;

&lt;p&gt;While some network plugins use &lt;a href="https://www.netfilter.org/" rel="noopener noreferrer"&gt;Netfilter/iptables&lt;/a&gt; in their underlying infrastructure, others use &lt;a href="https://ebpf.io/" rel="noopener noreferrer"&gt;eBPF&lt;/a&gt; technology on the underlying data path. Netfilter/iptables is very mature and built into the kernel. On the other hand, eBPF allows you to change functionality on the fly without a kernel upgrade. Not being dependent on the kernel version has led some big players to use eBPF-based network plugins at very large scale.&lt;/p&gt;

&lt;p&gt;It is imperative to select the correct network plugin for your Kubernetes cluster(s). If you are using cloud providers for your Kubernetes setup (such as AWS, Azure, GCP), they might already have deployed a network plugin that supports network policies. Please check the cloud provider documentation for further details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing &amp;amp; Applying Network Policies
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Isolation
&lt;/h3&gt;

&lt;p&gt;In a Kubernetes cluster, by default, all pods are non-isolated, meaning all ingress and egress traffic is allowed. Once a network policy with a matching selector is applied, the pod becomes isolated and will reject all traffic that is not permitted by the aggregate of the applied network policies. The order of the policies is not important; their union is what gets applied.&lt;/p&gt;

&lt;h3&gt;
  
  
  Network Policy Resource Fields
&lt;/h3&gt;

&lt;p&gt;Fields to define when writing network policies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;podSelector&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;  &lt;code&gt;podSelector&lt;/code&gt; selects a group of pods using a LabelSelector. If empty, it would select all pods in the namespace, so beware when using it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;policyTypes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;  &lt;code&gt;policyTypes&lt;/code&gt; lists the types of rules that the network policy includes. Valid values are Ingress, Egress, or both.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#networkpolicyingressrule-v1-networking-k8s-io" rel="noopener noreferrer"&gt;ingress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;  &lt;code&gt;ingress&lt;/code&gt; defines the rules that will be applied to the ingress traffic of the selected pod(s). If it is empty, it matches all the ingress traffic. If it is absent, it doesn't affect ingress traffic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#networkpolicyegressrule-v1-networking-k8s-io" rel="noopener noreferrer"&gt;egress&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;  &lt;code&gt;egress&lt;/code&gt; defines the rules that will be applied to the egress traffic of the selected pod(s). If it is empty, it matches all the egress traffic. If it is absent, it doesn't affect egress traffic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Egress Rules
&lt;/h3&gt;

&lt;p&gt;An array of rules that would be applied to the traffic going out of the pod. It is defined with the following fields.&lt;/p&gt;

&lt;p&gt;Fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ports: an array of &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#networkpolicyport-v1-networking-k8s-io" rel="noopener noreferrer"&gt;NetworkPolicyPort&lt;/a&gt; (port, endport, protocol)&lt;/li&gt;
&lt;li&gt;to: an array of &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#networkpolicypeer-v1-networking-k8s-io" rel="noopener noreferrer"&gt;NetworkPolicyPeer&lt;/a&gt; (ipBlock, namespaceSelector, podSelector)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ingress Rules
&lt;/h3&gt;

&lt;p&gt;An array of rules that would be applied to the traffic coming into the pod. It is defined with the following fields.&lt;/p&gt;

&lt;p&gt;Fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;from: an array of &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#networkpolicypeer-v1-networking-k8s-io" rel="noopener noreferrer"&gt;NetworkPolicyPeer&lt;/a&gt; (ipBlock, namespaceSelector, podSelector)&lt;/li&gt;
&lt;li&gt;ports: an array of &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#networkpolicyport-v1-networking-k8s-io" rel="noopener noreferrer"&gt;NetworkPolicyPort&lt;/a&gt; (port, endport, protocol)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Walkthrough
&lt;/h2&gt;

&lt;p&gt;Let's do a walkthrough of a network policy defined as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;network-policy-walkthrough-db&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database&lt;/span&gt;
  &lt;span class="na"&gt;policyTypes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
  &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ipBlock&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.1.2/32&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;team&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dba&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This rule applies to all pods that have a label with the key &lt;code&gt;component&lt;/code&gt; and the value &lt;code&gt;database&lt;/code&gt; (component=database). The network policy affects only ingress traffic, as defined in &lt;code&gt;policyTypes&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The three ingress rule entries are evaluated with OR. Let's look at how Kubernetes interpreted the configuration using the &lt;code&gt;describe&lt;/code&gt; subcommand:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe networkpolicy network-policy-walkthrough-db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Name:         network-policy-walkthrough-db
Namespace:    default
Created on:   2021-08-30 18:06:48 +0200 CEST
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
Spec:
  PodSelector:     &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;database
  Allowing ingress traffic:
    To Port: 5432/TCP
    From:
      IPBlock:
        CIDR: 192.168.1.2/32
        Except: 
    From:
      NamespaceSelector: &lt;span class="nv"&gt;team&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dba
    From:
      PodSelector: &lt;span class="nv"&gt;component&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;backend
  Not affecting egress traffic
  Policy Types: Ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The host with IP "192.168.1.2", all pods in namespaces that have the &lt;code&gt;team&lt;/code&gt; label set to &lt;code&gt;dba&lt;/code&gt;, and all pods in the same namespace that have the label &lt;code&gt;component&lt;/code&gt; set to &lt;code&gt;backend&lt;/code&gt; are allowed to reach the database pods on port 5432.&lt;/p&gt;

&lt;h2&gt;
  
  
  Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Default Deny Ingress
&lt;/h3&gt;

&lt;p&gt;A deny-all ingress rule with an empty podSelector (selecting all pods in the namespace) is a good starting point for a fresh cluster. You can then explicitly allow the required traffic. Because the podSelector is empty, the policy will continue to match new pods as they are created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default-deny-ingress-policy&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;policyTypes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe networkpolicies default-deny-ingress-policy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name:         default-deny-ingress-policy
Namespace:    default
Created on:   2021-08-28 16:47:33 +0200 CEST
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
Spec:
  PodSelector:     &amp;lt;none&amp;gt; (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    &amp;lt;none&amp;gt; (Selected pods are isolated for ingress connectivity)
  Not affecting egress traffic
  Policy Types: Ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, Kubernetes interpreted our configuration as intended. All pods in the namespace are now isolated, no ingress traffic is allowed to the pods, and egress traffic is not affected.&lt;/p&gt;
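&lt;p&gt;A natural extension, also shown in the Kubernetes documentation, is a deny-all policy for both directions:&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

Keep in mind that denying all egress also blocks DNS lookups from the isolated pods, so in practice you will usually pair this with an explicit egress rule allowing traffic to your cluster's DNS service.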

&lt;h3&gt;
  
  
  Allow Access to a Group of Pods from Another Namespace
&lt;/h3&gt;

&lt;p&gt;In this example, we will look at a network policy that allows debugging pods to connect to the application pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NetworkPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;allow-debug&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
  &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;podSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
      &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;space&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring&lt;/span&gt;
  &lt;span class="na"&gt;policyTypes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that here we have a single &lt;code&gt;from&lt;/code&gt; entry in which the podSelector and namespaceSelector are combined, so both conditions must match. If we had put the namespaceSelector into its own entry, the meaning would change drastically, because separate entries are evaluated with OR.&lt;/p&gt;
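&lt;p&gt;For contrast, here is a sketch of the ingress section with the selectors split into two separate entries. In this form the entries are ORed: traffic would be allowed from debug pods in the policy's own namespace, or from any pod in a namespace labeled &lt;code&gt;space=monitoring&lt;/code&gt;.&lt;/p&gt;

```yaml
  # Two separate peers in the same "from" list are ORed, not ANDed
  ingress:
  - from:
    - podSelector:
        matchLabels:
          component: debug
    - namespaceSelector:
        matchLabels:
          space: monitoring
```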

&lt;p&gt;Let's check how Kubernetes interpreted the policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe networkpolicy allow-debug
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name:         allow-debug
Namespace:    default
Created on:   2021-08-30 22:36:48 +0200 CEST
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
Spec:
  PodSelector:     component=app
  Allowing ingress traffic:
    To Port: &amp;lt;any&amp;gt; (traffic allowed to all ports)
    From:
      NamespaceSelector: space=monitoring
      PodSelector: component=debug
  Not affecting egress traffic
  Policy Types: Ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we only allow ingress traffic from pods with a label &lt;code&gt;component&lt;/code&gt; set to &lt;code&gt;debug&lt;/code&gt; in the namespaces with the label &lt;code&gt;space&lt;/code&gt; set to &lt;code&gt;monitoring&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Network Policies
&lt;/h2&gt;

&lt;p&gt;Monitoring the network policies and their behavior is an essential part of the deployment. Kubernetes offers the &lt;code&gt;kubectl describe networkpolicy &amp;lt;NETWORK_POLICY_NAME&amp;gt;&lt;/code&gt; command to see how Kubernetes interpreted the network policy configuration. For detailed analysis, check out your network plugin's tools. Here we have a Kubernetes cluster with the Cilium network plugin; Cilium offers a CLI tool we can use to monitor the packets.&lt;/p&gt;

&lt;p&gt;Let's get the IP address of our pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-66b6c48dd5-frsv9   1/1     Running   0          24m   10.0.0.136   valhalla   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's get the endpoint id (in Cilium) of the pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;cilium endpoint list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])  IPv6   IPv4         STATUS    ENFORCEMENT        ENFORCEMENT                                                                                                                   
5          Enabled            Disabled          50873      k8s:app=nginx                       10.0.0.136   ready   
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will monitor all traffic that goes to and comes from the endpoint with id 5 using Cilium's &lt;code&gt;monitor&lt;/code&gt; subcommand.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Press Ctrl-C to quit

level=info msg="Initializing dissection cache..." subsys=monitor
Policy verdict log: flow 0xf9da54c5 local EP ID 5, remote ID 1, dst port 80, proto 6, ingress true, action allow, match L3-Only, 10.0.0.147:39772 -&amp;gt; 10.0.0.136:80 tcp SYN
-&amp;gt; endpoint 5 flow 0xf9da54c5 identity 1-&amp;gt;50873 state new ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -&amp;gt; 10.0.0.136:80 tcp SYN
-&amp;gt; stack flow 0xbbd5210b identity 50873-&amp;gt;1 state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.136:80 -&amp;gt; 10.0.0.147:39772 tcp SYN, ACK
-&amp;gt; endpoint 5 flow 0xf9da54c5 identity 1-&amp;gt;50873 state established ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -&amp;gt; 10.0.0.136:80 tcp ACK
-&amp;gt; endpoint 5 flow 0xf9da54c5 identity 1-&amp;gt;50873 state established ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -&amp;gt; 10.0.0.136:80 tcp ACK
-&amp;gt; stack flow 0xbbd5210b identity 50873-&amp;gt;1 state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.136:80 -&amp;gt; 10.0.0.147:39772 tcp ACK
-&amp;gt; endpoint 5 flow 0xf9da54c5 identity 1-&amp;gt;50873 state established ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -&amp;gt; 10.0.0.136:80 tcp ACK, FIN
-&amp;gt; stack flow 0xbbd5210b identity 50873-&amp;gt;1 state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.136:80 -&amp;gt; 10.0.0.147:39772 tcp ACK, FIN
-&amp;gt; endpoint 5 flow 0xf9da54c5 identity 1-&amp;gt;50873 state established ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -&amp;gt; 10.0.0.136:80 tcp ACK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the traffic destined for our NGINX pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Policy verdict log: flow 0xf9da54c5 local EP ID 5, remote ID 1, dst port 80, proto 6, ingress true, action allow, match L3-Only, 10.0.0.147:39772 -&amp;gt; 10.0.0.136:80 tcp SYN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;Policy verdict&lt;/code&gt; line shows the policy evaluation: the ingress connection to port 80 matched an L3-only rule and was allowed.&lt;/p&gt;
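&lt;p&gt;If you need to check verdicts in automation, a line like the one above can be parsed with a small script. This is a sketch based only on the sample output shown here; the exact log format may vary between Cilium versions:&lt;/p&gt;

```python
import re

# Verdict line copied from the cilium monitor output above
line = ("Policy verdict log: flow 0xf9da54c5 local EP ID 5, remote ID 1, "
        "dst port 80, proto 6, ingress true, action allow, match L3-Only, "
        "10.0.0.147:39772 -> 10.0.0.136:80 tcp SYN")

# Pull out the fields we care about
m = re.search(r"dst port (\d+), proto (\d+), ingress (\w+), action (\w+)", line)
port, proto, ingress, action = m.groups()
print(port, proto, ingress, action)  # 80 6 true allow
```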

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have explored why and how network policies are used within a Kubernetes cluster. Allowing only the required traffic is a security best practice, and Kubernetes lets us implement it through declarative network policy configuration. Because network policies select pods and namespaces by their labels, rules automatically apply to newly created resources that carry matching labels. &lt;/p&gt;
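&lt;p&gt;As a minimal illustration of this label-driven model (a generic example, not taken from the cluster above; names and labels are placeholders), a policy that admits only traffic from pods labeled app=frontend to pods labeled app=nginx on port 80 could look like this:&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx          # applies to existing and future pods with this label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only these pods may connect
    ports:
    - protocol: TCP
      port: 80
```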

&lt;p&gt;It is highly recommended to test network policies before applying them to a production cluster.&lt;/p&gt;

&lt;p&gt;Observing traffic sources, destinations, and flows is imperative; since the Kubernetes API does not expose traffic statistics, learning to use the monitoring and troubleshooting tools of your network plugin becomes very important.&lt;/p&gt;

&lt;p&gt;Folks at Cilium also developed a great &lt;a href="https://editor.cilium.io/" rel="noopener noreferrer"&gt;UI Network Policy editor&lt;/a&gt;; make sure to check it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/containernetworking/cni/blob/master/SPEC.md" rel="noopener noreferrer"&gt;CNI Spec&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="noopener noreferrer"&gt;Network Plugins&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#networkpolicy-v1-networking-k8s-io" rel="noopener noreferrer"&gt;Network Policy Spec&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="noopener noreferrer"&gt;Cluster Networking (Kubernetes Docs)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@marekpiwnicki?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Marek Piwnicki&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/gate?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Docker Compose Alternatives for Kubernetes: DevSpace</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Thu, 09 Sep 2021 19:00:00 +0000</pubDate>
      <link>https://dev.to/loft/docker-compose-alternatives-for-kubernetes-devspace-42ki</link>
      <guid>https://dev.to/loft/docker-compose-alternatives-for-kubernetes-devspace-42ki</guid>
      <description>&lt;p&gt;by Rich Burroughs&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faee7dk3a8f1rl6felob6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faee7dk3a8f1rl6felob6.png" alt="DevSpace Logo" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this series, we’re looking at alternatives to using Docker Compose for building apps that run in Kubernetes clusters. While Compose is a handy way to stand up apps locally, there are advantages to running your apps in a Kubernetes environment while you develop. Your environment will be closer to your production environment, and you can work with Kubernetes-specific objects and manifests.&lt;/p&gt;

&lt;p&gt;Previously in this series we’ve covered &lt;a href="https://loft.sh/blog/docker-compose-alternatives-for-kubernetes-tilt/" rel="noopener noreferrer"&gt;Tilt&lt;/a&gt; and &lt;a href="https://loft.sh/blog/docker-compose-alternatives-for-kubernetes-skaffold/" rel="noopener noreferrer"&gt;Skaffold&lt;/a&gt;. Next up in the series is &lt;a href="https://devspace.sh/" rel="noopener noreferrer"&gt;DevSpace&lt;/a&gt;, one of the other popular open source tools in the space. For transparency’s sake, I work for Loft Labs, which maintains DevSpace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;DevSpace is client-only, and it has a lot of use cases for developing in Kubernetes. The client can be used to develop in local and remote Kubernetes clusters, build containers, and integrate nicely into your CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qgjggx3i6xe7h40w7a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qgjggx3i6xe7h40w7a9.png" alt="Diagram of the DevSpace architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DevSpace is configured with the devspace.yaml file and it supports port forwarding to connect to apps running in your cluster, as well as reverse port forwarding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Since DevSpace is a single Go binary, installation is very simple. I’m on a Mac, so I installed it with Homebrew:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;devspace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are other options in the &lt;a href="https://devspace.sh/cli/docs/getting-started/installation" rel="noopener noreferrer"&gt;installation instructions&lt;/a&gt;, like using npm, yarn, or just downloading the correct binary to your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initializing Your Project
&lt;/h2&gt;

&lt;p&gt;To set up your project to run with DevSpace, use the init subcommand:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The DevSpace client will ask a few questions about deploying and building your project and set up a basic devspace.yaml file based on your answers.&lt;/p&gt;

&lt;p&gt;There are several options for deploying your project with DevSpace, like using an existing Helm chart, Kubernetes manifests, existing Kustomize configuration files, or using the &lt;a href="https://devspace.sh/component-chart/docs/introduction" rel="noopener noreferrer"&gt;Component Helm Chart&lt;/a&gt;. This DevSpace feature builds a Helm chart on the fly based on your devspace.yaml file. If you want to try the &lt;a href="https://devspace.sh/cli/docs/quickstart" rel="noopener noreferrer"&gt;DevSpace quickstart&lt;/a&gt;, using the Component Helm Chart is recommended.&lt;/p&gt;
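&lt;p&gt;For a rough idea of the result, a minimal devspace.yaml using the Component Helm Chart might look like the sketch below. The image name and port are placeholders, and the schema version shown was current for DevSpace v5; check it against your release:&lt;/p&gt;

```yaml
version: v1beta11
images:
  app:
    image: registry.example.com/my-app   # placeholder image
deployments:
- name: my-app
  helm:
    componentChart: true                 # build the Helm chart on the fly
    values:
      containers:
      - image: registry.example.com/my-app
      service:
        ports:
        - port: 3000                     # placeholder port
```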

&lt;p&gt;For building your containers, you can use an existing Dockerfile or a custom build process. You can optionally ship the container images that are built to a registry. However, one of the most powerful features of DevSpace is that you can skip image building entirely and instead use hot reloading, which refreshes your running containers without rebuilding the container image. We’ll explore this feature in more detail later on.&lt;/p&gt;

&lt;p&gt;After running &lt;code&gt;devspace init&lt;/code&gt;, you will have a devspace.yaml file to get started with, which you can further configure to suit your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Entering Development Mode
&lt;/h2&gt;

&lt;p&gt;To start developing with DevSpace, you first need to select the kube-context and namespace you will be working with. DevSpace makes this easy with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace use context
&lt;span class="nv"&gt;$ &lt;/span&gt;devspace use namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6ypwyqls4riu55be1t7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6ypwyqls4riu55be1t7.gif" alt="Animated GIF showing the output of those two devspace commands" width="1680" height="895"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those two commands are convenient for people who work with multiple clusters and namespaces. Setting the namespace with &lt;code&gt;devspace use namespace&lt;/code&gt; means you don’t have to pass it in with kubectl commands. But if you don’t want to use the interactive menu, you can pass in the context name and namespace as additional arguments to those commands (like &lt;code&gt;devspace use namespace [NAMESPACE]&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Once you’ve set your context and namespace, you enter development mode with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will get your project running in your cluster and set up any port forwarding that you’ve defined. At the end of the command, DevSpace will open a terminal similar to &lt;code&gt;kubectl exec&lt;/code&gt;, so you can start your application inside the container while still accessing it in your browser on localhost, thanks to the port forwarding running in the background.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hot Reloading For Containers
&lt;/h2&gt;

&lt;p&gt;One of the significant features that DevSpace offers is container hot reloading. This means that DevSpace can update your running app without building a new container image each time you make a change. When you save a change in your code using your local IDE, DevSpace will automatically sync the changed files to your container that’s already running and can even be configured to (rebuild and) restart your app inside the container to pick up the changes.&lt;/p&gt;
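&lt;p&gt;File sync for hot reloading is configured in the &lt;code&gt;dev&lt;/code&gt; section of devspace.yaml. The snippet below is a sketch; the paths are placeholders and field names may differ between DevSpace versions, so verify against the docs:&lt;/p&gt;

```yaml
dev:
  sync:
  - imageName: app            # match an image defined in the images section
    localSubPath: ./src       # local directory to watch for changes
    containerPath: /app/src   # destination inside the running container
    onUpload:
      restartContainer: true  # restart the app in place after files sync
```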

&lt;p&gt;Hot reloading can have a huge impact on developer productivity, as you can imagine. It provides faster feedback and improves cycle times, which also impacts developer satisfaction and happiness. You can try out hot reloading yourself using one of the &lt;a href="https://devspace.sh/cli/docs/quickstart" rel="noopener noreferrer"&gt;quickstart projects&lt;/a&gt;, or you can see an example in this short YouTube video.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/AkMWoYv8gWg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Developing Microservices with Kubernetes
&lt;/h2&gt;

&lt;p&gt;When you’re working on a microservice, questions about how to handle service dependencies usually come up. If your service depends on APIs provided by an upstream service, do you need to run that other service too? And how do you manage it? This question often comes up in testing, too, regarding which dependencies to mock and which to run.&lt;/p&gt;

&lt;p&gt;DevSpace makes it easy to set up multiple apps in different Git repositories that you’d like to run alongside your app. You can spin up the entire environment, including your dependencies, by running &lt;code&gt;devspace dev&lt;/code&gt;. Docker Compose can’t work with multiple Git repos, but with DevSpace you can set up a devspace.yaml in each Git repository and then &lt;a href="https://devspace.sh/cli/docs/configuration/dependencies/basics" rel="noopener noreferrer"&gt;add dependencies between the repositories&lt;/a&gt;, so DevSpace knows which services require each other when you start developing them. &lt;/p&gt;

&lt;p&gt;Defining a dependency from one repo to another one is pretty straightforward in devspace.yaml as shown in this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-server&lt;/span&gt; 
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;git&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/my-api-server&lt;/span&gt;
    &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stable&lt;/span&gt;
  &lt;span class="na"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Hooks and Custom Commands
&lt;/h2&gt;

&lt;p&gt;DevSpace offers a lot of extensibility, and two features that enable that are hooks and custom commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://devspace.sh/cli/docs/configuration/hooks/basics" rel="noopener noreferrer"&gt;Hooks&lt;/a&gt; are actions that you want to occur in your build and deploy pipeline. Things you can do with hooks include executing commands, either on the local machine or in a container, uploading and downloading files, printing container logs, and more. Here’s an example of defining hooks in your devspace.yaml file, from the docs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# Execute the hook in a golang shell (cross operating system compatible)&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;before&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;image&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;building"&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;before&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
&lt;span class="c1"&gt;# Execute the hook in a golang shell (cross operating system compatible)&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;echo Hello&lt;/span&gt;
    &lt;span class="s"&gt;echo World&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;before&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image-1,image-2&lt;/span&gt;
&lt;span class="c1"&gt;# Execute the hook directly on the system (echo binary must exist)&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo"&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;before&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;image&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;building"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;before&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image-1,image-2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ability to chain actions together with hooks in your build/deploy processes is very powerful.&lt;/p&gt;

&lt;p&gt;But what if you want to do other repeatable actions while developing your project? That’s where &lt;a href="https://devspace.sh/cli/docs/configuration/commands/basics" rel="noopener noreferrer"&gt;custom commands&lt;/a&gt; come in. You can create custom commands to do all kinds of tasks and then execute them at any time with the &lt;code&gt;devspace run&lt;/code&gt; command, like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace run &lt;span class="o"&gt;[&lt;/span&gt;COMMAND NAME]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And remember how we talked about working with dependencies that are in other Git repos? You can run the custom commands for those dependencies that you’ve pulled in, too. Let’s say you have a service with a database as a dependency; one custom command could reset the database to a set of test data, and another could execute a database migration, for example.&lt;/p&gt;
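&lt;p&gt;Custom commands live under the &lt;code&gt;commands&lt;/code&gt; section of devspace.yaml. The example below is hypothetical; the command name and the script it calls are made up for illustration:&lt;/p&gt;

```yaml
commands:
- name: reset-db   # hypothetical command name
  # hypothetical script path inside the database container
  command: "kubectl exec deploy/database -- /scripts/seed-test-data.sh"
```

&lt;p&gt;You would then invoke it with &lt;code&gt;devspace run reset-db&lt;/code&gt;.&lt;/p&gt;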

&lt;p&gt;Hooks and custom commands allow you to tailor your DevSpace setup to your specific projects and languages and define your dev workflow as code. This is powerful, as it creates a shared workflow for your team that’s self-documenting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Web UI For Kubernetes Development
&lt;/h2&gt;

&lt;p&gt;While many teams prefer to use the DevSpace CLI to do a lot of their work, a web UI is also available to get a quick overview of what is running inside the current namespace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7j47lfa3jcy7ysqziua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7j47lfa3jcy7ysqziua.png" alt="DevSpace UI" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the DevSpace UI, you can view the logs of your running containers, view your build and deployment configurations, and view the custom commands you’ve defined in devspace.yaml. There’s also a link to open a terminal to a running container. You can do that from the command line, too, by running &lt;code&gt;devspace enter&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;DevSpace is a powerful and very flexible tool. People use it to automate even very complicated dev workflows, and hooks and custom commands allow you to fit it to your needs.&lt;/p&gt;

&lt;p&gt;We talked earlier about the idea of encoding your dev workflows as code, and this is a compelling concept. Think about it as Infrastructure as Code but for your development workflow. Your devspace.yaml file becomes the source of truth for how you develop, and having your workflow defined in code makes onboarding new team members easier. It’s self-documenting.&lt;/p&gt;

&lt;p&gt;All of the tools that we’ve looked at in this series are open source and free, and they all offer different benefits. If you’re developing apps that run in Kubernetes clusters, it would be worth your while to look at the tools in this space and find the one that works best for your team.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>workflows</category>
    </item>
    <item>
      <title>Loft Feature Spotlight: Sleep Mode</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Thu, 26 Aug 2021 16:25:57 +0000</pubDate>
      <link>https://dev.to/loft/loft-feature-spotlight-sleep-mode-4o65</link>
      <guid>https://dev.to/loft/loft-feature-spotlight-sleep-mode-4o65</guid>
      <description>&lt;p&gt;It can be challenging to manage costs if your developers use Kubernetes clusters running in the cloud, whether they use shared clusters or have their own dedicated clusters. It’s difficult to keep track of what workloads are running where, and that gets even harder as you add clusters. Sure, you could rely on people to manually clean up after themselves, but we all know that nobody really likes to clean up. So you’re likely going to end up with a lot of idle containers and wasted resources. And besides increased costs and management headaches, wasting resources is &lt;a href="https://www.youtube.com/watch?v=j5jql3e6hTA" rel="noopener noreferrer"&gt;terrible for the environment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In many cases, you don’t need applications running in your clusters to be always available, especially when these applications run in dev clusters and engineers don't work 24 hours a day. Let’s do some quick math on this: Say that a typical engineer at your company works 40 hours a week. A week contains 168 hours. If the applications in their dev environment run all of the time, that leaves 168 - 40 = 128 hours a week where applications are up and ready to take traffic even though the engineer is not working. That equals roughly 76% (128 idle hours / 168 hours) of every week. And that’s assuming the engineer is actively using that app for 40 hours, which isn’t likely because they will also be in meetings, grabbing lunch, or working on non-coding tasks.&lt;/p&gt;
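&lt;p&gt;The arithmetic above is easy to verify:&lt;/p&gt;

```python
work_hours = 40           # hours an engineer actively works per week
week_hours = 24 * 7       # 168 hours in a week
idle_hours = week_hours - work_hours
idle_share = idle_hours / week_hours
print(idle_hours, round(idle_share * 100))  # 128 76
```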

&lt;p&gt;What if I told you that you could automatically suspend workloads in your clusters when they’re not being used? Or even delete unused namespaces?&lt;/p&gt;

&lt;p&gt;Sleep Mode is one of our favorite features in &lt;a href="https://loft.sh/" rel="noopener noreferrer"&gt;Loft&lt;/a&gt; because of the significant benefits that our customers get from it and because it's so easy to measure the financial impact. Sleep Mode can automatically scale down your apps when they’re not used to save on cloud resources and overall cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Sleep Mode Works
&lt;/h2&gt;

&lt;p&gt;Sleep mode works based on ReplicaSets. Let’s say, for example, that you have an NGINX Deployment that is set to run 5 replicas. When Loft detects that the namespace the app is in has been idle for a predefined amount of time, it will automatically scale the NGINX ReplicaSet down to 0 replicas, deleting all of the pods that belong to this NGINX Deployment. Loft remembers that there should be 5 replicas running. Once it detects activity in the namespace (e.g. a kubectl request such as &lt;code&gt;kubectl get pods&lt;/code&gt;), it will restore the previous number of 5 replicas, and Kubernetes will spin them back up again.&lt;/p&gt;

&lt;p&gt;Loft detects whether the namespace is idle by examining incoming API requests. The Loft API Gateway acts as a proxy for the Kubernetes API server, so it can see when a request is made for a particular namespace. As soon as it sees an API request coming in, like a kubectl command, it will fire the pods back up. That’s the case for any other API requests, like from Helm or other tooling you have in place that uses the kube-context for any of the clusters you connect to Loft.&lt;/p&gt;

&lt;p&gt;A developer could simply run &lt;code&gt;kubectl get pods -n namespace&lt;/code&gt; as soon as they sit down at their laptop, and within a few seconds, they’d have their dev environment back up. Since nothing has changed in their namespace besides the number of replicas running, they’ll quickly be back where they left off. And without wasting resources while they were away.&lt;/p&gt;

&lt;p&gt;Sleep mode works with both the classic Kubernetes namespaces and virtual clusters. &lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling Sleep Mode
&lt;/h2&gt;

&lt;p&gt;Let’s look at an example of enabling sleep mode for a user’s namespace. If you’d like to follow along and you’re not currently using Loft, you can run through &lt;a href="https://loft.sh/docs/quickstart" rel="noopener noreferrer"&gt;the first two steps of our quickstart&lt;/a&gt; to get Loft installed and running in your cluster.&lt;/p&gt;

&lt;p&gt;Add a user by clicking on the Users icon in the left navigation bar. Fill in the relevant information and hit the Create button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4gdav3c4x6eiocw5luu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4gdav3c4x6eiocw5luu.png" alt="Adding a user in Loft" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click on Clusters in the left navigation bar and then loft-cluster. Then click on the Accounts tab at the top of the screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbelcki21hel06642c9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbelcki21hel06642c9o.png" alt="Viewing the cluster accounts in Loft" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click on the user that you added. This is their Loft account in the Kubernetes cluster. The settings for sleep mode are under the Space Creation Settings. A space is a virtual representation of a self-service namespace that has additional functionality (like sleep mode).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshoivr5f7lmeje01n3rd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshoivr5f7lmeje01n3rd.png" alt="Space creation settings" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you can adjust the number of minutes before sleep mode kicks in for this user’s spaces. You can also set a time to delete inactive spaces. Auto-delete is a great way to make sure that unused resources inside your clusters get cleaned up automatically.&lt;/p&gt;

&lt;p&gt;You could use sleep mode and auto-delete in conjunction, like setting the user’s spaces to sleep after 60 minutes of inactivity and to be deleted after one month of inactivity. Or whatever combination of values works best for your users’ workflows. You can also define separate sleep mode and auto-delete timeout values for each individual namespace if needed.&lt;/p&gt;

&lt;p&gt;And if you’re using virtual clusters, the process is the same. Each virtual cluster in Loft has a corresponding Loft space with the same name. You simply apply the sleep mode or auto-delete settings you want for the virtual cluster to either that space or the account that owns it.&lt;/p&gt;

&lt;p&gt;You probably don’t want to edit every individual account to enable sleep mode. There are more options, like creating an &lt;a href="https://loft.sh/docs/auth/account-templates" rel="noopener noreferrer"&gt;account template&lt;/a&gt; that will enable sleep mode for all accounts created, or adding an annotation to accounts if you create them with YAML and kubectl. This is particularly useful if you use single sign-on for Loft and you want to auto-configure sleep mode and auto-delete for certain Active Directory or Okta user groups, for example.&lt;/p&gt;
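&lt;p&gt;With the YAML route, sleep mode is driven by annotations on the space object. The sketch below is an assumption based on Loft's sleep mode docs at the time of writing; verify the API group and annotation keys against your Loft version before using them:&lt;/p&gt;

```yaml
apiVersion: cluster.loft.sh/v1                # assumed API group for spaces
kind: Space
metadata:
  name: dev-alice                             # placeholder space name
  annotations:
    sleepmode.loft.sh/sleep-after: "3600"     # sleep after 1 hour idle (seconds)
    sleepmode.loft.sh/delete-after: "2592000" # delete after 30 days idle (seconds)
```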

&lt;p&gt;There’s a lot of flexibility with how you can handle sleep mode, and there are &lt;a href="https://loft.sh/docs/self-service/sleep-mode" rel="noopener noreferrer"&gt;more details in the docs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manually Triggering Sleep Mode
&lt;/h2&gt;

&lt;p&gt;Users in your Loft-managed clusters can also trigger sleep mode manually. Just click on Spaces in the left menu, and click the sleep icon above the space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hlaa7jywh1quf3b8qjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hlaa7jywh1quf3b8qjm.png" alt="Triggering sleep mode manually" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the user prefers the command line, they can also run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;loft sleep [SPACE_NAME]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Manually waking up spaces is just as easy. You can wake them up from the Spaces section of the web UI: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue34akvwv9p3d80apauq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue34akvwv9p3d80apauq.png" alt="Waking up a space" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we mentioned earlier, spaces will wake up automatically when they receive API requests, so you could also just run any kubectl command that touches that space, like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -n [SPACE_NAME]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or use the Loft CLI to explicitly wake up a namespace:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;loft wakeup [SPACE_NAME]&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As you can see, there are many possibilities with sleep mode and auto-delete. Sleep mode can suspend workloads that aren’t being used, and auto-delete can automatically clean up idle Kubernetes namespaces and virtual clusters. These features can help you eliminate waste in your infrastructure and reduce some of the headaches that come with managing a large number of tenants in shared Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;At Loft Labs, we believe that self-service Kubernetes is essential, both for developers who don’t want to wait for infrastructure to be provisioned as well as for platform engineers who have huge backlogs and would rather not be responsible for creating every new namespace. With Loft, developers can provision namespaces and virtual clusters when needed, and platform engineers can ensure guardrails are in place to reduce waste. In the end, we all just want to do our jobs the best way we can, and self-service infrastructure is a critical part of making that happen.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>PHP Laravel Development with Kubernetes using DevSpace - Developer Edition</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Fri, 20 Aug 2021 17:43:07 +0000</pubDate>
      <link>https://dev.to/loft/php-laravel-development-with-kubernetes-using-devspace-developer-edition-305a</link>
      <guid>https://dev.to/loft/php-laravel-development-with-kubernetes-using-devspace-developer-edition-305a</guid>
      <description>&lt;p&gt;by Levent Ogut&lt;/p&gt;

&lt;p&gt;Kubernetes is an excellent open-source container orchestration platform that brings automatic scaling, automatic recovery, observability, and many more features. Since it differs from traditional operations, it has changed development and deployment workflows as well, and debugging an application on Kubernetes can be a challenge. DevSpace is a tool that helps you develop, deploy, and troubleshoot applications, simple or complex. We will use a Laravel project to demonstrate its features; Laravel is a popular framework in the PHP community, with features like extensibility, inheritance, and reusability, along with extensive customization options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;We will look at ways to deploy a Laravel-based application into a Kubernetes cluster for development and production environments. We will develop our application while it is running in Kubernetes, as if we were developing locally, and we will be able to troubleshoot it in real time with ease.&lt;/p&gt;

&lt;p&gt;The desired setup uses four containers in three pods (the PHP-FPM and Nginx containers share a pod):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A PHP-FPM container, which processes all the PHP requests.&lt;/li&gt;
&lt;li&gt;An Nginx container, which serves static files and acts as a reverse proxy for PHP requests.&lt;/li&gt;
&lt;li&gt;A MySQL container, as the database.&lt;/li&gt;
&lt;li&gt;A Redis container, as the session and cache store.&lt;/li&gt;
&lt;/ul&gt;
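&lt;p&gt;Once everything is deployed (we do that below with &lt;code&gt;devspace dev&lt;/code&gt;), you can confirm this layout with kubectl. A sketch, assuming the &lt;code&gt;laravel&lt;/code&gt; namespace used later in this tutorial and the pod names shown in the container selector further down:&lt;/p&gt;

```shell
# List the pods in the tutorial's namespace; you should see three pods.
kubectl get pods -n laravel

# Show which containers run inside the app pod (pod name assumed from the
# selector output later in this article: app-0 holds both nginx and php).
kubectl get pod app-0 -n laravel -o jsonpath='{.spec.containers[*].name}'
```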

&lt;h2&gt;
  
  
  Introduction to DevSpace
&lt;/h2&gt;

&lt;p&gt;Continuous delivery is a challenge while developing on Kubernetes. Without a special tool, you need to build and deploy every time code or assets change. &lt;a href="https://devspace.sh/" rel="noopener noreferrer"&gt;DevSpace&lt;/a&gt; handles this seamlessly, either by synchronizing files and hot-reloading the container in question, or by automatically rebuilding and redeploying the image(s) required. DevSpace lets you develop in a Kubernetes cluster as if you were developing on your local machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feature Highlights
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agile development on local and remote Kubernetes clusters&lt;/strong&gt;. Run the entire continuous development and deployment pipeline, and deploy all components of your application with a single command.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Declarative configuration kept in source code&lt;/strong&gt;, in the devspace.yaml file. All of the development, deployment, and pre/post-deployment actions are defined in a single file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hot Reloading for faster feedback&lt;/strong&gt;. Instead of building and re-deploying artifacts, DevSpace allows you to use high-performance and bi-directional file synchronization. This allows changes to trigger a hot-reload on the deployed container. All of these features are highly configurable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt;. You can extend the functionality of DevSpace via the plugin system. Hooks and commands are also built-in constructs for expanding its functionality, and they are heavily used in CI/CD pipelines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy Clean Up&lt;/strong&gt;. You can delete the resources created via &lt;code&gt;devspace purge&lt;/code&gt; in a single step.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Only&lt;/strong&gt;. DevSpace doesn't require server- or cluster-side components. A single executable on a local machine is sufficient to develop, troubleshoot, and deploy.&lt;/li&gt;
&lt;/ul&gt;
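&lt;p&gt;To give a feel for the declarative configuration, here is a minimal sketch of what the file-synchronization portion of a &lt;code&gt;devspace.yaml&lt;/code&gt; can look like. The image name and paths are hypothetical, not the exact contents of this tutorial's config:&lt;/p&gt;

```yaml
# Minimal sketch of DevSpace file synchronization (hypothetical names/paths).
dev:
  sync:
    - imageName: app            # which container to sync with
      localSubPath: ./          # sync the whole project directory
      containerPath: /var/www/html
      excludePaths:
        - vendor/               # dependencies are installed in-container
        - node_modules/
```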

&lt;h2&gt;
  
  
  Requirements and Setting Up Development Environment
&lt;/h2&gt;

&lt;p&gt;The following tools should be installed on your local development machine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubectl, documentation &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Helm, documentation &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;DevSpace, installation instructions &lt;a href="https://devspace.sh/cli/docs/getting-started/installation" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
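&lt;p&gt;Before continuing, you can quickly confirm that all three tools are installed and on your PATH:&lt;/p&gt;

```shell
# Print client versions to verify the installations; any recent
# release of each tool should work for this tutorial.
kubectl version --client
helm version --short
devspace --version
```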

&lt;h2&gt;
  
  
  Developing with DevSpace
&lt;/h2&gt;

&lt;p&gt;First, let's start with the code. Clone the repository to your local development machine as follows. This code includes a vanilla Laravel installation, a Dockerfile, and a prepopulated devspace.yaml.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone git@github.com:loft-sh/devspace-php-laravel-nginx.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy .env.example to .env.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now open the .env file and modify where necessary, like adjusting port numbers if needed.&lt;/p&gt;

&lt;p&gt;After this, we can generate the Laravel APP_KEY variable via the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace run generate-key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
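&lt;p&gt;If you are curious what this produces: a Laravel &lt;code&gt;APP_KEY&lt;/code&gt; is simply the string &lt;code&gt;base64:&lt;/code&gt; followed by 32 random bytes, base64-encoded. You can generate an equivalent key with plain shell; this is a stand-in sketch, not the project's actual generate-key command (which runs inside a container):&lt;/p&gt;

```shell
# Generate a Laravel-style application key: "base64:" + 32 random bytes,
# base64-encoded (the format artisan key:generate uses for AES-256-CBC).
APP_KEY="base64:$(head -c 32 /dev/urandom | base64)"
echo "$APP_KEY"
```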



&lt;p&gt;Review the variables by running the &lt;code&gt;devspace list vars&lt;/code&gt; command, and set variables where necessary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace list vars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DevSpace will ask you a few questions regarding the image repository and other variables not defined in the .env file, and then show the output of the variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Variable              Value                                                
 APP_DEBUG             true                                                 
 APP_IMAGE             leventogut/php-laravel-nginx-devspace                
 APP_KEY               xxxxxxxxxxxxxxxx  
 ASSET_VOLUME_NAME     static-asset-volume                                  
 ASSET_VOLUME_SIZE     1Gi                                                  
 DB_DATABASE           laravel                                              
 DB_HOST               mysql                                                
 DB_MYSQL_VERSION      8.0.23                                               
 DB_PASSWORD           xxxxxxxxxxxxxxxx                                     
 DB_PORT               3306                                                 
 DB_ROOT_PASSWORD      xxxxxxxxxxxxxxxx                                     
 DB_USERNAME           laravel                                              
 NGINX_CONFIG_HASH     740941                                               
 NGINX_IMAGE_VERSION   1.9                                                  
 REDIS_PASSWORD        xxxxxxxxxxxxxxxx                                    
 REDIS_VERSION         6.0.12  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Running devspace dev
&lt;/h3&gt;

&lt;p&gt;DevSpace is context-aware; it follows your Kubernetes config to determine which cluster to deploy to. However, it is good practice to explicitly set the context and namespace to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace use context docker-desktop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info]   Your kube-context has been updated to 'docker-desktop'
         To revert this operation, run: devspace use context maple-staging

[done] √ Successfully set kube-context to 'docker-desktop'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace use namespace laravel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info]   The default namespace of your current kube-context 'docker-desktop' has been updated to 'laravel'
         To revert this operation, run: devspace use namespace 

[done] √ Successfully set default namespace to 'laravel'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have all the variables and the configs, we can start in-cluster development:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DevSpace will build the artifacts we have defined in the &lt;code&gt;devspace.yaml&lt;/code&gt;, deploy all components, and start log streaming from the configured containers. This might take a few minutes.&lt;/p&gt;

&lt;p&gt;In a few minutes, DevSpace will open a browser window showing a login screen. We previously installed the laravel/ui package to test MySQL and Redis. Simply register as a new user, and you will be redirected to the index page. The index page has several links, including a link to the ping/pong route we will use in a few steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4hb4shgy5o9eeaadbx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4hb4shgy5o9eeaadbx0.png" alt="Screenshot of the PHP app's UI" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow
&lt;/h3&gt;

&lt;p&gt;At this stage, we have deployed our application into the Kubernetes cluster, and DevSpace is watching any changes on the project directory.&lt;/p&gt;

&lt;p&gt;Now, having started DevSpace in development mode, we can change our code and see the immediate effect on our application that is running in the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Open the web.php file under the routes directory with your favorite editor, and paste in the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="nc"&gt;Route&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ping'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;"pong"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
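&lt;p&gt;DevSpace syncs this change into the running container, so the new route is live almost immediately. Assuming the port forwarding configured in this tutorial (local port 8080 to the Nginx container), you can exercise it from another terminal:&lt;/p&gt;

```shell
# Hit the new route through the forwarded port; it should respond with "pong".
curl http://localhost:8080/ping
```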



&lt;p&gt;Any asset added to the repo folder will also be synced to the container, according to the sync rules defined in devspace.yaml.&lt;/p&gt;

&lt;p&gt;At this stage, you can try adding controllers, routes, and dependencies, and observe how easy development becomes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Port Forwarding and Reverse Port Forwarding
&lt;/h3&gt;

&lt;p&gt;You can reach the application via port forwarding, which is defined in the &lt;code&gt;devspace.yaml&lt;/code&gt; file. In the current configuration, the Nginx container's port 80 is forwarded to local port 8080. The browser will be opened automatically after a successful deployment and start of the containers.&lt;/p&gt;

&lt;p&gt;You can configure reverse port forwarding as well, which is very useful for certain debugging tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Commands
&lt;/h3&gt;

&lt;p&gt;Our sample &lt;code&gt;devspace.yaml&lt;/code&gt; includes some Laravel- and MySQL-specific commands to ease the development workflow.&lt;/p&gt;

&lt;p&gt;You can run any artisan, composer, php, or npm command, and additionally drop into a MySQL shell with a single &lt;code&gt;mysql&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;You can list available commands via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace list commands
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Name           Command                                                            Description
 artisan        devspace enter -c php -- php artisan                               Entry point for artisan commands.
 composer       devspace enter -c php -- composer                                  Entry point for composer commands.
 php            devspace enter -c php -- php                                       Entry point for PHP commands.
 npm            devspace enter -c php -- npm                                       Entry point for NPM commands.
 generate-key   TMP_FILE=.devspace/app_key.tmp &amp;amp;&amp;amp; docker run --rm -v $PWD:/ap...   Generate APP_KEY.
 mysql          devspace enter -c mysql -- mysql -h'mysql' -P'3306' -u'larave...   Enter to MySQL shell.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try these commands to get familiar with them in your workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hooks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://devspace.sh/cli/docs/configuration/hooks/basics" rel="noopener noreferrer"&gt;Hooks&lt;/a&gt; are a quite valuable feature of DevSpace. With hooks, you can run commands before and after certain deployments.&lt;/p&gt;

&lt;p&gt;We have defined several hooks in the devspace.yaml file, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Changing the MySQL user and password.&lt;/li&gt;
&lt;li&gt;Running &lt;code&gt;npm run watch&lt;/code&gt; on the PHP container.&lt;/li&gt;
&lt;li&gt;Reloading Nginx to re-read the configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deploying to Production
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;devspace deploy&lt;/code&gt; command will deploy our application into the environment we define. The DevSpace configuration allows us to modify and alter our parameters based on profiles. This flexibility brings numerous configuration options for development, staging, and production environments; the configuration can hold different parameters for each environment. Generally speaking, it is good practice to create a production profile for deployment, which removes troubleshooting aids and sets parameters accordingly.&lt;/p&gt;

&lt;p&gt;Our prepared &lt;code&gt;devspace.yaml&lt;/code&gt; includes a production profile that removes the additions we made to ease development and troubleshooting.&lt;/p&gt;
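&lt;p&gt;To illustrate the idea, a profile in &lt;code&gt;devspace.yaml&lt;/code&gt; is a named set of patches applied on top of the base configuration. A hypothetical sketch, not this project's actual profile:&lt;/p&gt;

```yaml
# Hypothetical production profile: strip development-only configuration.
profiles:
  - name: production
    patches:
      - op: remove
        path: dev.sync          # no file synchronization in production
```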

&lt;p&gt;Deploy with production profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace deploy &lt;span class="nt"&gt;-p&lt;/span&gt; production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting with DevSpace
&lt;/h2&gt;

&lt;p&gt;Troubleshooting and debugging are pretty straightforward with DevSpace, which provides help with the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logging&lt;/li&gt;
&lt;li&gt;Entering into containers&lt;/li&gt;
&lt;li&gt;Running commands inside the containers&lt;/li&gt;
&lt;li&gt;Interactive mode&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Entering Into and Working with Containers
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;devspace enter&lt;/code&gt; command allows you to open a shell to any of the running containers by providing the container name, so you don't have to deal with the copy/paste of long pod names.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace enter &lt;span class="nt"&gt;-c&lt;/span&gt; php
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info]   Using namespace 'default'
[info]   Using kube context 'docker-desktop'
[info]   Opening shell to pod:container app-0:php
root@app-0:/var/www/html# 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a container is not specified, a selector will be displayed, and you can choose from the available containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;? Which pod do you want to open the terminal for?
  [Use arrows to move, type to filter]
&amp;gt; redis-master-0:redis
  app-0:nginx
  app-0:php
  mysql-0:mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;As you can execute any command in the container, running the tests you have for the application is a breeze.&lt;/p&gt;

&lt;p&gt;You can easily run &lt;code&gt;phpunit&lt;/code&gt; or &lt;code&gt;artisan test&lt;/code&gt; to run your tests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace run php ./vendor/bin/phpunit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info]   Using namespace 'default'
[info]   Using kube context 'docker-desktop'
[info]   Opening shell to pod:container app-0:php
PHPUnit 9.5.3 by Sebastian Bergmann and contributors.

.F                                                                  2 / 2 (100%)

Time: 00:00.122, Memory: 20.00 MB

There was 1 failure:

1) Tests\Feature\ExampleTest::testBasicTest
Expected status code 200 but received 302.
Failed asserting that 200 is identical to 302.

/var/www/html/vendor/laravel/framework/src/Illuminate/Testing/TestResponse.php:187
/var/www/html/tests/Feature/ExampleTest.php:19

FAILURES!
Tests: 2, Assertions: 2, Failures: 1.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's run &lt;code&gt;artisan test&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;devspace run artisan &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info]   Using namespace 'default'
[info]   Using kube context 'docker-desktop'
[info]   Opening shell to pod:container app-0:php

   PASS  Tests\Unit\ExampleTest
  ✓ basic test

   FAIL  Tests\Feature\ExampleTest
  ⨯ basic test

  ---

  • Tests\Feature\ExampleTest &amp;gt; basic test
  Expected status code 200 but received 302.
  Failed asserting that 200 is identical to 302.

  at tests/Feature/ExampleTest.php:19
     15▕     public function testBasicTest()
     16▕     {
     17▕         $response = $this-&amp;gt;get('/');
     18▕ 
  ➜  19▕         $response-&amp;gt;assertStatus(200);
     20▕     }
     21▕ }
     22▕ 


  Tests:  1 failed, 1 passed
  Time:   0.19s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CI/CD
&lt;/h2&gt;

&lt;p&gt;The DevSpace configuration can hold many profiles and can be used for different deployment options. It is common to see developers use DevSpace in their CI/CD pipelines as well. Deploying your application from a CI/CD pipeline is relatively straightforward, and the ability to choose from various profiles makes it a breeze to switch from development to staging and production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up
&lt;/h2&gt;

&lt;p&gt;You can easily clean up your environment with the &lt;code&gt;devspace purge&lt;/code&gt; command, which deletes all deployments. Please note that purge will not delete persistent storage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have seen DevSpace in action while developing, deploying, and troubleshooting. Once DevSpace is configured, it can encompass all of your deployment options, so it can be used for development and for deployment to any environment. The ability to change profiles, add new commands, and run hooks is a real advantage.&lt;/p&gt;

&lt;p&gt;The second part of this series will delve into how to configure DevSpace, and we will go over the many possible configuration options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://devspace.sh/cli/docs/introduction" rel="noopener noreferrer"&gt;DevSpace Docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@benofthenorth?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Ben&lt;/a&gt; on &lt;a href="https://unsplash.com/?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devspace</category>
      <category>php</category>
      <category>laravel</category>
    </item>
    <item>
      <title>Kubernetes Monitoring Dashboards - 5 Best Open-Source Tools</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Tue, 17 Aug 2021 16:40:28 +0000</pubDate>
      <link>https://dev.to/richburroughs/kubernetes-monitoring-dashboards-5-best-open-source-tools-3npl</link>
      <guid>https://dev.to/richburroughs/kubernetes-monitoring-dashboards-5-best-open-source-tools-3npl</guid>
      <description>&lt;p&gt;by Tyler Charbonneau&lt;/p&gt;

&lt;p&gt;Kubernetes now runs in &lt;a href="https://www.redhat.com/en/resources/kubernetes-adoption-security-market-trends-2021-overview" rel="noopener noreferrer"&gt;more than 70 percent&lt;/a&gt; of container environments. Monitoring has become a key way to extract as much information as possible during container runtime. This data is critical when troubleshooting issues. It’s also integral to optimizing performance, both proactively and reactively. &lt;/p&gt;

&lt;p&gt;However, Kubernetes presents a unique challenge on two fronts: setup and monitoring. To begin, it’s difficult to really nail your deployment in an organized, high-performing way. Common mistakes include incorrectly sizing your nodes, consolidating containers poorly, or failing to create namespaces properly. Making resource requests via a configuration file or &lt;code&gt;kubectl&lt;/code&gt; requires strong forethought. &lt;/p&gt;

&lt;p&gt;Consider this: &lt;a href="https://www.datadoghq.com/container-report/" rel="noopener noreferrer"&gt;roughly 49 percent of containers use under 30 percent of their requested CPU allocation&lt;/a&gt;, and 45 percent of containers use less than 30 percent of their allotted memory. Real-time monitoring can help prevent these problems. Idle resources are expensive and don’t provide any real benefit to your ecosystem. &lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics Tracking
&lt;/h2&gt;

&lt;p&gt;These missteps can require mitigation sooner or later. Accordingly, reliably tracking runtime metrics like latency, CPU utilization, and memory usage is often tricky. &lt;a href="https://www.vmware.com/topics/glossary/content/kubernetes-monitoring" rel="noopener noreferrer"&gt;Other important metrics include the following&lt;/a&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster state metrics—like pod health and availability&lt;/li&gt;
&lt;li&gt;Node status—including readiness status, CPU/memory/disk load, and network status&lt;/li&gt;
&lt;li&gt;Pod availability&lt;/li&gt;
&lt;li&gt;Disk utilization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kubernetes doesn’t always excel at displaying this data in a meaningful and readable way. It’s up to you—the DevOps professional—to piece together bespoke solutions. Designing a custom dashboard is difficult and time-consuming. Thankfully, a number of third-party vendors have created capable visualization tools for Kubernetes. Because these tools are open source, they also interface effectively with some adjacent technologies. &lt;/p&gt;

&lt;p&gt;Want to keep your Kubernetes deployment healthy and running? Follow along to learn more about these best open-source Kubernetes dashboards. &lt;/p&gt;

&lt;h2&gt;
  
  
  Evaluation Criteria
&lt;/h2&gt;

&lt;p&gt;How do you judge what makes a tool favorable? For analysis purposes, we’ll primarily dive into these assessment categories: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Metrics availability&lt;/li&gt;
&lt;li&gt;Usability&lt;/li&gt;
&lt;li&gt;Setup and maintenance requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These comparisons are more nuanced than just comparing hard numbers. It’s important to look at standout features and any features that distinguish one tool from the next. Each solution’s “secret sauce,” as it were, could be uniquely beneficial for your custom deployment. There are also many subjective ways to measure a tool’s worth. Quality of documentation, information presentation, and even graphical user interface (GUI) differences can shape tooling opinions. &lt;/p&gt;

&lt;p&gt;This guide focuses predominantly on objective aspects but will introduce other notable characteristics that might sway your decision. Balancing functional needs with personal preferences is critical. Here are some top picks on both the server and client sides:&lt;/p&gt;

&lt;h2&gt;
  
  
  Server-Side Tools
&lt;/h2&gt;

&lt;p&gt;Many teams might opt for server-side monitoring tools to capture Kubernetes data. Kubernetes natively captures resource utilization data and aggregates it in a database. This is known as the &lt;strong&gt;resource metrics pipeline&lt;/strong&gt;. &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#full-metrics-pipeline" rel="noopener noreferrer"&gt;Both the Horizontal Pod Autoscaler controller and the &lt;code&gt;kubectl top&lt;/code&gt; utility&lt;/a&gt; generate this data during usage—which Kubernetes temporarily collects via an in-memory metrics-server. This information is exposed through the &lt;code&gt;metrics.k8s.io&lt;/code&gt; API, which allows external services to tap into usage data. &lt;/p&gt;
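&lt;p&gt;You can see this pipeline directly from the command line: &lt;code&gt;kubectl top&lt;/code&gt; reads from the metrics-server, and the same API can be queried raw (both assume metrics-server is installed in your cluster):&lt;/p&gt;

```shell
# Summarized resource usage via the resource metrics pipeline.
kubectl top nodes
kubectl top pods --all-namespaces

# The same data, straight from the metrics.k8s.io API.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```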

&lt;p&gt;The metrics-server requests all resource metrics from discovered cluster nodes via &lt;code&gt;kubelet&lt;/code&gt;. Furthermore, &lt;code&gt;kubelet&lt;/code&gt; will dig deeper by translating pods into associated containers—ultimately exposing that information with the Resource Metrics API. &lt;/p&gt;

&lt;p&gt;Additionally, DevOps teams can leverage the full metrics pipeline to view more intricate data. This also taps into nodes via &lt;code&gt;kubelet&lt;/code&gt;, but either the &lt;code&gt;custom.metrics.k8s.io&lt;/code&gt; or &lt;code&gt;external.metrics.k8s.io&lt;/code&gt; APIs expose that information instead. Note that Kubernetes can react natively based on these gathered metrics to counteract problems. The onus isn’t entirely on the development side. &lt;/p&gt;

&lt;p&gt;Server-based metrics tracking has some significant advantages. This collection method is reliable due to its simplified collection approach; you don’t have to wrestle with thousands of nodes, pods, or containers individually. Because server tracking offloads the burden from your Kubernetes infrastructure, you’ll also see a performance improvement. It’s relatively easy to wrangle your data as required from the server once it’s up and running. &lt;/p&gt;

&lt;p&gt;However, no system is perfect, and that applies to server-side monitoring as well. These solutions can be harder to configure, as you have to install essential components within your system that can effectively transmit data elsewhere, and there can be more failure points across the system. Transitioning to server-side monitoring can also be costlier. The overall breadth of the server-side data you collect might be more limited—or at least be missing some crucial insights of interest. Server logs aren’t always human-readable, and logs might only be retained for a limited time unless they’re archived. &lt;/p&gt;

&lt;p&gt;That said, millions of users have come to love their server-side monitoring tools. Here are some of the top dogs in the category that have become household names: &lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Dashboard
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqou4x3u034at6gzvagyu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqou4x3u034at6gzvagyu.png" alt="Screenshot of the Kubernetes Dashboard"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Image courtesy of &lt;a href="https://medium.com/devsondevs/install-kubernetes-dashboard-part-iii-fdbc88eeb7a5" rel="noopener noreferrer"&gt;Chuka Ofili&lt;/a&gt;.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="noopener noreferrer"&gt;Kubernetes Dashboard&lt;/a&gt; runs natively in the web browser as a web app, offering detailed insight into your containerized environment. While some open-source solutions are read-only, the Dashboard allows you to deploy, troubleshoot, and actively manage system components. The tool excels at managing both system resources (Jobs, DaemonSets, Deployments) and applications. It’s essential to monitor both the system and the microservices that run atop it. &lt;/p&gt;

&lt;p&gt;You can also monitor the following via the Dashboard, all accessed easily via the left-hand sidebar:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Namespaces&lt;/li&gt;
&lt;li&gt;Nodes&lt;/li&gt;
&lt;li&gt;Persistent volumes&lt;/li&gt;
&lt;li&gt;Roles&lt;/li&gt;
&lt;li&gt;Storage classes&lt;/li&gt;
&lt;li&gt;Cron jobs&lt;/li&gt;
&lt;li&gt;Replica sets and replication controllers&lt;/li&gt;
&lt;li&gt;Stateful sets&lt;/li&gt;
&lt;li&gt;Discovery and load balancing parameters&lt;/li&gt;
&lt;li&gt;Configurations and storage setups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these categories has a detailed view with easy-to-read infographics. Because the Kubernetes Dashboard is web-based, you can access it anywhere at any time. This makes it a fantastic, platform-agnostic tool that functions admirably regardless of one’s operating system—or even cluster architecture. To deploy a containerized application, simply connect your YAML or JSON configuration file with the integrated setup wizard.&lt;/p&gt;

&lt;p&gt;On the logging side, the built-in Logs Viewer pulls records from containers belonging to single pods and displays them in a list format. That output is organized in chronological order for easier parsing. You can download these specific logs as needed with a single click. &lt;/p&gt;

&lt;p&gt;The Dashboard isn’t installed by default. Run the following command to deploy it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can access the Dashboard via &lt;code&gt;kubectl&lt;/code&gt; by running the &lt;code&gt;kubectl proxy&lt;/code&gt; command; only the machine where this command is executed can view the Dashboard UI. Additionally, your deployment will adhere to role-based access control (RBAC) standards, requiring only a Bearer token for a successful login. &lt;/p&gt;
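&lt;p&gt;Logging in with a Bearer token typically means creating a service account and binding it to a role. Here’s a hedged sketch: the &lt;code&gt;admin-user&lt;/code&gt; name is illustrative, and &lt;code&gt;cluster-admin&lt;/code&gt; grants full access, so you’d likely want a narrower role in production.&lt;/p&gt;

```yaml
# Illustrative service account for Dashboard login (scope down for production)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

&lt;p&gt;After applying a manifest like this, you can generate a token for the account with &lt;code&gt;kubectl -n kubernetes-dashboard create token admin-user&lt;/code&gt; on recent kubectl versions, then paste it into the Dashboard login screen.&lt;/p&gt;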

&lt;h3&gt;
  
  
  Skooner (Formerly K8dash)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z07yx07van68s45x5oq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z07yx07van68s45x5oq.png" alt="Screenshot of the Skooner UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having received a face-lift and new name, &lt;a href="https://github.com/skooner-k8s/skooner" rel="noopener noreferrer"&gt;Skooner&lt;/a&gt; continues to be a leading open-source tool for holistically monitoring Kubernetes. The developers behind the project tout the simplicity and real-time availability of their solution—no refreshes or manual polling are required to fetch system data as it’s collected. Additionally, the YAML provided within the tool’s resource repository allows you to start leveraging Skooner in just a minute’s time. &lt;/p&gt;

&lt;p&gt;There’s very little learning curve involved on the setup side. All you need is a running Kubernetes cluster, the recommended metrics-server installed, and an optional OpenID Connect configuration. Unlike the Kubernetes Dashboard, Skooner lets you log in using one of three methods: a service account token, OpenID Connect (OIDC), or NodePort. The first is the easiest while the last is the fastest, per the developers. Those favoring OpenID will naturally gravitate toward OIDC. Note that &lt;code&gt;kubectl proxy&lt;/code&gt; cannot be used to access this dashboard, as the Authorization header is stripped during execution. &lt;/p&gt;

&lt;p&gt;Here’s what you can visualize using Skooner: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Namespaces&lt;/li&gt;
&lt;li&gt;Nodes&lt;/li&gt;
&lt;li&gt;Pods&lt;/li&gt;
&lt;li&gt;Replica sets&lt;/li&gt;
&lt;li&gt;Deployments&lt;/li&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;RBAC configurations&lt;/li&gt;
&lt;li&gt;Workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skooner relies heavily on metrics-server to pull runtime metrics. Without this component installed, the platform’s functionality will suffer to a degree; utilization data, for instance, is much harder to retrieve without tapping into that pipeline. &lt;/p&gt;

&lt;p&gt;However, Skooner brings its mobile app to the table. It runs on most phones or tablets, allowing you to keep tabs on crucial metrics while you’re on the go. The solution is highly scalable—responding well to changes in Kubernetes system configurations and growth while continuing to grab relevant information reliably. &lt;/p&gt;

&lt;h3&gt;
  
  
  Prometheus UI
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5ae8oixfx0mh4yqnd0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5ae8oixfx0mh4yqnd0f.png" alt="Screenshot of the Prometheus UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Image courtesy of &lt;a href="https://medium.com/@chris_linguine/how-to-monitor-your-kubernetes-cluster-with-prometheus-and-grafana-2d5704187fc8" rel="noopener noreferrer"&gt;Christiaan Vermeulen&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Last but not least, &lt;a href="https://prometheus.io" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; is a wildly popular open-source tool hosted by the Cloud Native Computing Foundation (CNCF). As such, it enjoys significant backing and development support from the community at large. You’ll notice an immediate GUI difference between this and the other entrants on our list; Prometheus uses a darker palette and arranges its visualizations a little differently. The information is displayed more densely according to endpoint, host, and port. Prometheus helps monitor the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU utilization (including core counts)&lt;/li&gt;
&lt;li&gt;RAM usage (including total available)&lt;/li&gt;
&lt;li&gt;SWAP memory usage (from total)&lt;/li&gt;
&lt;li&gt;Root filesystem usage (from total)&lt;/li&gt;
&lt;li&gt;CPU system load (per interval average)&lt;/li&gt;
&lt;li&gt;Uptime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Prometheus stores data as time series, where each metric has a stream of time-stamped values. Each metric also has its own set of labeled dimensions. These series are typically stored, though Prometheus can also generate temporary derived series from PromQL queries. Visually, metrics are commonly displayed as graphs by linking Prometheus with Grafana, which pulls from an assigned data source. &lt;/p&gt;
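&lt;p&gt;To give a feel for that data model, here are a few illustrative PromQL queries; the metric names assume node_exporter is running and may differ in your setup:&lt;/p&gt;

```promql
# Per-core CPU usage rate over the last 5 minutes, excluding idle time
rate(node_cpu_seconds_total{mode!="idle"}[5m])

# Fraction of memory available on each node
node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes

# Root filesystem usage as a percentage
100 - (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} * 100)
```

&lt;p&gt;Each query returns one series per unique label combination (node, CPU, mode, and so on), which is what makes the model multidimensional.&lt;/p&gt;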

&lt;p&gt;Getting started with Prometheus requires you to install an exporter on each relevant Kubernetes node. This acts as a service endpoint that the Prometheus server scrapes periodically, feeding runtime data into its database and dashboards. The result is a multidimensional data model that you can explore using queries and key-value label pairings. &lt;/p&gt;
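&lt;p&gt;As a sketch of what that setup might look like, this &lt;code&gt;prometheus.yml&lt;/code&gt; fragment uses Prometheus’ built-in Kubernetes service discovery to find nodes to scrape; the job name is illustrative, and the relabeling would need adapting to your environment:&lt;/p&gt;

```yaml
# prometheus.yml fragment — discover and scrape cluster nodes
scrape_configs:
  - job_name: 'kubernetes-nodes'
    kubernetes_sd_configs:
      - role: node          # one target per node in the cluster
    relabel_configs:
      - action: labelmap    # copy node labels onto the scraped series
        regex: __meta_kubernetes_node_label_(.+)
```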

&lt;p&gt;You can &lt;a href="https://prometheus.io/docs/prometheus/latest/installation/" rel="noopener noreferrer"&gt;install Prometheus&lt;/a&gt; using a pre-compiled binary, Docker images (with volumes for persistent storage), or configuration management systems like Chef or Ansible. Prometheus is also good at self-monitoring through its included APIs. Finally, Prometheus’ massive community can provide help with this task or any other facet of the monitoring process. &lt;/p&gt;

&lt;h2&gt;
  
  
  Client-Only Tools
&lt;/h2&gt;

&lt;p&gt;As opposed to server-based monitoring, client-only tools are best for teams needing an easy solution without excessive configuration. They’re generally cheaper and have lower barriers to entry. Those running stateless applications in particular might benefit from client-side monitoring, as critical session or resource data isn’t typically stored on the server. Here are our picks: &lt;/p&gt;

&lt;h3&gt;
  
  
  Lens by Mirantis
&lt;/h3&gt;

&lt;p&gt;As an ops-focused monitoring tool, &lt;a href="https://www.mirantis.com/software/lens/" rel="noopener noreferrer"&gt;Lens&lt;/a&gt; is a popular Kubernetes integrated development environment (IDE) that acts as a multidisciplinary continuous integration/continuous delivery (CI/CD) platform. The service bundles a contextual terminal with Prometheus-derived statistics while ensuring that logs are easily viewable. Monitored clusters may be either local or external, and you can add a cluster into the mix by importing a kubeconfig file. &lt;/p&gt;

&lt;p&gt;Clusters and their tracked metrics are separated into working groups—useful for different teams or for maintaining segregation within complex deployments. You can summon real-time graphs within the Lens dashboard that are tailored to each namespace and resource. Thanks to included RBAC controls, you can define which users can access specific metrics for greater security. Lens remains open source and free to use. &lt;/p&gt;

&lt;h3&gt;
  
  
  Octant by VMware
&lt;/h3&gt;

&lt;p&gt;Like the Kubernetes Dashboard, &lt;a href="https://octant.dev" rel="noopener noreferrer"&gt;Octant&lt;/a&gt; is an open-source web interface for visualizing your clusters and applications. The solution supports multiple plugins via a core gRPC API, making Octant extensible and richly featured. Like other tools, it provides real-time updates on the health and performance of your cluster’s objects plus related objects. This detailed metrics tracking is meant to simplify the debugging process and highlight problems before they become threatening. Building off of &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;kustomize&lt;/code&gt;, Octant is a simple and reliable tool for managing the Kubernetes system as a whole.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We work in a fast-paced industry, and the tools available for Kubernetes can change quickly. No matter what, your choice of tools will chiefly depend on your needs and the unique deployment you’ve built with Kubernetes. You’ll want to consider your team’s experience and comfort level when choosing an approach—either server-based or client-based. Open-source tools for monitoring Kubernetes are thankfully robust and widespread across the marketplace. If you harness real-time information, keeping your Kubernetes deployment healthy and operative should be much simpler.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@wwarby?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;William Warby&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/measuring-tape?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Save Costs With Virtual Kubernetes Clusters</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Fri, 30 Jul 2021 17:21:31 +0000</pubDate>
      <link>https://dev.to/loft/save-costs-with-virtual-kubernetes-clusters-ehg</link>
      <guid>https://dev.to/loft/save-costs-with-virtual-kubernetes-clusters-ehg</guid>
      <description>&lt;p&gt;by Fabian Kramm&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/loft-sh/vcluster" rel="noopener noreferrer"&gt;Virtual Kubernetes clusters&lt;/a&gt; are fully functional Kubernetes clusters that run within another Kubernetes cluster. The difference between a regular Kubernetes namespace and a virtual cluster is that a virtual cluster has its own separate Kubernetes control plane and storage backend. Only a handful of core resources, such as pods and services, are actually shared among the virtual and host cluster. All other resources, such as CRDs, statefulsets, deployments, webhooks, jobs, etc., only exist in the pure virtual Kubernetes cluster. &lt;/p&gt;

&lt;p&gt;This provides much better isolation than a regular Kubernetes namespace and decreases the pressure on the host Kubernetes cluster, as API requests to the virtual Kubernetes cluster in most cases do not reach the host cluster at all. In addition, all resources created by the virtual cluster are tied to a single namespace in the host cluster, no matter which virtual cluster namespace you create them in.&lt;/p&gt;

&lt;p&gt;With version &lt;a href="https://github.com/loft-sh/vcluster/releases/tag/v0.3.0" rel="noopener noreferrer"&gt;v0.3.0&lt;/a&gt;, &lt;a href="https://vcluster.com" rel="noopener noreferrer"&gt;vcluster&lt;/a&gt;, an open source implementation of the virtual Kubernetes cluster pattern that builds upon the lightweight Kubernetes distribution &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;k3s&lt;/a&gt;, became a &lt;a href="https://www.cncf.io/certification/software-conformance/" rel="noopener noreferrer"&gt;certified Kubernetes distribution&lt;/a&gt; and is 100% Kubernetes API compatible. This makes virtual clusters even more interesting to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  How can Virtual Kubernetes clusters decrease costs?
&lt;/h2&gt;

&lt;p&gt;In essence, virtual Kubernetes clusters are a trade-off between namespaces and separate Kubernetes clusters. They are easier and cheaper to create than full-blown clusters, but they are not as well isolated as completely separate clusters, since they still interact with the host Kubernetes cluster and create the actual workloads in it. On the other hand, they provide much better isolation than namespaces. Virtual clusters use a completely separate control plane, and within a virtual cluster you have full cluster-wide access. Nonetheless, a single namespace is still cheaper and easier to create. &lt;/p&gt;

&lt;p&gt;The table below summarizes the differences between Namespaces, virtual Kubernetes clusters, and fully separate clusters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjras5vzimsf9fvzecozt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjras5vzimsf9fvzecozt.png" alt="Virtual clusters compared with namespaces and traditional clusters" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The important takeaway from this is that virtual Kubernetes clusters provide a new alternative to both namespaces and separate clusters. Virtual Kubernetes clusters provide an excellent opportunity to replace separate clusters and drastically reduce your infrastructure and management costs, especially in scenarios where you have at least basic trust in your tenants (say separate teams across your company, CI/CD pipelines, or even several trusted customers).&lt;/p&gt;

&lt;h2&gt;
  
  
  An Example Scenario
&lt;/h2&gt;

&lt;p&gt;Let's say you are a company that provides some sort of SaaS service, and you have around 100 developers distributed across 20 teams that implement different parts of the service. For each of those 20 teams, you provisioned separate Kubernetes clusters to test and develop the application, as this was the easiest and most flexible approach. Each team’s cluster has at least three nodes to guarantee availability and then automatically scales up and down based on usage.&lt;/p&gt;

&lt;p&gt;Your minimum infrastructure bill in Google Cloud might look like this over 12 months (according to the &lt;a href="https://cloud.google.com/products/calculator" rel="noopener noreferrer"&gt;Google Cloud Pricing Calculator&lt;/a&gt;):&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Node Cost:&lt;/strong&gt;&lt;br&gt;
20 Clusters * 3 Nodes (n1-standard-1) = 12 * 20 * $72.82 = $17,476.80&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GKE Management Cost:&lt;/strong&gt;&lt;br&gt;
20 Clusters (Zonal) = 12 * 20 * $71.60 = $17,184&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total Cost Per Year (Without Traffic etc.):&lt;/strong&gt;&lt;br&gt;
$17,476.80 + $17,184 = $34,660.80&lt;/p&gt;




&lt;p&gt;In total, you are looking at a minimum estimated raw node + management cost of about &lt;em&gt;$35,000&lt;/em&gt;. Obviously, you could still fine-tune certain aspects here, for example reducing the minimum node pool size or using preemptible nodes instead of regular nodes. &lt;/p&gt;

&lt;p&gt;The advantages of this setup are clear. Each team has its own separate Kubernetes cluster and endpoint to work with and can install cluster-wide resources and dependencies (such as a custom service-mesh, ingress controller, or monitoring solution). On the other hand, you'll also notice that the cost is quite high. Resource sharing across teams is rather difficult, and there is a huge overhead if certain clusters are not used at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Switching to virtual Kubernetes clusters
&lt;/h2&gt;

&lt;p&gt;This is a perfect example where virtual Kubernetes clusters could come in handy. Instead of 20 different GKE clusters, you would create a single GKE Kubernetes cluster and then deploy 20 virtual Kubernetes clusters within it. Now each team gets access to only a single virtual Kubernetes cluster endpoint that essentially maps to a single namespace in the underlying GKE cluster. &lt;/p&gt;
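&lt;p&gt;Assuming the open source vcluster CLI is installed, spinning up and connecting to one of those virtual clusters looks roughly like this; the names are illustrative, and flags may differ between versions:&lt;/p&gt;

```shell
# Create a virtual cluster for one team inside its own host namespace
vcluster create team-1 -n host-team-1

# Connect to it; this writes a kubeconfig file for the virtual cluster
vcluster connect team-1 -n host-team-1

# Use the virtual cluster like any other Kubernetes cluster
kubectl --kubeconfig ./kubeconfig.yaml get namespaces
```

&lt;p&gt;Repeating this once per team gives each of the 20 teams its own endpoint while everything still runs inside the single GKE cluster.&lt;/p&gt;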

&lt;p&gt;The really great part about this is that from the developers’ perspective, nothing has changed. Each team can still create all the cluster services they want within their own virtual cluster, such as deploy their own Istio service mesh, custom cert-manager version, Prometheus stack, Kafka operator, etc. without affecting the host cluster services. They can essentially use it the same way as they would have used the separate cluster before.&lt;/p&gt;

&lt;p&gt;Another benefit is that the setup is now much more resource-efficient. Since the virtual Kubernetes clusters and all of their workloads are also just simple pods in the host GKE cluster, you can leverage the full power of the Kubernetes scheduler. So, for example, if a team is on vacation or is not using the virtual Kubernetes cluster at all, there will be no pods scheduled in the host cluster consuming any resources. In general, this means the node resource utilization of the GKE cluster should now be much better than before.&lt;/p&gt;

&lt;p&gt;Another significant advantage with a single GKE cluster and multiple virtual clusters in it is that you as the infrastructure team can centralize certain services in the host cluster, such as a central ingress controller, service mesh, metrics, or logging solutions instead of installing it every time into all of the separate clusters. The virtual clusters will be able to consume those services, or if the teams prefer they can still add their own. Teams will also be able to access each other’s services if that is needed, which would be very difficult with completely separate clusters.&lt;/p&gt;

&lt;p&gt;Furthermore, you will save on the cloud provider Kubernetes management fees by sharing resources better. And vcluster is open source, so it’s self-managed and completely free. The new cost estimate would look a little bit more like this if you would reserve a node for each team and add three extra nodes as a high availability buffer:&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Node Cost:&lt;/strong&gt;&lt;br&gt;
1 Cluster * 23 Nodes (n1-standard-1) = 12 * $558.26 = $6,699.12&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GKE Management Cost:&lt;/strong&gt;&lt;br&gt;
1 Cluster (Zonal) = 12 * 1 * $71.60 = $859.20&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total Cost Per Year (Without Traffic etc.):&lt;/strong&gt;&lt;br&gt;
$6,699.12 + $859.20 = $7,558.32&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total Cost Savings:&lt;/strong&gt;&lt;br&gt;
$34,660.80 - $7,558.32 = $27,102.48 (78.2% savings)&lt;/p&gt;




&lt;p&gt;In this case, your minimum node &amp;amp; GKE management fee infrastructure bill would be cut down by 78.2%. This is a rather contrived example, but it shows the considerable potential of virtual clusters. For the teams using the virtual clusters, essentially nothing would change because they would still have access to a fully functional Kubernetes cluster where they could deploy their workloads and cluster services freely.&lt;/p&gt;
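&lt;p&gt;If you’d like to sanity-check the arithmetic, the snippet below recomputes both yearly totals and the savings percentage from the monthly figures quoted above:&lt;/p&gt;

```shell
# Recompute the yearly totals from the monthly GKE figures:
# $72.82 per 3-node cluster, $71.60 GKE management fee, $558.26 for 23 nodes.
old_total=$(awk 'BEGIN { printf "%.2f", 12*20*72.82 + 12*20*71.60 }')
new_total=$(awk 'BEGIN { printf "%.2f", 12*558.26 + 12*71.60 }')
savings=$(awk -v a="$old_total" -v b="$new_total" 'BEGIN { printf "%.2f", a - b }')
pct=$(awk -v a="$old_total" -v s="$savings" 'BEGIN { printf "%.1f", s / a * 100 }')
echo "old=$old_total new=$new_total savings=$savings pct=${pct}%"
# prints: old=34660.80 new=7558.32 savings=27102.48 pct=78.2%
```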

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://vcluster.com" rel="noopener noreferrer"&gt;Virtual Kubernetes clusters&lt;/a&gt; are a third option if you have to decide between namespaces or separate clusters. Virtual clusters will probably never completely replace the need for separate clusters. Still, they have significant advantages if your use case fits, as you can save significant infrastructure and management costs with them. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@kellysikkema?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Kelly Sikkema&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/calculator?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Let's Learn vcluster with Saiyam Pathak</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Tue, 13 Jul 2021 17:06:30 +0000</pubDate>
      <link>https://dev.to/loft/let-s-learn-vcluster-with-saiyam-pathak-307i</link>
      <guid>https://dev.to/loft/let-s-learn-vcluster-with-saiyam-pathak-307i</guid>
      <description>&lt;p&gt;Loft Labs CEO Lukas Gentele joined &lt;a href="https://twitter.com/SaiyamPathak" rel="noopener noreferrer"&gt;Saiyam Pathak&lt;/a&gt; on his stream to talk about &lt;a href="https://vcluster.com" rel="noopener noreferrer"&gt;vcluster&lt;/a&gt;. This video is another great way to get up to speed on vlcuster if you've heard about it and are curious. In the video, Lukas walks Saiyam through the different vcluster features, including the workflow, how to use it with Ingress, and storage considerations.&lt;/p&gt;

&lt;p&gt;Saiyam creates a ton of great content about Kubernetes. Check out the other videos &lt;a href="https://www.youtube.com/channel/UCi-1nnN0eC9nRleXdZA6ncg" rel="noopener noreferrer"&gt;on his YouTube&lt;/a&gt;. He's also hosting a show on &lt;a href="https://cloudnative.tv" rel="noopener noreferrer"&gt;Cloudnative.tv&lt;/a&gt; about Kubernetes certifications called Cert Magic, and he's &lt;a href="https://saiyampathak.gumroad.com/l/cksbook" rel="noopener noreferrer"&gt;written a book&lt;/a&gt; about the scenarios in the CKS exam.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/I4mztvnRCjs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Docker Compose vs Kubernetes Development Tools</title>
      <dc:creator>Rich Burroughs</dc:creator>
      <pubDate>Mon, 12 Jul 2021 22:32:46 +0000</pubDate>
      <link>https://dev.to/loft/docker-compose-vs-kubernetes-development-tools-3n1f</link>
      <guid>https://dev.to/loft/docker-compose-vs-kubernetes-development-tools-3n1f</guid>
      <description>&lt;p&gt;By Kasper Siig&lt;/p&gt;

&lt;p&gt;When getting started with Docker, many developers quickly turn to &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt; to run their applications. Compose offers many advantages, such as having your configuration stored as code, making it easy to maintain and expand upon. Unfortunately, although it &lt;em&gt;is&lt;/em&gt; possible to use Compose with Kubernetes, it's not the recommended approach.&lt;/p&gt;

&lt;p&gt;Devs will often bang their heads against the wall trying to make this scenario work when they start using Kubernetes, without knowing that there's a better way. After all, they have become used to Compose and have integrated it deeply into their workflow. It can be hard to let go.&lt;/p&gt;

&lt;p&gt;This article will go over why it's best to leave Compose out of Kubernetes, and give resources to help you with improving your workflow without it. You'll be introduced to tools that will provide you with the same advantages as you would have with Compose traditionally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Avoiding Docker Compose in Kubernetes
&lt;/h2&gt;

&lt;p&gt;So you've started to use Kubernetes with your project, and you're wondering what to do with all of your work in Docker Compose. Rather than having to abandon all of your work and start completely from scratch, it's possible to use a tool like &lt;a href="https://kompose.io/" rel="noopener noreferrer"&gt;Kompose&lt;/a&gt; to convert your &lt;code&gt;docker-compose.yml&lt;/code&gt; files into Kubernetes manifests. If you're already familiar with Compose, this can give you great insight into how things map onto Kubernetes, and act as a starting point for your research into manifests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf79kcit4hd0o43eykr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf79kcit4hd0o43eykr6.png" alt="The output of running the " width="542" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, when you start moving from the learning phase to the production phase, it's important to think about whether you want to keep Compose in your toolchain at all. Even though tools like Kompose exist to help bring Compose into a Kubernetes environment, it's still not considered best practice. Instead, you should consider switching over to using Kubernetes manifests.&lt;/p&gt;

&lt;p&gt;Using Compose in production can be fine initially, and if your goal is to get simple containers deployed, it's not a big deal. That being said, once your cluster starts maturing and your use cases become more complex, you will find that trying to define everything in a &lt;code&gt;docker-compose.yml&lt;/code&gt; is either tough or impossible.&lt;/p&gt;

&lt;p&gt;You'll likely get to a point where you're spending a significant amount of time developing and maintaining &lt;code&gt;docker-compose.yml&lt;/code&gt; files. So much so that it would have been easier to just start over with Kubernetes manifests. This is an important point to consider when deciding whether to use one tool over another. One may be easier initially but perhaps limits the possibilities in the future, or be harder to work with in complex scenarios. &lt;/p&gt;

&lt;h2&gt;
  
  
  Compose's Consequences and Risks
&lt;/h2&gt;

&lt;p&gt;By using Compose in Kubernetes, you are limited in functionality. While Compose is a robust tool with a rich feature set, there are many things it cannot do. Objects like CRDs, Jobs, and StatefulSets cannot be created with Compose. Networking is possible, but it can quickly become unwieldy to define it in a &lt;code&gt;docker-compose.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;There are some technical downsides to the continued use of Compose, but you will also have to consider the impact on your team, both current and future. Not many people are using Compose in production, so you'll likely struggle to find a new hire who's able to jump right in. There are also features of Compose that are not typically used, which you'll have to get familiar with to configure Kubernetes.&lt;/p&gt;

&lt;p&gt;If you manage to get your engineers to learn and use Compose efficiently, and you're fine with onboarding new people into the toolchain, you may still run into issues. Since not many teams use Compose in production, it can be tough to find guides and tutorials with examples. Many online resources will only include Kubernetes manifests as examples, and from here two things can happen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One option is that the engineer in the team will understand the tutorial and get everything defined in a &lt;code&gt;.yml&lt;/code&gt; file. This way, you'll continue to purely use Compose, but you'll have to carry the cost of engineering time spent converting the Kubernetes manifest. This also means that your engineers understand manifests well enough to convert them to another format, weakening the argument for using Compose. &lt;/li&gt;
&lt;li&gt;The other option is that the example manifest will be used as a Proof of Concept, but it will end up being used in production because of a deadline or other reasons. Now you have a mix of Compose files and Kubernetes manifests, which can quickly lead to confusion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will have a tough time integrating with other tools on the market since many tools exist to expand upon existing Kubernetes manifests. Some of these tools help in easing deployment, like Helm. Other tools like Skaffold work with your manifests to run your application in Kubernetes as you work on it. You might find workarounds that allow you to use these tools, but you won't find any official documentation on setting them up. You'll have to maintain these workarounds, and it creates more room for error.&lt;/p&gt;

&lt;p&gt;Finally, you run the risk of having different teams using different tools. Developers may want to use Compose as it's more user-friendly on the surface, and they mostly care about getting the application to run and making optimizations through the code. Ops may want to get deeper into the roots of Kubernetes in ways only possible when using native Kubernetes tools. Typically they care about optimizations in the infrastructure, like networking and load balancing. Using Kubernetes manifests won't guarantee different teams using the exact same tools, but they will have the same common ground.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Options
&lt;/h2&gt;

&lt;p&gt;As stated before, there are many tools available to help you work with Kubernetes. Many stick with Compose because it's easy to define containers, networking, and volumes, and that's a fair point. However, tools like &lt;a href="https://skaffold.dev/" rel="noopener noreferrer"&gt;Skaffold&lt;/a&gt;, &lt;a href="https://devspace.sh/" rel="noopener noreferrer"&gt;DevSpace&lt;/a&gt;, and &lt;a href="https://tilt.dev/" rel="noopener noreferrer"&gt;Tilt&lt;/a&gt; exist to make working on code that's meant to run in Kubernetes easier. These tools offer features such as watching your code, automatically building and deploying your application, and much more that's native to Kubernetes. &lt;/p&gt;

&lt;p&gt;These tools can help you transition from a Compose-based approach into something more akin to native Kubernetes. Their sole goal is to make life easier for developers while still using the basis of Kubernetes: manifests. Give them a try and see how they work for you, and whether you can find a way of getting them into your current toolchain. To get started, you can use Kompose as a way of converting your existing &lt;code&gt;docker-compose.yml&lt;/code&gt; files into Kubernetes manifests. From here, you can either deploy them and get familiar with the deployment process, or you can look into the generated files and try to understand them.&lt;/p&gt;

&lt;p&gt;Whatever tool you choose to go with, the most important thing is that you know why you're using it. Many best practices exist because they're what suits most organizations. However, there will always be outliers, and you may be one of them. You may be in a situation where it indeed does make the most sense to use Compose as your only tool, and that is perfectly acceptable.&lt;/p&gt;

&lt;p&gt;You just need to know why you've chosen to go with the tool that you have, making it possible to reevaluate down the line whether it's still right for you or if you should consider switching to best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;On the surface, it can seem challenging to learn a new tool, and Kubernetes is quite heavy for new users. However, transitioning from Compose to native Kubernetes isn't as complicated as it may seem, and as you've now seen, there are many tools available to assist you with this. Switching to manifests will help you in many ways. Whether you make the switch is up to you, but consider whether it's the right choice and what advantages it can bring to you.&lt;/p&gt;

&lt;p&gt;You can start by converting your &lt;code&gt;docker-compose.yml&lt;/code&gt; to Kubernetes manifests with Kompose. That way you'll be using an application and definition that you're already familiar with instead of starting from scratch with an application you don't know.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@campful?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Campbell&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/train?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
