<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hanif Jetha</title>
    <description>The latest articles on DEV Community by Hanif Jetha (@hjet).</description>
    <link>https://dev.to/hjet</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F109362%2F387af378-2420-45e5-aa84-b4dfc72fb973.jpeg</url>
      <title>DEV Community: Hanif Jetha</title>
      <link>https://dev.to/hjet</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hjet"/>
    <language>en</language>
    <item>
      <title>The DigitalOcean Community Monthly: Redis &amp; kubectl Cheat Sheets, Git for Writing, Hacktoberfest &amp; More</title>
      <dc:creator>Hanif Jetha</dc:creator>
      <pubDate>Tue, 01 Oct 2019 17:16:38 +0000</pubDate>
      <link>https://dev.to/digitalocean/the-digitalocean-community-monthly-redis-kubectl-cheat-sheets-git-for-writing-hacktoberfest-more-3h36</link>
      <guid>https://dev.to/digitalocean/the-digitalocean-community-monthly-redis-kubectl-cheat-sheets-git-for-writing-hacktoberfest-more-3h36</guid>
      <description>&lt;p&gt;Welcome to &lt;strong&gt;The DOCOM Monthly&lt;/strong&gt;, your monthly digest featuring some of the best content published &lt;em&gt;in&lt;/em&gt; and &lt;em&gt;around&lt;/em&gt; the &lt;a href="https://digitalocean.com/community"&gt;DigitalOcean Community&lt;/a&gt; last month. &lt;/p&gt;

&lt;p&gt;I'm Hanif Jetha, a DevOps Technical Writer on the Dev Education team. Today's roundup includes some helpful Redis and kubectl cheat sheets, a guide on using Git to manage writing projects, Hacktoberfest &amp;amp; more!&lt;/p&gt;

&lt;p&gt;Without further ado, here are this month's featured posts:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorial_series/how-to-manage-a-redis-database"&gt;How To Manage a Redis Database&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--heoHImnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/osis2lrkf19c1t782k8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--heoHImnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/osis2lrkf19c1t782k8o.png" alt="Redis Header Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This cheat sheet tutorial series by &lt;a href="https://dev.to/mdrakedo"&gt;Mark Drake&lt;/a&gt; serves as a handy reference for common Redis operations like connecting to a Redis database, managing different data types, and troubleshooting and debugging problems. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-git-to-manage-your-writing-project"&gt;How to Use Git to Manage Your Writing Project&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Version control isn’t just for code. You can also use it to manage any sort of writing project. In this tutorial you’ll learn how to use &lt;a href="https://git-scm.com/"&gt;Git&lt;/a&gt; to manage a small Markdown document. You’ll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you’re done, you’ll have a workflow you can apply to your own writing projects.&lt;/p&gt;
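
&lt;p&gt;As a rough sketch of that workflow (the commands and filenames here are illustrative, not taken from the tutorial), the loop looks something like this:&lt;/p&gt;

```shell
# Work in a scratch directory and set up a repository for the draft.
cd "$(mktemp -d)"
git init
git config user.email "sammy@example.com"   # placeholder identity for the example
git config user.name "Sammy"

# Store and commit the initial version.
echo "# My Article" > article.md
git add article.md
git commit -m "Add first draft"

# Make a change and view the difference before committing.
echo "A new opening paragraph." >> article.md
git diff article.md

# Commit the revision and review previous versions.
git commit -am "Revise opening"
git log --oneline article.md
```

&lt;p&gt;From there, &lt;code&gt;git show&lt;/code&gt; or &lt;code&gt;git checkout&lt;/code&gt; lets you read or restore any earlier version of the document.&lt;/p&gt;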

&lt;h3&gt;
  
  
  &lt;a href="https://www.digitalocean.com/community/cheatsheets/getting-started-with-kubectl-a-kubectl-cheat-sheet"&gt;Getting Started with kubectl: A kubectl Cheat Sheet&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Another cheat-sheet-style tutorial, this time for &lt;a href="https://kubernetes.io/docs/reference/kubectl/overview/"&gt;kubectl&lt;/a&gt;, the Kubernetes command-line interface. It covers common operations like configuring Contexts and Namespaces, rolling out Deployments, and fetching logs from running and terminated Pods. It also shows you how to install kubectl and set up shell autocompletion!&lt;/p&gt;

&lt;h2&gt;
  
  
  Featured Q&amp;amp;A
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/questions/how-can-i-kill-all-mysql-sleeping-queries"&gt;How can I kill all MySQL sleeping queries?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/questions/how-do-i-work-with-count-and-connection-host-using-terraform"&gt;How do I work with "count" and connection.host using Terraform?
&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/questions/running-jenkins-in-docker"&gt;Running Jenkins in Docker
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get involved on the &lt;a href="https://www.digitalocean.com/community/questions"&gt;Community Q&amp;amp;A&lt;/a&gt; - ask questions and help other members by providing answers!&lt;/p&gt;

&lt;h2&gt;
  
  
  DigitalOcean on the Web
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://towardsdatascience.com/@lymenlee"&gt;Michael Li&lt;/a&gt; teaches you how to Dockerize and deploy a machine learning web app in &lt;a href="https://towardsdatascience.com/how-to-deploy-your-machine-learning-web-app-to-digital-ocean-64bd19ce15e2"&gt;How to Deploy Your Machine Learning Web App to DigitalOcean&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;DigitalOcean’s very own &lt;a href="https://twitter.com/erikaheidi?lang=en"&gt;Erika Heidi&lt;/a&gt; demonstrates how to build a command-line PHP application in this multi-part series:&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/erikaheidi" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i6eXk8o4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--95Vsc3-S--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/162988/b604f249-a248-4582-80e3-4a781d054e3f.jpeg" alt="erikaheidi image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/erikaheidi/bootstrapping-a-cli-php-application-in-vanilla-php-4ee" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Bootstrapping a CLI PHP application in Vanilla PHP&lt;/h2&gt;
      &lt;h3&gt;Erika Heidi ・ Sep 20 '19 ・ 7 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#showdev&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#php&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cli&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#beginners&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;Learn how to integrate DigitalOcean Kubernetes with GitLab and deploy your apps to a DOKS cluster: &lt;a href="https://medium.com/@ju5t_/kubernetes-gitlab-and-digitalocean-d73a3dc14d11"&gt;Kubernetes, GitLab and DigitalOcean&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, DigitalOcean Developer Advocate &lt;a href="https://twitter.com/kamaln7"&gt;Kamal Nasser&lt;/a&gt; shows you how to offload Docker image builds to a remote server:&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/kamaln7" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dxTecfc_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--NbofFmkf--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/90310/4e4e1ed2-c344-4d70-8ef5-6188a94e6687.jpg" alt="kamaln7 image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/digitalocean/how-to-use-a-remote-docker-server-to-speed-up-your-workflow-35f0" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How to Use a Remote Docker Server to Speed Up Your Workflow&lt;/h2&gt;
      &lt;h3&gt;Kamal Nasser ・ Sep 13 '19 ・ 5 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Community IRL
&lt;/h2&gt;

&lt;p&gt;DigitalOcean’s &lt;a href="https://twitter.com/lisaironcutter"&gt;Lisa Tagliaferri&lt;/a&gt; will be at the &lt;a href="https://ghc.anitab.org/"&gt;Grace Hopper Celebration&lt;/a&gt; in Orlando next week, delivering a talk on building inclusive and diverse open source communities. If you’re attending Grace Hopper, be sure to check out her talk!&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;
      &lt;div class="ltag__twitter-tweet__media"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tROIL5fl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/media/EFaW6LRX4AM7pB4.jpg" alt="unknown tweet media content"&gt;
      &lt;/div&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--jLLyr_bI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1172266573534498816/ZdiGUzaT_normal.jpg" alt="Lisa Tagliaferri profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Lisa Tagliaferri
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        &lt;a class="comment-mentioned-user" href="https://dev.to/lisaironcutter"&gt;@lisaironcutter&lt;/a&gt;

      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      Thrilled to be speaking at &lt;a href="https://twitter.com/hashtag/GHC19"&gt;#GHC19&lt;/a&gt; next week 🎉&lt;br&gt;&lt;br&gt;I'll be sharing tips for maintainers and contributors on expanding participation and promoting inclusivity &amp;amp; diversity in open source 👩🏿‍💻👩🏻‍💻👩🏽‍💻👩🏼‍💻 
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      6:50 PM - 26 Sep 2019
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=1177294281507266560" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=1177294281507266560" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      6
      &lt;a href="https://twitter.com/intent/like?tweet_id=1177294281507266560" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      17
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;&lt;a href="https://twitter.com/eddiezane"&gt;Eddie Zaneski&lt;/a&gt;, a DigitalOcean Developer Relations manager, gave a talk at &lt;a href="https://about.gitlab.com/events/commit/"&gt;GitLab Commit&lt;/a&gt; on transforming a fresh Kubernetes cluster into a CI/CD bastion for GitLab. Check out his talk here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/-shvwiBwFVI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Eddie will also be in attendance for the &lt;a href="https://eventbrite.com/e/hacktoberfest-2019-official-kick-off-celebration-tickets-71109054095?aff=eac2"&gt;Hacktoberfest NYC Kickoff with DEV&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hacktoberfest
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p5MZlFDS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/kzb9m1d2ywr522vo25q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p5MZlFDS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/kzb9m1d2ywr522vo25q7.png" alt="Hacktoberfest Banner"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hacktoberfest.digitalocean.com/"&gt;Hacktoberfest&lt;/a&gt; is a month-long celebration of open source software (OSS) put together by DigitalOcean and DEV. Contribute four pull requests to open source projects on GitHub between October 1st and 31st, and receive a limited-edition Hacktoberfest T-shirt! You can register and learn more at the &lt;a href="https://hacktoberfest.digitalocean.com/"&gt;Official Hacktoberfest site&lt;/a&gt;, which also includes a list of &lt;a href="https://hacktoberfest.digitalocean.com/events"&gt;IRL Hacktoberfest events&lt;/a&gt; you can attend.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Happy hacking, and see you all next month!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>digitalocean</category>
      <category>news</category>
      <category>docom</category>
      <category>docommonthly</category>
    </item>
    <item>
      <title>How to Set Up a Prometheus, Grafana and Alertmanager Monitoring Stack on DigitalOcean Kubernetes</title>
      <dc:creator>Hanif Jetha</dc:creator>
      <pubDate>Mon, 29 Jul 2019 19:40:34 +0000</pubDate>
      <link>https://dev.to/digitalocean/how-to-set-up-a-prometheus-grafana-and-alertmanager-monitoring-stack-on-digitalocean-kubernetes-268j</link>
      <guid>https://dev.to/digitalocean/how-to-set-up-a-prometheus-grafana-and-alertmanager-monitoring-stack-on-digitalocean-kubernetes-268j</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Along with tracing and logging, monitoring and alerting are essential components of a Kubernetes observability stack. Setting up monitoring for your DigitalOcean Kubernetes cluster allows you to track your resource usage and analyze and debug application errors.&lt;/p&gt;

&lt;p&gt;A monitoring system usually consists of a time-series database that houses metric data and a visualization layer. In addition, an alerting layer creates and manages alerts, handing them off to integrations and external services as necessary. Finally, one or more components generate or expose the metric data that will be stored, visualized, and processed for alerts by the stack.&lt;/p&gt;

&lt;p&gt;One popular monitoring solution is the open-source &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;, &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;, and &lt;a href="https://github.com/prometheus/alertmanager" rel="noopener noreferrer"&gt;Alertmanager&lt;/a&gt; stack, deployed alongside &lt;a href="https://github.com/kubernetes/kube-state-metrics" rel="noopener noreferrer"&gt;kube-state-metrics&lt;/a&gt; and &lt;a href="https://github.com/prometheus/node_exporter" rel="noopener noreferrer"&gt;node_exporter&lt;/a&gt; to expose cluster-level Kubernetes object metrics as well as machine-level metrics like CPU and memory usage.&lt;/p&gt;

&lt;p&gt;Rolling out this monitoring stack on a Kubernetes cluster requires configuring individual components, manifests, Prometheus metrics, and Grafana dashboards, which can take some time. The &lt;a href="https://github.com/do-community/doks-monitoring" rel="noopener noreferrer"&gt;DigitalOcean Kubernetes Cluster Monitoring Quickstart&lt;/a&gt;, released by the DigitalOcean Community Developer Education team, contains fully defined manifests for a Prometheus-Grafana-Alertmanager cluster monitoring stack, as well as a set of preconfigured alerts and Grafana dashboards. It can help you get up and running quickly, and forms a solid foundation from which to build your observability stack.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll deploy this preconfigured stack on DigitalOcean Kubernetes, access the Prometheus, Grafana, and Alertmanager interfaces, and describe how to customize it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin, you'll need a &lt;a href="https://www.digitalocean.com/docs/kubernetes/quickstart/" rel="noopener noreferrer"&gt;DigitalOcean Kubernetes cluster&lt;/a&gt; available to you, and the following tools installed in your local development environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The &lt;code&gt;kubectl&lt;/code&gt; command-line interface installed on your local machine and configured to connect to your cluster. You can read more about installing and configuring &lt;code&gt;kubectl&lt;/code&gt; &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noopener noreferrer"&gt;in its official documentation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  The &lt;a href="https://git-scm.com/book/en/v2/Getting-Started-Installing-Git" rel="noopener noreferrer"&gt;git&lt;/a&gt; version control system installed on your local machine. To learn how to install git on Ubuntu 18.04, consult &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-git-on-ubuntu-18-04" rel="noopener noreferrer"&gt;How To Install Git on Ubuntu 18.04&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  The Coreutils &lt;a href="https://www.gnu.org/software/coreutils/manual/html_node/base64-invocation.html" rel="noopener noreferrer"&gt;base64&lt;/a&gt; tool installed on your local machine. If you're using a Linux machine, this will most likely already be installed. If you're using macOS, you can use &lt;code&gt;openssl base64&lt;/code&gt;, which comes installed by default.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The Cluster Monitoring Quickstart has only been tested on DigitalOcean Kubernetes clusters. To use the Quickstart with other Kubernetes clusters, some modification to the manifest files may be necessary. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 — Cloning the GitHub Repository and Configuring Environment Variables
&lt;/h2&gt;

&lt;p&gt;To start, clone the DigitalOcean Kubernetes Cluster Monitoring &lt;a href="https://github.com/do-community/doks-monitoring" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; onto your local machine using git:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:do-community/doks-monitoring.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, navigate into the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd doks-monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
LICENSE
README.md
changes.txt
manifest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;manifest&lt;/code&gt; directory contains Kubernetes manifests for all of the monitoring stack components, including &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noopener noreferrer"&gt;Service Accounts&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployments&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noopener noreferrer"&gt;StatefulSets&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noopener noreferrer"&gt;ConfigMaps&lt;/a&gt;, etc. To learn more about these manifest files and how to configure them, skip ahead to &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-prometheus-grafana-and-alertmanager-monitoring-stack-on-digitalocean-kubernetes#step-6-%E2%80%94-configuring-the-monitoring-stack-optional" rel="noopener noreferrer"&gt;Configuring the Monitoring Stack&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you just want to get things up and running, begin by setting the &lt;code&gt;APP_INSTANCE_NAME&lt;/code&gt; and &lt;code&gt;NAMESPACE&lt;/code&gt; environment variables, which will be used to configure a unique name for the stack's components and configure the &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="noopener noreferrer"&gt;Namespace&lt;/a&gt; into which the stack will be deployed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export APP_INSTANCE_NAME=sammy-cluster-monitoring
export NAMESPACE=default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this tutorial, we set &lt;code&gt;APP_INSTANCE_NAME&lt;/code&gt; to &lt;code&gt;sammy-cluster-monitoring&lt;/code&gt;, which will prepend all of the monitoring stack Kubernetes object names. You should substitute in a unique descriptive prefix for your monitoring stack. We also set the Namespace to &lt;code&gt;default&lt;/code&gt;. If you’d like to deploy the monitoring stack to a Namespace &lt;strong&gt;other&lt;/strong&gt; than &lt;code&gt;default&lt;/code&gt;, ensure that you first create it in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace "$NAMESPACE"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
namespace/sammy created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, the &lt;code&gt;NAMESPACE&lt;/code&gt; environment variable was set to &lt;code&gt;sammy&lt;/code&gt;. Throughout the rest of the tutorial we'll assume that &lt;code&gt;NAMESPACE&lt;/code&gt; has been set to &lt;code&gt;default&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now, use the &lt;code&gt;base64&lt;/code&gt; command to base64-encode a secure Grafana password. Be sure to substitute a password of your choosing for &lt;code&gt;your_grafana_password&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export GRAFANA_GENERATED_PASSWORD="$(echo -n 'your_grafana_password' | base64)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're using macOS, you can substitute the &lt;code&gt;openssl base64&lt;/code&gt; command which comes installed by default.&lt;/p&gt;
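
&lt;p&gt;For example, the following two invocations should produce the same encoding for a short input like this placeholder password:&lt;/p&gt;

```shell
# GNU coreutils base64 (Linux):
echo -n 'your_grafana_password' | base64

# openssl base64 (preinstalled on macOS); -n keeps echo from
# appending a trailing newline to the encoded input:
echo -n 'your_grafana_password' | openssl base64
```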

&lt;p&gt;At this point, you've grabbed the stack's Kubernetes manifests and configured the required environment variables, so you're now ready to substitute the configured variables into the Kubernetes manifest files and create the stack in your Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2 — Creating the Monitoring Stack
&lt;/h2&gt;

&lt;p&gt;The DigitalOcean Kubernetes Monitoring Quickstart repo contains manifests for the following monitoring, scraping, and visualization components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Prometheus&lt;/strong&gt; is a time series database and monitoring tool that works by polling metrics endpoints and scraping and processing the data exposed by these endpoints. It allows you to query this data using &lt;a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="noopener noreferrer"&gt;PromQL&lt;/a&gt;, a time series data query language. Prometheus will be deployed into the cluster as a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noopener noreferrer"&gt;StatefulSet&lt;/a&gt; with 2 replicas that uses &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noopener noreferrer"&gt;Persistent Volumes&lt;/a&gt; with DigitalOcean &lt;a href="https://www.digitalocean.com/products/block-storage/" rel="noopener noreferrer"&gt;Block Storage&lt;/a&gt;. In addition, a preconfigured set of Prometheus Alerts, Rules, and Jobs will be stored as a &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noopener noreferrer"&gt;ConfigMap&lt;/a&gt;. To learn more about these, skip ahead to the &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-prometheus-grafana-and-alertmanager-monitoring-stack-on-digitalocean-kubernetes#prometheus" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; section of Configuring the Monitoring Stack.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Alertmanager&lt;/strong&gt;, usually deployed alongside Prometheus, forms the alerting layer of the stack, handling alerts generated by Prometheus and deduplicating, grouping, and routing them to integrations like email or &lt;a href="https://www.pagerduty.com/" rel="noopener noreferrer"&gt;PagerDuty&lt;/a&gt;. Alertmanager will be installed as a StatefulSet with 2 replicas. To learn more about Alertmanager, consult &lt;a href="https://prometheus.io/docs/practices/alerting/" rel="noopener noreferrer"&gt;Alerting&lt;/a&gt; from the Prometheus docs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Grafana&lt;/strong&gt; is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data. Grafana will be installed as a StatefulSet with one replica. In addition, a preconfigured set of Dashboards generated by &lt;a href="https://github.com/kubernetes-monitoring/kubernetes-mixin" rel="noopener noreferrer"&gt;kubernetes-mixin&lt;/a&gt; will be stored as a ConfigMap.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;kube-state-metrics&lt;/strong&gt; is an add-on agent that listens to the Kubernetes API server and generates metrics about the state of Kubernetes objects like Deployments and Pods. These metrics are served as plaintext on HTTP endpoints and consumed by Prometheus. kube-state-metrics will be installed as an auto-scalable &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt; with one replica.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;node-exporter&lt;/strong&gt;, a Prometheus exporter that runs on cluster nodes and provides OS and hardware metrics like CPU and memory usage to Prometheus. These metrics are also served as plaintext on HTTP endpoints and consumed by Prometheus. node-exporter will be installed as a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noopener noreferrer"&gt;DaemonSet&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, along with scraping metrics generated by node-exporter, kube-state-metrics, and the other components listed above, Prometheus will be configured to scrape metrics from the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  kube-apiserver, the &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="noopener noreferrer"&gt;Kubernetes API server&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/docs/concepts/overview/components/#kubelet" rel="noopener noreferrer"&gt;kubelet&lt;/a&gt;, the primary node agent that interacts with kube-apiserver to manage Pods and containers on a node.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#cadvisor" rel="noopener noreferrer"&gt;cAdvisor&lt;/a&gt;, a node agent that discovers running containers and collects their CPU, memory, filesystem, and network usage metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To learn more about configuring these components and Prometheus scraping jobs, skip ahead to &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-prometheus-grafana-and-alertmanager-monitoring-stack-on-digitalocean-kubernetes#step-6-%E2%80%94-configuring-the-monitoring-stack-optional" rel="noopener noreferrer"&gt;Configuring the Monitoring Stack&lt;/a&gt;. We'll now substitute the environment variables defined in the previous step into the repo's manifest files, and concatenate the individual manifests into a single master file.&lt;/p&gt;

&lt;p&gt;Begin by using &lt;code&gt;awk&lt;/code&gt; and &lt;code&gt;envsubst&lt;/code&gt; to fill in the &lt;code&gt;APP_INSTANCE_NAME&lt;/code&gt;, &lt;code&gt;NAMESPACE&lt;/code&gt;, and &lt;code&gt;GRAFANA_GENERATED_PASSWORD&lt;/code&gt; variables in the repo's manifest files. After substituting in the variable values, the files will be combined and saved into a master manifest file called &lt;code&gt;sammy-cluster-monitoring_manifest.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;awk 'FNR==1 {print "---"}{print}' manifest/* \
 | envsubst '$APP_INSTANCE_NAME $NAMESPACE $GRAFANA_GENERATED_PASSWORD' \
 &amp;gt; "${APP_INSTANCE_NAME}_manifest.yaml"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should consider storing this file in version control so that you can track changes to the monitoring stack and roll back to previous versions. If you do this, be sure to scrub the &lt;code&gt;admin-password&lt;/code&gt; variable from the file so that you don't check your Grafana password into version control.&lt;/p&gt;
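
&lt;p&gt;One hypothetical way to scrub the value (the exact key name and indentation depend on the generated manifest, so adjust the pattern to match your file) is with &lt;code&gt;sed&lt;/code&gt;:&lt;/p&gt;

```shell
# Replace the base64-encoded admin-password value with a placeholder
# before committing; a .bak backup of the original is kept alongside.
sed -i.bak 's/^\([[:space:]]*admin-password:\).*/\1 REDACTED/' \
  "${APP_INSTANCE_NAME}_manifest.yaml"
```

&lt;p&gt;Keep the &lt;code&gt;.bak&lt;/code&gt; copy (or the unscrubbed original) out of version control as well.&lt;/p&gt;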

&lt;p&gt;Now that you've generated the master manifest file, use &lt;code&gt;kubectl apply -f&lt;/code&gt; to apply the manifest and create the stack in the Namespace you configured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f "${APP_INSTANCE_NAME}_manifest.yaml" --namespace "${NAMESPACE}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
serviceaccount/alertmanager created
configmap/sammy-cluster-monitoring-alertmanager-config created
service/sammy-cluster-monitoring-alertmanager-operated created
service/sammy-cluster-monitoring-alertmanager created

. . .

clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/sammy-cluster-monitoring-prometheus-config created
service/sammy-cluster-monitoring-prometheus created
statefulset.apps/sammy-cluster-monitoring-prometheus created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can track the stack’s deployment progress using &lt;code&gt;kubectl get all&lt;/code&gt;. Once all of the stack components are &lt;code&gt;Running&lt;/code&gt;, you can access the preconfigured Grafana dashboards through the Grafana web interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3 — Accessing Grafana and Exploring Metrics Data
&lt;/h2&gt;

&lt;p&gt;The Grafana Service manifest exposes Grafana as a &lt;code&gt;ClusterIP&lt;/code&gt; Service, which means that it's only accessible via a cluster-internal IP address. To access Grafana outside of your Kubernetes cluster, you can either use &lt;code&gt;kubectl patch&lt;/code&gt; to update the Service in place to a public-facing type like &lt;code&gt;NodePort&lt;/code&gt; or &lt;code&gt;LoadBalancer&lt;/code&gt;, or use &lt;code&gt;kubectl port-forward&lt;/code&gt; to forward a local port to a Grafana Pod port. In this tutorial we'll forward ports, so you can skip ahead to &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-prometheus-grafana-and-alertmanager-monitoring-stack-on-digitalocean-kubernetes#forwarding-a-local-port-to-access-the-grafana-service" rel="noopener noreferrer"&gt;Forwarding a Local Port to Access the Grafana Service&lt;/a&gt;. The following section on exposing Grafana externally is included for reference purposes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exposing the Grafana Service using a Load Balancer (optional)
&lt;/h3&gt;

&lt;p&gt;If you'd like to create a DigitalOcean Load Balancer for Grafana with an external public IP, use &lt;code&gt;kubectl patch&lt;/code&gt; to update the existing Grafana Service in-place to the &lt;code&gt;LoadBalancer&lt;/code&gt; Service type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch svc "$APP_INSTANCE_NAME-grafana" \
  --namespace "$NAMESPACE" \
  -p '{"spec": {"type": "LoadBalancer"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;kubectl patch&lt;/code&gt; command allows you to update Kubernetes objects in-place without having to re-deploy them. You can also modify the master manifest file directly, adding a &lt;code&gt;type: LoadBalancer&lt;/code&gt; parameter to the &lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/grafana-service.yaml#L9" rel="noopener noreferrer"&gt;Grafana Service spec&lt;/a&gt;. To learn more about &lt;code&gt;kubectl patch&lt;/code&gt; and Kubernetes Service types, you can consult the &lt;a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="noopener noreferrer"&gt;Update API Objects in Place Using kubectl patch&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;Services&lt;/a&gt; resources in the official Kubernetes docs.&lt;/p&gt;
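
&lt;p&gt;If you choose to edit the manifest instead, the relevant portion of the Grafana Service spec would look something like the following sketch (the field values here are illustrative, based on the naming conventions used in this tutorial, rather than copied verbatim from the manifest):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: sammy-cluster-monitoring-grafana
spec:
  type: LoadBalancer    # changed from ClusterIP
  ports:
    - name: grafana
      port: 3000
. . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;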

&lt;p&gt;After running the above command, you should see the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
service/sammy-cluster-monitoring-grafana patched
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It may take several minutes to create the Load Balancer and assign it a public IP. You can track its progress using the following command with the &lt;code&gt;-w&lt;/code&gt; flag to watch for changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service "$APP_INSTANCE_NAME-grafana" -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the DigitalOcean Load Balancer has been created and assigned an external IP address, you can fetch its external IP using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SERVICE_IP=$(kubectl get svc $APP_INSTANCE_NAME-grafana \
  --namespace $NAMESPACE \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${SERVICE_IP}/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now access the Grafana UI by navigating to &lt;code&gt;http://&lt;span class="highlight"&gt;SERVICE_IP&lt;/span&gt;/&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Forwarding a Local Port to Access the Grafana Service
&lt;/h3&gt;

&lt;p&gt;If you don't want to expose the Grafana Service externally, you can forward local port &lt;code&gt;3000&lt;/code&gt; directly to a Grafana Pod in the cluster using &lt;code&gt;kubectl port-forward&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward --namespace ${NAMESPACE} ${APP_INSTANCE_NAME}-grafana-0 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
Forwarding from 127.0.0.1:3000 -&amp;gt; 3000
Forwarding from [::1]:3000 -&amp;gt; 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will forward local port &lt;code&gt;3000&lt;/code&gt; to &lt;code&gt;containerPort&lt;/code&gt; &lt;code&gt;3000&lt;/code&gt; of the Grafana Pod &lt;code&gt;&lt;span class="highlight"&gt;sammy-cluster-monitoring&lt;/span&gt;-grafana-0&lt;/code&gt;. To learn more about forwarding ports into a Kubernetes cluster, consult &lt;a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noopener noreferrer"&gt;Use Port Forwarding to Access Applications in a Cluster&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Visit &lt;code&gt;http://localhost:3000&lt;/code&gt; in your web browser. You should see the following Grafana login page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_login.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_login.png" alt="Grafana Login Page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To log in, use the default username &lt;code&gt;admin&lt;/code&gt; (if you haven't modified the &lt;code&gt;admin-user&lt;/code&gt; parameter), and the password you configured in Step 1.&lt;/p&gt;

&lt;p&gt;You'll be brought to the following &lt;strong&gt;Home Dashboard&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_home.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_home.png" alt="Grafana Home Page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the left-hand navigation bar, select the &lt;strong&gt;Dashboards&lt;/strong&gt; button, then click on &lt;strong&gt;Manage&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_dashboard.png" alt="Grafana Dashboard Tab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll be brought to the following dashboard management interface, which lists the dashboards configured in the &lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/dashboards-configmap.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;dashboards-configmap.yaml&lt;/code&gt;&lt;/a&gt; manifest:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_dashboard_list.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_dashboard_list.png" alt="Grafana Dashboard List"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These dashboards are generated by &lt;code&gt;kubernetes-mixin&lt;/code&gt;, an open-source project that allows you to create a standardized set of cluster monitoring Grafana dashboards and Prometheus alerts. To learn more, consult the &lt;a href="https://github.com/kubernetes-monitoring/kubernetes-mixin" rel="noopener noreferrer"&gt;kubernetes-mixin GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Click into the &lt;strong&gt;Kubernetes / Nodes&lt;/strong&gt; dashboard, which visualizes CPU, memory, disk, and network usage for a given node:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_nodes_dash.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_helm_monitoring%2Fgrafana_nodes_dash.png" alt="Grafana Nodes Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Describing how to use these dashboards is outside of this tutorial’s scope, but you can consult the following resources to learn more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  To learn more about the USE method for analyzing a system's performance, you can consult Brendan Gregg's &lt;a href="http://www.brendangregg.com/usemethod.html" rel="noopener noreferrer"&gt;The Utilization Saturation and Errors (USE) Method&lt;/a&gt; page.&lt;/li&gt;
&lt;li&gt;  Google's &lt;a href="https://landing.google.com/sre/books/" rel="noopener noreferrer"&gt;SRE Book&lt;/a&gt; is another helpful resource, in particular Chapter 6: &lt;a href="https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/" rel="noopener noreferrer"&gt;Monitoring Distributed Systems&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  To learn how to build your own Grafana dashboards, check out Grafana's &lt;a href="https://grafana.com/docs/guides/getting_started/" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt; page.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next step, we'll follow a similar process to connect to and explore the Prometheus monitoring system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4 — Accessing Prometheus and Alertmanager
&lt;/h2&gt;

&lt;p&gt;To connect to the Prometheus Pods, we can use &lt;code&gt;kubectl port-forward&lt;/code&gt; to forward a local port. If you’re done exploring Grafana, you can close the port-forward tunnel by hitting &lt;code&gt;CTRL-C&lt;/code&gt;. Alternatively, you can open a new shell and create a new port-forward connection.&lt;/p&gt;

&lt;p&gt;Begin by listing running Pods in the &lt;code&gt;default&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
sammy-cluster-monitoring-alertmanager-0                      1/1     Running   0          17m
sammy-cluster-monitoring-alertmanager-1                      1/1     Running   0          15m
sammy-cluster-monitoring-grafana-0                           1/1     Running   0          16m
sammy-cluster-monitoring-kube-state-metrics-d68bb884-gmgxt   2/2     Running   0          16m
sammy-cluster-monitoring-node-exporter-7hvb7                 1/1     Running   0          16m
sammy-cluster-monitoring-node-exporter-c2rvj                 1/1     Running   0          16m
sammy-cluster-monitoring-node-exporter-w8j74                 1/1     Running   0          16m
sammy-cluster-monitoring-prometheus-0                        1/1     Running   0          16m
sammy-cluster-monitoring-prometheus-1                        1/1     Running   0          16m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are going to forward local port &lt;code&gt;9090&lt;/code&gt; to port &lt;code&gt;9090&lt;/code&gt; of the &lt;code&gt;&lt;span class="highlight"&gt;sammy-cluster-monitoring&lt;/span&gt;-prometheus-0&lt;/code&gt; Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward --namespace ${NAMESPACE} sammy-cluster-monitoring-prometheus-0 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
Forwarding from 127.0.0.1:9090 -&amp;gt; 9090
Forwarding from [::1]:9090 -&amp;gt; 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicates that local port &lt;code&gt;9090&lt;/code&gt; is being forwarded successfully to the Prometheus Pod.&lt;/p&gt;

&lt;p&gt;Visit &lt;code&gt;http://localhost:9090&lt;/code&gt; in your web browser. You should see the following Prometheus &lt;strong&gt;Graph&lt;/strong&gt; page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_monitoring_quickstart%2Fprometheus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_monitoring_quickstart%2Fprometheus.png" alt="Prometheus Graph Page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here you can use PromQL, the Prometheus query language, to select and aggregate time series metrics stored in its database. To learn more about PromQL, consult &lt;a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="noopener noreferrer"&gt;Querying Prometheus&lt;/a&gt; from the official Prometheus docs.&lt;/p&gt;
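
&lt;p&gt;For example, entering a query like the following (which assumes the standard node-exporter metric names) would return the per-node rate of non-idle CPU usage over the past 5 minutes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sum(rate(node_cpu_seconds_total{mode!="idle"}[5m])) by (instance)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;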

&lt;p&gt;In the &lt;strong&gt;Expression&lt;/strong&gt; field, type &lt;code&gt;kubelet_node_name&lt;/code&gt; and hit &lt;strong&gt;Execute&lt;/strong&gt;. You should see a list of time series with the metric &lt;code&gt;kubelet_node_name&lt;/code&gt; that reports the Nodes in your Kubernetes cluster. You can see which node generated the metric and which job scraped the metric in the metric labels:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_monitoring_quickstart%2Fprometheus_results.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_monitoring_quickstart%2Fprometheus_results.png" alt="Prometheus Query Results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, in the top navigation bar, click on &lt;strong&gt;Status&lt;/strong&gt; and then &lt;strong&gt;Targets&lt;/strong&gt; to see the list of targets Prometheus has been configured to scrape. You should see a list of targets corresponding to the list of monitoring endpoints described at the beginning of &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-prometheus-grafana-and-alertmanager-monitoring-stack-on-digitalocean-kubernetes#step-2-%E2%80%94-creating-the-monitoring-stack" rel="noopener noreferrer"&gt;Step 2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To learn more about Prometheus and how to query your cluster metrics, consult the official &lt;a href="https://prometheus.io/docs/introduction/overview/" rel="noopener noreferrer"&gt;Prometheus docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To connect to Alertmanager, which manages Alerts generated by Prometheus, we'll follow a similar process to the one we used to connect to Prometheus. In general, you can explore Alertmanager Alerts by clicking into &lt;strong&gt;Alerts&lt;/strong&gt; in the Prometheus top navigation bar.&lt;/p&gt;

&lt;p&gt;To connect to the Alertmanager Pods, we will once again use &lt;code&gt;kubectl port-forward&lt;/code&gt; to forward a local port. If you’re done exploring Prometheus, you can close the port-forward tunnel by hitting &lt;code&gt;CTRL-C&lt;/code&gt;, or open a new shell to create a new connection.&lt;/p&gt;

&lt;p&gt;We are going to forward local port &lt;code&gt;9093&lt;/code&gt; to port &lt;code&gt;9093&lt;/code&gt; of the &lt;code&gt;&lt;span class="highlight"&gt;sammy-cluster-monitoring&lt;/span&gt;-alertmanager-0&lt;/code&gt; Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward --namespace ${NAMESPACE} sammy-cluster-monitoring-alertmanager-0 9093
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
Forwarding from 127.0.0.1:9093 -&amp;gt; 9093
Forwarding from [::1]:9093 -&amp;gt; 9093
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicates that local port &lt;code&gt;9093&lt;/code&gt; is being forwarded successfully to an Alertmanager Pod.&lt;/p&gt;

&lt;p&gt;Visit &lt;code&gt;http://localhost:9093&lt;/code&gt; in your web browser. You should see the following Alertmanager &lt;strong&gt;Alerts&lt;/strong&gt; page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_monitoring_quickstart%2Falertmanager.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.digitalocean.com%2Farticles%2Fdoks_monitoring_quickstart%2Falertmanager.png" alt="Alertmanager Alerts Page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, you can explore firing alerts and optionally silencing them. To learn more about Alertmanager, consult the &lt;a href="https://prometheus.io/docs/alerting/alertmanager/" rel="noopener noreferrer"&gt;official Alertmanager documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the next step, you'll learn how to optionally configure and scale some of the monitoring stack components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5 — Configuring the Monitoring Stack (optional)
&lt;/h2&gt;

&lt;p&gt;The manifests included in the DigitalOcean Kubernetes Cluster Monitoring Quickstart repository can be modified to use different container images, different numbers of Pod replicas, different ports, and customized configuration files.&lt;/p&gt;

&lt;p&gt;In this step, we'll provide a high-level overview of each manifest’s purpose, and then demonstrate how to scale Prometheus up to 3 replicas by modifying the master manifest file.&lt;/p&gt;

&lt;p&gt;To begin, navigate into the &lt;code&gt;manifest&lt;/code&gt; subdirectory in the repo, and list the directory’s contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd manifest
ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
alertmanager-0serviceaccount.yaml
alertmanager-configmap.yaml
alertmanager-operated-service.yaml
alertmanager-service.yaml
. . .
node-exporter-ds.yaml
prometheus-0serviceaccount.yaml
prometheus-configmap.yaml
prometheus-service.yaml
prometheus-statefulset.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you'll find manifests for the different monitoring stack components. To learn more about specific parameters in the manifests, click into the links and consult the comments included throughout the YAML files:&lt;/p&gt;

&lt;h3&gt;
  
  
  Alertmanager
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/alertmanager-0serviceaccount.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;alertmanager-0serviceaccount.yaml&lt;/code&gt;&lt;/a&gt;: The Alertmanager Service Account, used to give the Alertmanager Pods a Kubernetes identity. To learn more about Service Accounts, consult &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noopener noreferrer"&gt;Configure Service Accounts for Pods&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/alertmanager-configmap.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;alertmanager-configmap.yaml&lt;/code&gt;&lt;/a&gt;: A ConfigMap containing a minimal Alertmanager configuration file, called &lt;code&gt;alertmanager.yml&lt;/code&gt;. Configuring Alertmanager is beyond the scope of this tutorial, but you can learn more by consulting the &lt;a href="https://prometheus.io/docs/alerting/configuration/" rel="noopener noreferrer"&gt;Configuration&lt;/a&gt; section of the Alertmanager documentation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/alertmanager-operated-service.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;alertmanager-operated-service.yaml&lt;/code&gt;&lt;/a&gt;: The Alertmanager &lt;code&gt;mesh&lt;/code&gt; Service, which is used for routing requests between Alertmanager Pods in the current 2-replica high-availability configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/alertmanager-service.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;alertmanager-service.yaml&lt;/code&gt;&lt;/a&gt;: The Alertmanager &lt;code&gt;web&lt;/code&gt; Service, which is used to access the Alertmanager web interface, which you may have done in the previous step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/alertmanager-statefulset.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;alertmanager-statefulset.yaml&lt;/code&gt;&lt;/a&gt;: The Alertmanager StatefulSet, configured with 2 replicas.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
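
&lt;p&gt;For reference, a minimal &lt;code&gt;alertmanager.yml&lt;/code&gt; of the kind stored in this ConfigMap might look like the following sketch (the receiver name and webhook URL are placeholders, not values taken from the Quickstart):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;route:
  receiver: default-receiver
receivers:
  - name: default-receiver
    webhook_configs:
      - url: http://example.com/alert-hook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;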

&lt;h3&gt;
  
  
  Grafana
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/dashboards-configmap.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;dashboards-configmap.yaml&lt;/code&gt;&lt;/a&gt;: A ConfigMap containing the preconfigured &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-json" rel="noopener noreferrer"&gt;JSON&lt;/a&gt; Grafana monitoring dashboards. Generating a new set of dashboards and alerts from scratch goes beyond the scope of this tutorial, but to learn more you can consult the &lt;a href="https://github.com/kubernetes-monitoring/kubernetes-mixin" rel="noopener noreferrer"&gt;kubernetes-mixin GitHub repo&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/grafana-0serviceaccount.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;grafana-0serviceaccount.yaml&lt;/code&gt;&lt;/a&gt;: The Grafana Service Account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/grafana-configmap.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;grafana-configmap.yaml&lt;/code&gt;&lt;/a&gt;: A ConfigMap containing a default set of minimal Grafana configuration files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/grafana-secret.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;grafana-secret.yaml&lt;/code&gt;&lt;/a&gt;: A Kubernetes Secret containing the Grafana admin user and password. To learn more about Kubernetes Secrets, consult &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noopener noreferrer"&gt;Secrets&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/grafana-service.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;grafana-service.yaml&lt;/code&gt;&lt;/a&gt;: The manifest defining the Grafana Service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/grafana-statefulset.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;grafana-statefulset.yaml&lt;/code&gt;&lt;/a&gt;: The Grafana StatefulSet, configured with 1 replica, which is not scalable. Scaling Grafana is beyond the scope of this tutorial. To learn how to create a highly available Grafana set up, you can consult &lt;a href="https://grafana.com/docs/tutorials/ha_setup/" rel="noopener noreferrer"&gt;How to setup Grafana for High Availability&lt;/a&gt; from the official Grafana docs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
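
&lt;p&gt;As a hypothetical illustration of the Secret's shape (the key names follow the &lt;code&gt;admin-user&lt;/code&gt; parameter used by this tutorial; the &lt;code&gt;data&lt;/code&gt; values must be base64-encoded):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: sammy-cluster-monitoring-grafana
type: Opaque
data:
  admin-user: YWRtaW4=            # base64 for "admin"
  admin-password: your_base64_encoded_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;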

&lt;h3&gt;
  
  
  kube-state-metrics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/kube-state-metrics-0serviceaccount.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;kube-state-metrics-0serviceaccount.yaml&lt;/code&gt;&lt;/a&gt;: The kube-state-metrics Service Account and ClusterRole. To learn more about ClusterRoles, consult &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="noopener noreferrer"&gt;Role and ClusterRole&lt;/a&gt; from the Kubernetes docs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/kube-state-metrics-deployment.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;kube-state-metrics-deployment.yaml&lt;/code&gt;&lt;/a&gt;: The main kube-state-metrics Deployment manifest, configured with 1 dynamically scalable replica using &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/addon-resizer" rel="noopener noreferrer"&gt;&lt;code&gt;addon-resizer&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/kube-state-metrics-service.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;kube-state-metrics-service.yaml&lt;/code&gt;&lt;/a&gt;: The Service exposing the &lt;code&gt;kube-state-metrics&lt;/code&gt; Deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  node-exporter
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/node-exporter-0serviceaccount.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;node-exporter-0serviceaccount.yaml&lt;/code&gt;&lt;/a&gt;: The node-exporter Service Account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/node-exporter-ds.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;node-exporter-ds.yaml&lt;/code&gt;&lt;/a&gt;: The node-exporter DaemonSet manifest. Since node-exporter is a DaemonSet, a node-exporter Pod runs on each Node in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/prometheus-0serviceaccount.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;prometheus-0serviceaccount.yaml&lt;/code&gt;&lt;/a&gt;: The Prometheus Service Account, ClusterRole and ClusterRoleBinding.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/prometheus-configmap.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;prometheus-configmap.yaml&lt;/code&gt;&lt;/a&gt;: A ConfigMap that contains three configuration files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;alerts.yaml&lt;/code&gt;: Contains a preconfigured set of alerts generated by &lt;code&gt;kubernetes-mixin&lt;/code&gt; (which was also used to generate the Grafana dashboards). To learn more about configuring alerting rules, consult &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/" rel="noopener noreferrer"&gt;Alerting Rules&lt;/a&gt; from the Prometheus docs.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;prometheus.yaml&lt;/code&gt;: Prometheus's main configuration file. Prometheus has been preconfigured to scrape all the components listed at the beginning of &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-prometheus-grafana-and-alertmanager-monitoring-stack-on-digitalocean-kubernetes#step-2-%E2%80%94-creating-the-monitoring-stack" rel="noopener noreferrer"&gt;Step 2&lt;/a&gt;. Configuring Prometheus goes beyond the scope of this article, but to learn more, you can consult &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/" rel="noopener noreferrer"&gt;Configuration&lt;/a&gt; from the official Prometheus docs.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;rules.yaml&lt;/code&gt;: A set of Prometheus recording rules that enable Prometheus to compute frequently needed or computationally expensive expressions, and save their results as a new set of time series. These are also generated by &lt;code&gt;kubernetes-mixin&lt;/code&gt;, and configuring them goes beyond the scope of this article. To learn more, you can consult &lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules" rel="noopener noreferrer"&gt;Recording Rules&lt;/a&gt; from the official Prometheus documentation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/prometheus-service.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;prometheus-service.yaml&lt;/code&gt;&lt;/a&gt;: The Service that exposes the Prometheus StatefulSet.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/prometheus-statefulset.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;prometheus-statefulset.yaml&lt;/code&gt;&lt;/a&gt;: The Prometheus StatefulSet, configured with 2 replicas. This parameter can be scaled depending on your needs.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
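
&lt;p&gt;Recording rules in &lt;code&gt;rules.yaml&lt;/code&gt; follow the standard Prometheus rule-file format; a minimal hypothetical rule that precomputes the per-node CPU usage rate might look like this (the rule and metric names here are illustrative only, not taken from the Quickstart):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups:
  - name: example.rules
    rules:
      - record: instance:node_cpu_usage:rate5m
        expr: sum(rate(node_cpu_seconds_total{mode!="idle"}[5m])) by (instance)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;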

&lt;h3&gt;
  
  
  Example: Scaling Prometheus
&lt;/h3&gt;

&lt;p&gt;To demonstrate how to modify the monitoring stack, we'll scale the number of Prometheus replicas from 2 to 3.&lt;/p&gt;

&lt;p&gt;Open the &lt;code&gt;&lt;span class="highlight"&gt;sammy-cluster-monitoring&lt;/span&gt;_manifest.yaml&lt;/code&gt; master manifest file using your editor of choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano sammy-cluster-monitoring_manifest.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Scroll down to the Prometheus StatefulSet section of the manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
. . .
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: sammy-cluster-monitoring-prometheus
  labels: &amp;amp;Labels
    k8s-app: prometheus
    app.kubernetes.io/name: sammy-cluster-monitoring
    app.kubernetes.io/component: prometheus
spec:
  serviceName: "sammy-cluster-monitoring-prometheus"
  replicas: 2
  podManagementPolicy: "Parallel"
  updateStrategy:
    type: "RollingUpdate"
  selector:
    matchLabels: *Labels
  template:
    metadata:
      labels: *Labels
    spec:
. . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the number of replicas from 2 to 3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output
. . .
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: sammy-cluster-monitoring-prometheus
  labels: &amp;amp;Labels
    k8s-app: prometheus
    app.kubernetes.io/name: sammy-cluster-monitoring
    app.kubernetes.io/component: prometheus
spec:
  serviceName: "sammy-cluster-monitoring-prometheus"
  replicas: 3
  podManagementPolicy: "Parallel"
  updateStrategy:
    type: "RollingUpdate"
  selector:
    matchLabels: *Labels
  template:
    metadata:
      labels: *Labels
    spec:
. . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you're done, save and close the file.&lt;/p&gt;

&lt;p&gt;Apply the changes using &lt;code&gt;kubectl apply -f&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f sammy-cluster-monitoring_manifest.yaml --namespace default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can track progress using &lt;code&gt;kubectl get pods&lt;/code&gt;. Using this same technique, you can update many of the Kubernetes parameters and much of the configuration for this observability stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you installed a Prometheus, Grafana, and Alertmanager monitoring stack into your DigitalOcean Kubernetes cluster with a standard set of dashboards, Prometheus rules, and alerts.&lt;/p&gt;

&lt;p&gt;You may also choose to deploy this monitoring stack using the &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; Kubernetes package manager. To learn more, consult &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-digitalocean-kubernetes-cluster-monitoring-with-helm-and-prometheus-operator" rel="noopener noreferrer"&gt;How to Set Up DigitalOcean Kubernetes Cluster Monitoring with Helm and Prometheus Operator&lt;/a&gt;. Another way to get this stack up and running is to use the DigitalOcean Marketplace &lt;a href="https://marketplace.digitalocean.com/apps/kubernetes-monitoring-stack-beta" rel="noopener noreferrer"&gt;Kubernetes Monitoring Stack solution&lt;/a&gt;, currently in beta.&lt;/p&gt;

&lt;p&gt;The DigitalOcean Kubernetes Cluster Monitoring Quickstart repository is heavily based on and modified from Google Cloud Platform’s &lt;a href="https://github.com/GoogleCloudPlatform/click-to-deploy/tree/master/k8s/prometheus" rel="noopener noreferrer"&gt;click-to-deploy Prometheus solution&lt;/a&gt;. A full manifest of modifications and changes from the original repository can be found in the Quickstart repo’s &lt;a href="https://github.com/do-community/doks-monitoring/blob/master/changes.md" rel="noopener noreferrer"&gt;&lt;code&gt;changes.md&lt;/code&gt; file&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;a href="http://creativecommons.org/licenses/by-nc-sa/4.0/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9dopc8jjbnew6l2xc2j.png" alt="CC 4.0 License"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This work is licensed under a &lt;a href="http://creativecommons.org/licenses/by-nc-sa/4.0/" rel="noopener noreferrer"&gt;Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>prometheus</category>
      <category>grafana</category>
    </item>
    <item>
      <title>Becoming a Technical Writer: The Paths 3 Engineers Took to their First Community Tutorial</title>
      <dc:creator>Hanif Jetha</dc:creator>
      <pubDate>Mon, 25 Mar 2019 16:02:57 +0000</pubDate>
      <link>https://dev.to/digitalocean/becoming-a-technical-writer-the-paths-3-engineers-took-to-their-first-community-tutorial-25no</link>
      <guid>https://dev.to/digitalocean/becoming-a-technical-writer-the-paths-3-engineers-took-to-their-first-community-tutorial-25no</guid>
      <description>&lt;p&gt;Regular readers of DigitalOcean Community tutorials (like &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-18-04"&gt;How To Secure Apache with Let's Encrypt on Ubuntu 18.04&lt;/a&gt; or &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-spin-up-a-hadoop-cluster-with-digitalocean-droplets"&gt;How To Spin Up a Hadoop Cluster with DigitalOcean Droplets&lt;/a&gt;) may have noticed that beyond setup instructions, tutorials often contain insights, tips, and pain points surfaced from real-world production scenarios. This is because many of them are written by DevOps engineers, sysadmins, and software developers eager to share solutions and workarounds to problems faced while rolling out software and systems on the job.&lt;/p&gt;

&lt;p&gt;Through tutorials, DigitalOcean writers contribute their in-the-trenches experience to the wider developer community, and in doing so solidify their own understanding of technical concepts. Some authors work full-time on the DigitalOcean Community writing team, while others contribute tutorials as part of the &lt;a href="https://www.digitalocean.com/write-for-donations/"&gt;Write for DOnations&lt;/a&gt; program, which matches the author's payout with a charitable donation to a tech-focused nonprofit. All Community contributors share a common spirit of “giving back” to readers through teaching, whether these readers are seasoned engineering managers, or students wading in the waters of &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-18-04"&gt;setting up an Nginx server&lt;/a&gt; or &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-jupyter-notebook-for-python-3"&gt;Jupyter Notebook&lt;/a&gt; for the first time.&lt;/p&gt;

&lt;p&gt;DigitalOcean tutorials can broadly be categorized as either conceptual or procedural. &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes"&gt;An Introduction to Kubernetes&lt;/a&gt;, for example, is more conceptual in nature, as the author provides the reader with an overview of a piece of software or DevOps concept and boils it down to a set of digestible core ideas. &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes"&gt;How To Set Up an Elasticsearch, Fluentd, and Kibana (EFK) Logging Stack on Kubernetes&lt;/a&gt;, on the other hand, is a procedural walkthrough to support a developer setting up their infrastructure. The tutorial brings the reader through the installation and configuration of one or several technologies step-by-step, often providing valuable insight and context along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Becoming a Technical Writer
&lt;/h2&gt;

&lt;p&gt;There is no “one” path to becoming a DigitalOcean tutorial writer. &lt;a href="https://www.digitalocean.com/community/users/manicas"&gt;Mitchell Anicas&lt;/a&gt;, formerly a senior technical writer on the Community team, and now a senior software engineer on the Billing team, began his career as a systems administrator at the University of Hawaii. After relocating to New York, he leveraged his years of experience administering systems and automating their configuration and deployment to begin writing Linux and infrastructure tutorials full-time.&lt;/p&gt;

&lt;p&gt;“While working as a sysadmin, many of the tutorials I referenced weren’t complete, or weren’t really that high quality. It was a lot of someone writing blog posts saying something like ‘this is how this worked for me,’” he says. “Since I had been on the other side, I had a lot of empathy for readers.” Although he’d never written a tutorial before, with the help of other Community writers (all tutorials are peer-edited and tech-tested by another member of the Community team), he began publishing articles like &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04"&gt;How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on Ubuntu 14.04&lt;/a&gt; and &lt;a href="https://www.digitalocean.com/community/tutorials/5-common-server-setups-for-your-web-application"&gt;5 Common Server Setups For Your Web Application&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the other hand, &lt;a href="https://www.digitalocean.com/community/users/erikaheidi"&gt;Erika Heidi&lt;/a&gt;, a software engineer and writer based out of Amsterdam, had always been a writer. “I’ve always enjoyed writing, since I was very young,” she says. “I figured out that I could use blogging as a platform for documenting technical things like setting up servers and fixing common Linux problems, both as a future reference for myself and also as a way to share what I was learning.”&lt;/p&gt;

&lt;p&gt;As a Community author, she’s contributed many tutorials, ranging from the procedural &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-16-04"&gt;How To Secure Apache with Let's Encrypt on Ubuntu 16.04&lt;/a&gt;, to conceptual articles like &lt;a href="https://www.digitalocean.com/community/tutorials/what-is-high-availability"&gt;What is High Availability&lt;/a&gt; and &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-configuration-management"&gt;An Introduction to Configuration Management&lt;/a&gt; that draw from her extensive experience as a DevOps engineer. Like Mitchell, her technical experience endowed her with a keen sense for the problems her peers were facing and solutions she could provide to fill the gap.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/community/users/jeremylevanmorris"&gt;Jeremy Morris&lt;/a&gt; followed a similar path to publishing his first DigitalOcean tutorial: “Throughout college as a computer science student, I would occasionally write blog posts about some of the things I learned at my internships, as a way of gaining a deeper understanding of the topics and sharing my knowledge with others,” he recalls. One of his professors, Lisa Tagliaferri, now managing the team of in-house Community writers at DigitalOcean, recommended he leverage his newly gained Python and Django experience and write a tutorial series on building a blog. This eventually led to him publishing Django tutorials like &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-django-and-set-up-a-development-environment-on-ubuntu-16-04"&gt;How To Install Django and Set Up a Development Environment on Ubuntu 16.04&lt;/a&gt;, &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-create-a-django-app-and-connect-it-to-a-database"&gt;How To Create a Django App and Connect it to a Database&lt;/a&gt; and &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-create-django-models"&gt;How To Create Django Models&lt;/a&gt;, all topics he had become familiar with through his professional work.&lt;/p&gt;

&lt;p&gt;Mitchell, Erika, and Jeremy all became writers at different stages of their careers in tech. Their common desire to help their peers through sharing hard-won solutions to challenging DevOps problems led them to publish their first DigitalOcean tutorials. While writing and educating others presented them with the need to develop distinct communication skills, they found that the challenge of describing solutions in a clear and accessible manner was well worth developing alongside their engineering experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Write?
&lt;/h2&gt;

&lt;p&gt;Technical writing can be a powerful complement to engineering work, requiring the author to understand and process concepts at a deeper level than may be required to complete day-to-day tasks. Erika, in her drive to deeply understand the technologies she works with as a DevOps engineer, finds that writing tutorials helps her truly gain familiarity with new ideas: “There’s a lot of stuff in engineering I know how to do because I’ve done it multiple times before and it simply works, but if I had to explain how it works, I couldn’t,” she notes. “Writing helps me ‘untangle’ my thoughts, because I have to explain and organize those thoughts into logical steps.”&lt;/p&gt;

&lt;p&gt;It can also be an incredibly rewarding pursuit. By publishing open-source tech tutorials, Erika feels that she is helping others who have similarly lent a hand throughout her learning path: “I believe the fulfillment comes from the feeling that I’m sharing something that might be useful for others. I’m searching for tutorials and how to do stuff all the time on the internet, and this is one way I can give back to the community.”&lt;/p&gt;

&lt;p&gt;Mitchell shares this sense of being able to contribute his experience and knowledge through tutorials: “It’s cool to be able to help thousands of people through writing, especially writing tutorials for DigitalOcean, whose tutorials are respected and recognized by the developer community at large. I’ll be at a conference and run into someone and they’ll say something like ‘Oh yeah I used that tutorial of yours to set up an ELK stack, thanks so much!’”&lt;/p&gt;

&lt;h2&gt;
  
  
  Taking the Plunge and Diving into Technical Writing
&lt;/h2&gt;

&lt;p&gt;So how do you get started writing your first DevOps, software development, or systems tutorial? By jumping into the deep end, of course! Through the &lt;a href="https://www.digitalocean.com/write-for-donations/"&gt;Write for DOnations&lt;/a&gt; program you can submit a short writing sample (for inspiration, take a look at &lt;a href="https://www.digitalocean.com/community/tutorials/suggested-topics-for-tutorials"&gt;this list&lt;/a&gt; of suggested topics) and work with our Community editors to have your article edited, tech-tested, proofread, and guided to publication. In addition, you'll be paid for your work, and DigitalOcean will match your payout with a donation to a &lt;a href="https://www.digitalocean.com/community/tutorials/write-for-donations-faq#which-charities-and-nonprofits-will-my-writing-support"&gt;tech-focused charity&lt;/a&gt; of your choice. To date, DigitalOcean has donated over $13,000 through external-author submissions and the Write for DOnations program!&lt;/p&gt;

&lt;p&gt;In addition, the Community team frequently has openings for new writers, editors, and developer advocates to educate, curate, and produce some of the highest quality software-focused tutorials on the web. If you’re a DevOps or software engineer and have some experience writing documentation or other content, consult our &lt;a href="https://www.digitalocean.com/careers/"&gt;Careers&lt;/a&gt; page for an up-to-date list of open full-time Community positions.&lt;/p&gt;

&lt;p&gt;Although he may have been referring to something other than the pains of backing up and replicating a large distributed MySQL database, let Hemingway’s words guide you as you set sail on your technical writing voyage: “write hard and clear about what hurts.”&lt;/p&gt;

</description>
      <category>technicalwriting</category>
      <category>blogging</category>
    </item>
    <item>
      <title>Modernizing Applications for Kubernetes</title>
      <dc:creator>Hanif Jetha</dc:creator>
      <pubDate>Mon, 22 Oct 2018 16:13:14 +0000</pubDate>
      <link>https://dev.to/digitalocean/modernizing-applications-for-kubernetes-1hon</link>
      <guid>https://dev.to/digitalocean/modernizing-applications-for-kubernetes-1hon</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Modern stateless applications are built and designed to run in software containers like Docker, and be managed by container clusters like Kubernetes. They are developed using &lt;a href="https://github.com/cncf/toc/blob/master/DEFINITION.md"&gt;Cloud Native&lt;/a&gt; and &lt;a href="https://12factor.net/"&gt;Twelve Factor&lt;/a&gt; principles and patterns, to minimize manual intervention and maximize portability and redundancy. Migrating virtual machine- or bare metal-based applications into containers (known as "containerizing") and deploying them inside of clusters often involves significant shifts in how these apps are built, packaged, and delivered.&lt;/p&gt;

&lt;p&gt;Building on &lt;a href="https://www.digitalocean.com/community/tutorials/architecting-applications-for-kubernetes"&gt;Architecting Applications for Kubernetes&lt;/a&gt;, in this conceptual guide, we'll discuss high-level steps for modernizing your applications, with the end goal of running and managing them in a Kubernetes cluster. Although you can run stateful applications like databases on Kubernetes, this guide focuses on migrating and modernizing stateless applications, with persistent data offloaded to an external data store. Kubernetes provides advanced functionality for efficiently managing and scaling stateless applications, and we'll explore the application and infrastructure changes necessary for running scalable, observable, and portable apps on Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing the Application for Migration
&lt;/h2&gt;

&lt;p&gt;Before containerizing your application or writing Kubernetes Pod and Deployment configuration files, you should implement application-level changes to maximize your app's portability and observability in Kubernetes. Kubernetes is a highly automated environment that can automatically deploy and restart failing application containers, so it's important to build in the appropriate application logic to communicate with the container orchestrator and allow it to automatically scale your app as necessary.&lt;/p&gt;

&lt;h3&gt;
  
  
  Extract Configuration Data
&lt;/h3&gt;

&lt;p&gt;One of the first application-level changes to implement is extracting application configuration from application code. Configuration consists of any information that varies across deployments and environments, like service endpoints, database addresses, credentials, and various parameters and options. For example, if you have two environments, say &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;production&lt;/code&gt;, each with its own database, your application should not declare the database endpoint and credentials explicitly in the code. Instead, these values should live in a separate location, either as variables in the running environment, a local file, or an external key-value store, from which the app reads them.&lt;/p&gt;

&lt;p&gt;Hardcoding these parameters into your code poses a security risk as this config data often consists of sensitive information, which you then check in to your version control system. It also increases complexity as you now have to maintain multiple versions of your application, each consisting of the same core application logic, but varying slightly in configuration. As applications and their configuration data grow, hardcoding config into app code quickly becomes unwieldy.&lt;/p&gt;

&lt;p&gt;By extracting configuration values from your application code, and instead ingesting them from the running environment or local files, your app becomes a generic, portable package that can be deployed into any environment, provided you supply it with accompanying configuration data. Container software like Docker and cluster software like Kubernetes have been designed around this paradigm, building in features for managing configuration data and injecting it into application containers. These features will be covered in more detail in the &lt;a href="https://www.digitalocean.com/community/tutorials/modernizing-applications-for-kubernetes#inject-configuration"&gt;Containerizing&lt;/a&gt; and &lt;a href="https://www.digitalocean.com/community/tutorials/modernizing-applications-for-kubernetes#injecting-configuration-data-with-kubernetes"&gt;Kubernetes&lt;/a&gt; sections.&lt;/p&gt;
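&lt;p&gt;In Kubernetes, for example, a ConfigMap can hold these values and inject them into a container's environment. The following is a rough sketch of that pattern; the ConfigMap, Pod, and image names are illustrative, not from a real deployment:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flask-config
data:
  APP_DB_HOST: mydb.mycloud.com
  APP_DB_USER: sammy
---
# In the Pod spec, envFrom injects every key in the ConfigMap
# as an environment variable in the container.
apiVersion: v1
kind: Pod
metadata:
  name: flask-app
spec:
  containers:
    - name: flask-app
      image: myrepo/flask-app:latest  # illustrative image name
      envFrom:
        - configMapRef:
            name: flask-config
```

&lt;p&gt;With this in place, the app reads &lt;code&gt;APP_DB_HOST&lt;/code&gt; and &lt;code&gt;APP_DB_USER&lt;/code&gt; from its environment exactly as it would locally, while the values themselves are managed by the cluster.&lt;/p&gt;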

&lt;p&gt;Here’s a quick example demonstrating how to externalize two config values &lt;code&gt;DB_HOST&lt;/code&gt; and &lt;code&gt;DB_USER&lt;/code&gt; from a simple Python &lt;a href="http://flask.pocoo.org/"&gt;Flask&lt;/a&gt; app’s code. We'll make them available in the app’s running environment as env vars, from which the app will read them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;hardcoded_config.py&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;

&lt;span class="n"&gt;DB_HOST&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'mydb.mycloud.com'&lt;/span&gt;
&lt;span class="n"&gt;DB_USER&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'sammy'&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;print_config&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'DB_HOST: {} -- DB_USER: {}'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DB_HOST&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DB_USER&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Running this simple app (consult the &lt;a href="http://flask.pocoo.org/docs/1.0/quickstart/"&gt;Flask Quickstart&lt;/a&gt; to learn how) and visiting its web endpoint will display a page containing these two config values.&lt;/p&gt;

&lt;p&gt;Now, here’s the same example with the config values externalized to the app’s running environment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;env_config.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;os&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;

&lt;span class="n"&gt;DB_HOST&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'APP_DB_HOST'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;DB_USER&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'APP_DB_USER'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;print_config&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'DB_HOST: {} -- DB_USER: {}'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DB_HOST&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DB_USER&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Before running the app, we set the necessary config variables in the local environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
export APP_DB_HOST=mydb.mycloud.com

export APP_DB_USER=sammy

flask run

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The displayed web page should contain the same text as in the first example, but the app’s config can now be modified independently of the application code. You can use a similar approach to read in config parameters from a local file.&lt;/p&gt;

&lt;p&gt;In the next section we’ll discuss moving application state outside of containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Offload Application State
&lt;/h3&gt;

&lt;p&gt;Cloud Native applications run in containers, and are dynamically orchestrated by cluster software like Kubernetes or Docker Swarm. A given app or service can be load balanced across multiple replicas, and any individual app container should be able to fail with minimal or no disruption of service for clients. To enable this horizontal, redundant scaling, applications must be designed in a stateless fashion: they respond to client requests without storing persistent client or application data locally, so that if a running app container is destroyed or restarted at any point, no critical data is lost.&lt;/p&gt;

&lt;p&gt;For example, if you are running an address book application and your app adds, removes and modifies contacts from an address book, the address book data store should be an external database or other data store, and the only data kept in container memory should be short-term in nature, and disposable without critical loss of information. Data that persists across user visits like sessions should also be moved to external data stores like Redis. Wherever possible, you should offload any state from your app to services like managed databases or caches.&lt;/p&gt;

&lt;p&gt;For stateful applications that require a persistent data store (like a replicated MySQL database), Kubernetes builds in features for attaching persistent block storage volumes to containers and Pods. To ensure that a Pod can maintain state and access the same persistent volume after a restart, the StatefulSet workload must be used. StatefulSets are ideal for deploying databases and other long-running data stores to Kubernetes.&lt;/p&gt;
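&lt;p&gt;As a rough sketch of that mechanism (the names, image, and storage size below are illustrative, and the manifest assumes a cluster serving the &lt;code&gt;apps/v1&lt;/code&gt; API), a StatefulSet requests per-Pod persistent storage through a &lt;code&gt;volumeClaimTemplates&lt;/code&gt; section:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  # Each Pod gets its own PersistentVolumeClaim stamped from this
  # template, and reattaches to the same volume after a restart.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```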

&lt;p&gt;Stateless containers enable maximum portability and full use of available cloud resources, allowing the Kubernetes scheduler to quickly scale your app up and down and launch Pods wherever resources are available. If you don’t require the stability and ordering guarantees provided by the StatefulSet workload, you should use the Deployment workload to manage and scale your applications.&lt;/p&gt;

&lt;p&gt;To learn more about the design and architecture of stateless, Cloud Native microservices, consult our &lt;a href="http://assets.digitalocean.com/white-papers/running-digitalocean-kubernetes.pdf"&gt;Kubernetes White Paper&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implement Health Checks
&lt;/h3&gt;

&lt;p&gt;In the Kubernetes model, the cluster control plane can be relied on to repair a broken application or service. It does this by checking the health of application Pods, and restarting or rescheduling unhealthy or unresponsive containers. By default, if your application container is running, Kubernetes sees your Pod as "healthy." In many cases this is a reliable indicator for the health of a running application. However, if your application is deadlocked and not performing any meaningful work, the app process and container will continue to run indefinitely, and by default Kubernetes will keep the stalled container alive.&lt;/p&gt;

&lt;p&gt;To properly communicate application health to the Kubernetes control plane, you should implement custom application health checks that indicate when an application is both running and ready to receive traffic. The first type of health check is called a &lt;strong&gt;readiness probe&lt;/strong&gt;, which lets Kubernetes know when your application is ready to receive traffic. The second is called a &lt;strong&gt;liveness probe&lt;/strong&gt;, which lets Kubernetes know when your application is healthy and running. The Kubelet Node agent can perform these probes on running Pods using three methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP: The Kubelet probe performs an HTTP GET request against an endpoint (like &lt;code&gt;/health&lt;/code&gt;), and succeeds if the response status is between 200 and 399.&lt;/li&gt;
&lt;li&gt;Container Command: The Kubelet probe executes a command inside of the running container. If the exit code is 0, then the probe succeeds.&lt;/li&gt;
&lt;li&gt;TCP: The Kubelet probe attempts to connect to your container on a specified port. If it can establish a TCP connection, then the probe succeeds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should choose the appropriate method depending on the running application(s), programming language, and framework. The readiness and liveness probes can both use the same probe method and perform the same check, but the inclusion of a readiness probe will ensure that the Pod doesn't receive traffic until the probe begins succeeding.&lt;/p&gt;

&lt;p&gt;When planning and thinking about containerizing your application and running it on Kubernetes, you should allocate planning time for defining what "healthy" and "ready" mean for your particular application, and development time for implementing and testing the endpoints and/or check commands.&lt;/p&gt;

&lt;p&gt;Here’s a minimal health endpoint for the Flask example referenced above:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;env_config.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;  
&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;print_config&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'DB_HOST: {} -- DB_USER: {}'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DB_HOST&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DB_USER&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;

&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/health'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;return_ok&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;'Ok!'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A Kubernetes liveness probe that checks this path would then look something like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pod_spec.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;
  &lt;span class="n"&gt;livenessProbe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="n"&gt;httpGet&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;health&lt;/span&gt;
        &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
      &lt;span class="n"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
      &lt;span class="n"&gt;periodSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;initialDelaySeconds&lt;/code&gt; field specifies that Kubernetes (specifically the Node Kubelet) should probe the &lt;code&gt;/health&lt;/code&gt; endpoint after waiting 5 seconds, and &lt;code&gt;periodSeconds&lt;/code&gt; tells the Kubelet to probe &lt;code&gt;/health&lt;/code&gt; every 2 seconds.&lt;/p&gt;

&lt;p&gt;To learn more about liveness and readiness probes, consult the &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/"&gt;Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Instrument Code for Logging and Monitoring
&lt;/h3&gt;

&lt;p&gt;When running your containerized application in an environment like Kubernetes, it's important to publish telemetry and logging data to monitor and debug your application's performance. Building in features to publish performance metrics like response duration and error rates will help you monitor your application and alert you when it's unhealthy.&lt;/p&gt;

&lt;p&gt;One tool you can use to monitor your services is &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;, an open-source systems monitoring and alerting toolkit, hosted by the Cloud Native Computing Foundation (CNCF). Prometheus provides several client libraries for instrumenting your code with various metric types to count events and their durations. For example, if you're using the Flask Python framework, you can use the Prometheus &lt;a href="https://github.com/prometheus/client_python"&gt;Python client&lt;/a&gt; to add decorators to your request processing functions to track the time spent processing requests. These metrics can then be scraped by Prometheus at an HTTP endpoint like &lt;code&gt;/metrics&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A helpful method to use when designing your app's instrumentation is the RED method. It consists of the following three key request metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rate: The number of requests per second received by your application&lt;/li&gt;
&lt;li&gt;Errors: The number of requests that fail&lt;/li&gt;
&lt;li&gt;Duration: The amount of time it takes your application to serve a response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This minimal set of metrics should give you enough data to alert on when your application's performance degrades. Implementing this instrumentation along with the health checks discussed above will allow you to quickly detect and recover from a failing application.&lt;/p&gt;
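&lt;p&gt;As a sketch, the RED metrics could be tracked in-process like this (the &lt;code&gt;RedMetrics&lt;/code&gt; class and its names are hypothetical, not from any library):&lt;/p&gt;

```python
import time

class RedMetrics:
    """Minimal in-process tracker for the RED method: request
    rate (count), errors, and duration. Hypothetical sketch only."""

    def __init__(self):
        self.request_count = 0
        self.error_count = 0
        self.durations = []  # seconds spent serving each request

    def observe(self, handler, *args, **kwargs):
        """Run a request handler, recording count, errors, and duration."""
        self.request_count += 1
        start = time.monotonic()
        try:
            return handler(*args, **kwargs)
        except Exception:
            self.error_count += 1
            raise
        finally:
            self.durations.append(time.monotonic() - start)

metrics = RedMetrics()
metrics.observe(lambda: "ok")          # a successful request
try:
    metrics.observe(lambda: 1 / 0)     # a failing request
except ZeroDivisionError:
    pass

print(metrics.request_count, metrics.error_count)  # 2 1
```

&lt;p&gt;In practice you would export these counters through a metrics endpoint (for example with the Prometheus Python client) rather than tracking them by hand.&lt;/p&gt;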

&lt;p&gt;To learn more about signals to measure when monitoring your applications, consult &lt;a href="https://landing.google.com/sre/book/chapters/monitoring-distributed-systems.html#xref_monitoring_golden-signals"&gt;Monitoring Distributed Systems&lt;/a&gt; from the Google Site Reliability Engineering book.&lt;/p&gt;

&lt;p&gt;In addition to thinking about and designing features for publishing telemetry data, you should also plan how your application will log in a distributed cluster-based environment. You should ideally remove hardcoded configuration references to local log files and log directories, and instead log directly to stdout and stderr. You should treat logs as a continuous event stream, or sequence of time-ordered events. This output stream will then get captured by the container enveloping your application, from which it can be forwarded to a logging layer like the EFK (Elasticsearch, Fluentd, and Kibana) stack. Kubernetes provides a lot of flexibility in designing your logging architecture, which we'll explore in more detail below.&lt;/p&gt;
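&lt;p&gt;In Python, for example, logging to stdout rather than a hardcoded file path could be set up like this (a minimal sketch using only the standard library):&lt;/p&gt;

```python
import logging
import sys

# Log to stdout as a stream of time-ordered events instead of a
# hardcoded local log file; the container runtime captures this stream.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

logger = logging.getLogger("myapp")
logger.info("application started")
```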

&lt;h3&gt;
  
  
  Build Administration Logic into API
&lt;/h3&gt;

&lt;p&gt;Once your application is containerized and up and running in a cluster environment like Kubernetes, you may no longer have shell access to the container running your app. If you've implemented adequate health checking, logging, and monitoring, you can quickly be alerted to production issues and debug them, but taking action beyond restarting and redeploying containers may be difficult. For quick operational and maintenance fixes like flushing queues or clearing a cache, you should implement the appropriate API endpoints so that you can perform these operations without having to restart containers or &lt;code&gt;exec&lt;/code&gt; into running containers and execute a series of commands. Containers should be treated as immutable objects, and manual administration should be avoided in a production environment. If you must perform one-off administrative tasks, like clearing caches, you should expose this functionality via the API.&lt;/p&gt;
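&lt;p&gt;As an illustrative sketch (the route path and handler names here are hypothetical, not from any framework), exposing a cache flush as an endpoint rather than a manual command might look like:&lt;/p&gt;

```python
# Hypothetical sketch: expose one-off admin operations as API endpoints
# instead of exec-ing into running containers.
cache = {"user:1": "alice", "user:2": "bob"}

def flush_cache():
    """Admin operation: clear the in-process cache."""
    cache.clear()
    return {"status": "flushed"}

# A route table mapping admin paths to handlers; in a real app these
# would be registered as routes with your web framework.
admin_routes = {"/admin/flush-cache": flush_cache}

def handle(path):
    handler = admin_routes.get(path)
    if handler is None:
        return {"status": "not found"}
    return handler()

print(handle("/admin/flush-cache"))  # {'status': 'flushed'}
print(len(cache))                    # 0
```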

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;In these sections we’ve discussed application-level changes you may wish to implement before containerizing your application and moving it to Kubernetes. For a more in-depth walkthrough on building Cloud Native apps, consult &lt;a href="https://www.digitalocean.com/community/tutorials/architecting-applications-for-kubernetes"&gt;Architecting Applications for Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’ll now discuss some considerations to keep in mind when building containers for your apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containerizing Your Application
&lt;/h2&gt;

&lt;p&gt;Now that you've implemented app logic to maximize its portability and observability in a cloud-based environment, it's time to package your app inside of a container. For the purposes of this guide, we'll use Docker containers, but you should use whichever container implementation best suits your production needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Explicitly Declare Dependencies
&lt;/h3&gt;

&lt;p&gt;Before creating a Dockerfile for your application, one of the first steps is taking stock of the software and operating system dependencies your application needs to run correctly. Dockerfiles allow you to explicitly version every piece of software installed into the image, and you should take advantage of this feature by explicitly declaring the parent image, software library, and programming language versions.&lt;/p&gt;

&lt;p&gt;Avoid &lt;code&gt;latest&lt;/code&gt; tags and unversioned packages as much as possible, as these can shift, potentially breaking your application. You may wish to create a private registry or private mirror of a public registry to exert more control over image versioning and to prevent upstream changes from unintentionally breaking your image builds.&lt;/p&gt;
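&lt;p&gt;For example, a Dockerfile might pin the parent image and library versions explicitly (the image tag and version numbers here are illustrative):&lt;/p&gt;

```dockerfile
# Pin the parent image to an exact version rather than a floating tag
FROM python:3.7.4-alpine3.10

# Pin library versions explicitly rather than installing "latest"
RUN pip install flask==1.1.1 redis==3.3.8
```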

&lt;p&gt;To learn more about setting up a private image registry, consult &lt;a href="https://docs.docker.com/registry/deploying/"&gt;Deploy a Registry Server&lt;/a&gt; from the Docker official documentation and the &lt;a href="https://www.digitalocean.com/community/tutorials/modernizing-applications-for-kubernetes#publish-image-to-a-registry"&gt;Registries&lt;/a&gt; section below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keep Image Sizes Small
&lt;/h3&gt;

&lt;p&gt;When deploying and pulling container images, large images can significantly slow things down and add to your bandwidth costs. Packaging a minimal set of tools and application files into an image provides several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced image sizes&lt;/li&gt;
&lt;li&gt;Faster image builds&lt;/li&gt;
&lt;li&gt;Reduced container start lag&lt;/li&gt;
&lt;li&gt;Faster image transfer times&lt;/li&gt;
&lt;li&gt;Improved security through a reduced attack surface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are some steps to consider when building your images:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a minimal base OS image like &lt;code&gt;alpine&lt;/code&gt; or build from &lt;code&gt;scratch&lt;/code&gt; instead of a fully featured OS like &lt;code&gt;ubuntu&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Clean up unnecessary files and artifacts after installing software&lt;/li&gt;
&lt;li&gt;Use separate "build" and "runtime" containers to keep production application containers small&lt;/li&gt;
&lt;li&gt;Ignore unnecessary build artifacts and files when copying in large directories&lt;/li&gt;
&lt;/ul&gt;
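&lt;p&gt;The "build" and "runtime" split can be implemented with a multi-stage Dockerfile; a sketch might look like this (image tags and file names are illustrative):&lt;/p&gt;

```dockerfile
# Build stage: install dependencies with the full toolchain available
FROM python:3.7-alpine AS build
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Runtime stage: copy only the installed artifacts into a minimal image
FROM python:3.7-alpine
COPY --from=build /install /usr/local
COPY app.py .
CMD ["python", "app.py"]
```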

&lt;p&gt;For a full guide on optimizing Docker containers, including many illustrative examples, consult &lt;a href="https://www.digitalocean.com/community/tutorials/building-optimized-containers-for-kubernetes"&gt;Building Optimized Containers for Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inject Configuration
&lt;/h3&gt;

&lt;p&gt;Docker provides several helpful features for injecting configuration data into your app's running environment.&lt;/p&gt;

&lt;p&gt;One option for doing this is specifying environment variables and their values in the Dockerfile using the &lt;code&gt;ENV&lt;/code&gt; statement, so that configuration data is built into images:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
ENV MYSQL_USER=my_db_user
...

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Your app can then parse these values from its running environment and configure its settings appropriately.&lt;/p&gt;
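&lt;p&gt;A minimal sketch of reading these values at startup (the fallback defaults here are illustrative, for local development only):&lt;/p&gt;

```python
import os

# Read configuration from the environment instead of hardcoding it;
# the defaults are illustrative fallbacks for local development.
DB_HOST = os.environ.get("DB_HOST", "localhost")
MYSQL_USER = os.environ.get("MYSQL_USER", "default_user")

print("DB_HOST: {} -- MYSQL_USER: {}".format(DB_HOST, MYSQL_USER))
```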

&lt;p&gt;You can also pass in environment variables as parameters when starting a container using &lt;code&gt;docker run&lt;/code&gt; and the &lt;code&gt;-e&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -e MYSQL_USER='my_db_user' IMAGE[:TAG] 

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, you can use an env file, containing a list of environment variables and their values. To do this, create the file and use the &lt;code&gt;--env-file&lt;/code&gt; parameter to pass it in to the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --env-file var_list IMAGE[:TAG]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you're modernizing your application to run it using a cluster manager like Kubernetes, you should further externalize your config from the image, and manage configuration using Kubernetes' built-in &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/"&gt;ConfigMap&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/"&gt;Secrets&lt;/a&gt; objects. This allows you to separate configuration from image manifests, so that you can manage and version it separately from your application. To learn how to externalize configuration using ConfigMaps and Secrets, consult the &lt;a href="https://www.digitalocean.com/community/tutorials/modernizing-applications-for-kubernetes#injecting-configuration-data-with-kubernetes"&gt;ConfigMaps and Secrets section&lt;/a&gt; below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Publish Image to a Registry
&lt;/h3&gt;

&lt;p&gt;Once you've built your application images, to make them available to Kubernetes, you should upload them to a container image registry. Public registries like &lt;a href="https://hub.docker.com"&gt;Docker Hub&lt;/a&gt; host the latest Docker images for popular open source projects like &lt;a href="https://hub.docker.com/_/node/"&gt;Node.js&lt;/a&gt; and &lt;a href="https://hub.docker.com/_/nginx/"&gt;nginx&lt;/a&gt;. Private registries allow you to publish your internal application images, making them available to developers and infrastructure, but not the wider world.&lt;/p&gt;

&lt;p&gt;You can deploy a private registry using your existing infrastructure (e.g. on top of cloud object storage), or optionally use one of several Docker registry products like &lt;a href="https://quay.io/"&gt;Quay.io&lt;/a&gt; or paid Docker Hub plans. These registries can integrate with hosted version control services like GitHub so that when a Dockerfile is updated and pushed, the registry service will automatically pull the new Dockerfile, build the container image, and make the updated image available to your services.&lt;/p&gt;

&lt;p&gt;To exert more control over the building and testing of your container images and their tagging and publishing, you can implement a continuous integration (CI) pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implement a Build Pipeline
&lt;/h3&gt;

&lt;p&gt;Building, testing, publishing, and deploying your images into production manually can be error-prone and does not scale well. To manage builds and continuously publish containers containing your latest code changes to your image registry, you should use a build pipeline.&lt;/p&gt;

&lt;p&gt;Most build pipelines perform the following core functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Watch source code repositories for changes&lt;/li&gt;
&lt;li&gt;Run smoke and unit tests on modified code&lt;/li&gt;
&lt;li&gt;Build container images containing modified code&lt;/li&gt;
&lt;li&gt;Run further integration tests using built container images&lt;/li&gt;
&lt;li&gt;If tests pass, tag and publish images to registry&lt;/li&gt;
&lt;li&gt;(Optional, in continuous deployment setups) Update Kubernetes Deployments and roll out images to staging/production clusters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many paid continuous integration products that have built-in integrations with popular version control services like GitHub and image registries like Docker Hub. An alternative to these products is &lt;a href="https://jenkins.io/"&gt;Jenkins&lt;/a&gt;, a free and open-source build automation server that can be configured to perform all of the functions described above. To learn how to set up a Jenkins continuous integration pipeline, consult &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-continuous-integration-pipelines-in-jenkins-on-ubuntu-16-04"&gt;How To Set Up Continuous Integration Pipelines in Jenkins on Ubuntu 16.04&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implement Container Logging and Monitoring
&lt;/h3&gt;

&lt;p&gt;When working with containers, it's important to think about the logging infrastructure you will use to manage and store logs for all your running and stopped containers. There are multiple container-level patterns you can use for logging, and also multiple Kubernetes-level patterns.&lt;/p&gt;

&lt;p&gt;In Kubernetes, by default containers use the &lt;code&gt;json-file&lt;/code&gt; Docker &lt;a href="https://docs.docker.com/config/containers/logging/configure/"&gt;logging driver&lt;/a&gt;, which captures the stdout and stderr streams and writes them to JSON files on the Node where the container is running. Sometimes logging directly to stderr and stdout may not be enough for your application container, and you may want to pair the app container with a logging &lt;em&gt;sidecar&lt;/em&gt; container in a Kubernetes Pod. This sidecar container can then pick up logs from the filesystem, a local socket, or the systemd journal, granting you a little more flexibility than simply using the stderr and stdout streams. This container can also do some processing and then stream enriched logs to stdout/stderr, or directly to a logging backend. To learn more about Kubernetes logging patterns, consult the Kubernetes logging and monitoring &lt;a href="https://www.digitalocean.com/community/tutorials/modernizing-applications-for-kubernetes#logging-and-monitoring"&gt;section&lt;/a&gt; of this tutorial.&lt;/p&gt;
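&lt;p&gt;As a sketch, a Pod pairing an app container with a logging sidecar via a shared log volume might look something like this (the image names and paths are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: sammy/flask_app:1.0
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-forwarder       # sidecar: picks up and forwards log files
    image: fluent/fluentd:v1.3
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
```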

&lt;p&gt;How your application logs at the container level will depend on its complexity. For simple, single-purpose microservices, logging directly to stdout/stderr and letting Kubernetes pick up these streams is the recommended approach, as you can then leverage the &lt;code&gt;kubectl logs&lt;/code&gt; command to access log streams from your Kubernetes-deployed containers.&lt;/p&gt;

&lt;p&gt;Similar to logging, you should begin thinking about monitoring in a container and cluster-based environment. Docker provides the helpful &lt;code&gt;docker stats&lt;/code&gt; command for grabbing standard metrics like CPU and memory usage for running containers on the host, and exposes even more metrics through the &lt;a href="https://docs.docker.com/develop/sdk/"&gt;Remote REST API&lt;/a&gt;. Additionally, the open-source tool &lt;a href="https://github.com/google/cadvisor"&gt;cAdvisor&lt;/a&gt; (installed on Kubernetes Nodes by default) provides more advanced functionality like historical metric collection, metric data export, and a helpful web UI for sorting through the data.&lt;/p&gt;

&lt;p&gt;However, in a multi-node, multi-container production environment, more complex metrics stacks like &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; and &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt; may help organize and monitor your containers' performance data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;In these sections, we briefly discussed some best practices for building containers, setting up a CI/CD pipeline and image registry, as well as some considerations for increasing observability into your containers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To learn more about optimizing containers for Kubernetes, consult &lt;a href="https://www.digitalocean.com/community/tutorials/building-optimized-containers-for-kubernetes"&gt;Building Optimized Containers for Kubernetes&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;To learn more about CI/CD, consult &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-continuous-integration-delivery-and-deployment"&gt;An Introduction to Continuous Integration, Delivery, and Deployment&lt;/a&gt; and &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-ci-cd-best-practices"&gt;An Introduction to CI/CD Best Practices&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next section, we’ll explore Kubernetes features that allow you to run and scale your containerized app in a cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying on Kubernetes
&lt;/h2&gt;

&lt;p&gt;At this point, you’ve containerized your app and implemented logic to maximize its portability and observability in Cloud Native environments. We’ll now explore Kubernetes features that provide simple interfaces for managing and scaling your apps in a Kubernetes cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Write Deployment and Pod Configuration Files
&lt;/h3&gt;

&lt;p&gt;Once you've containerized your application and published it to a registry, you can deploy it into a Kubernetes cluster using the Pod workload. The smallest deployable unit in a Kubernetes cluster is not a container but a Pod. Pods typically consist of an application container (like a containerized Flask web app), or an app container and any “sidecar” containers that perform some helper function like monitoring or logging. Containers in a Pod share storage resources, a network namespace, and port space. They can communicate with each other using &lt;code&gt;localhost&lt;/code&gt; and can share data using mounted volumes. Additionally, the Pod workload allows you to define &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/"&gt;Init Containers&lt;/a&gt; that run setup scripts or utilities before the main app container begins running.&lt;/p&gt;

&lt;p&gt;Pods are typically rolled out using Deployments, which are Controllers defined by YAML files that declare a particular desired state. For example, an application state could be running three replicas of the Flask web app container and exposing port 8080. Once created, the control plane gradually brings the actual state of the cluster to match the desired state declared in the Deployment by scheduling containers onto Nodes as required. To scale the number of application replicas running in the cluster, say from 3 up to 5, you update the &lt;code&gt;replicas&lt;/code&gt; field of the Deployment configuration file, and then &lt;code&gt;kubectl apply&lt;/code&gt; the new configuration file. Using these configuration files, scaling and deployment operations can all be tracked and versioned using your existing source control services and integrations.&lt;/p&gt;

&lt;p&gt;Here’s a sample Kubernetes Deployment configuration file for a Flask app:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;flask_deployment.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
  labels:
    app: flask-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
      - name: flask
        image: sammy/flask_app:1.0
        ports:
        - containerPort: 8080

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This Deployment launches 3 Pods that run a container called &lt;code&gt;flask&lt;/code&gt; using the &lt;code&gt;sammy/flask_app&lt;/code&gt; image (version &lt;code&gt;1.0&lt;/code&gt;) with port &lt;code&gt;8080&lt;/code&gt; open. The Deployment is called &lt;code&gt;flask-app&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To learn more about Kubernetes Pods and Deployments, consult the &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/"&gt;Pods&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/"&gt;Deployments&lt;/a&gt; sections of the official Kubernetes documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Pod Storage
&lt;/h3&gt;

&lt;p&gt;Kubernetes manages Pod storage using Volumes, Persistent Volumes (PVs) and Persistent Volume Claims (PVCs). Volumes are the Kubernetes abstraction used to manage Pod storage, and support most cloud provider block storage offerings, as well as local storage on the Nodes hosting the running Pods. To see a full list of supported Volume types, consult the Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For example, if your Pod contains two NGINX containers that need to share data between them (say the first, called &lt;code&gt;nginx&lt;/code&gt; serves web pages, and the second, called &lt;code&gt;nginx-sync&lt;/code&gt; fetches the pages from an external location and updates the pages served by the &lt;code&gt;nginx&lt;/code&gt; container), your Pod spec would look something like this (here we use the &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir"&gt;&lt;code&gt;emptyDir&lt;/code&gt;&lt;/a&gt; Volume type):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pod_volume.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nginx-web
      mountPath: /usr/share/nginx/html

  - name: nginx-sync
    image: nginx-sync
    volumeMounts:
    - name: nginx-web
      mountPath: /web-data

  volumes:
  - name: nginx-web
    emptyDir: {}

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We use a &lt;code&gt;volumeMount&lt;/code&gt; for each container, indicating that we'd like to mount the &lt;code&gt;nginx-web&lt;/code&gt; volume containing the web page files at &lt;code&gt;/usr/share/nginx/html&lt;/code&gt; in the &lt;code&gt;nginx&lt;/code&gt; container and at &lt;code&gt;/web-data&lt;/code&gt; in the &lt;code&gt;nginx-sync&lt;/code&gt; container. We also define a &lt;code&gt;volume&lt;/code&gt; called &lt;code&gt;nginx-web&lt;/code&gt; of type &lt;code&gt;emptyDir&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In a similar fashion, you can configure Pod storage using cloud block storage products by modifying the &lt;code&gt;volume&lt;/code&gt; type from &lt;code&gt;emptyDir&lt;/code&gt; to the relevant cloud storage volume type.&lt;/p&gt;

&lt;p&gt;The lifecycle of a Volume is tied to the lifecycle of the Pod, but &lt;em&gt;not&lt;/em&gt; to that of a container. If a container within a Pod dies, the Volume persists and the newly launched container will be able to mount the same Volume and access its data. When a Pod gets restarted or dies, so do its Volumes, although if the Volumes consist of cloud block storage, they will simply be unmounted with data still accessible by future Pods.&lt;/p&gt;

&lt;p&gt;To preserve data across Pod restarts and updates, the PersistentVolume (PV) and PersistentVolumeClaim (PVC) objects must be used.&lt;/p&gt;

&lt;p&gt;PersistentVolumes are abstractions representing pieces of persistent storage like cloud block storage volumes or NFS storage. They are created separately from PersistentVolumeClaims, which are demands for pieces of storage by developers. In their Pod configurations, developers request persistent storage using PVCs, which Kubernetes matches with available PersistentVolumes (if using cloud block storage, Kubernetes can dynamically create PersistentVolumes when PersistentVolumeClaims are created).&lt;/p&gt;
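&lt;p&gt;For example, a PersistentVolumeClaim requesting 5Gi of block storage might look something like this (the claim name and size are illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flask-app-data
spec:
  accessModes:
    - ReadWriteOnce     # mountable read-write by a single Node
  resources:
    requests:
      storage: 5Gi
```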

&lt;p&gt;If your application requires one persistent volume per replica, which is the case with many databases, you should not use Deployments; instead, use the StatefulSet controller, which is designed for apps that require stable network identifiers, stable persistent storage, and ordering guarantees. Deployments should be used for stateless applications, and if you define a PersistentVolumeClaim for use in a Deployment configuration, that PVC will be shared by all the Deployment's replicas.&lt;/p&gt;

&lt;p&gt;To learn more about the StatefulSet controller, consult the Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/"&gt;documentation&lt;/a&gt;. To learn more about PersistentVolumes and PersistentVolume claims, consult the Kubernetes storage &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Injecting Configuration Data with Kubernetes
&lt;/h3&gt;

&lt;p&gt;Similar to Docker, Kubernetes provides the &lt;code&gt;env&lt;/code&gt; and &lt;code&gt;envFrom&lt;/code&gt; fields for setting environment variables in Pod configuration files. Here's a sample snippet from a Pod configuration file that sets the &lt;code&gt;HOSTNAME&lt;/code&gt; environment variable in the running Pod to &lt;code&gt;my_hostname&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;sample_pod.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: HOSTNAME
          value: my_hostname
...

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This allows you to move configuration out of Dockerfiles and into Pod and Deployment configuration files. A key advantage of further externalizing configuration from your Dockerfiles is that you can now modify these Kubernetes workload configurations (say, by changing the &lt;code&gt;HOSTNAME&lt;/code&gt; value to &lt;code&gt;my_hostname_2&lt;/code&gt;) separately from your application container definitions. Once you modify the Pod configuration file, you can then redeploy the Pod using its new environment, while the underlying container image (defined via its Dockerfile) does not need to be rebuilt, tested, and pushed to a repository. You can also version these Pod and Deployment configurations separately from your Dockerfiles, allowing you to quickly detect breaking changes and further separate config issues from application bugs.&lt;/p&gt;

&lt;p&gt;Kubernetes provides another construct for further externalizing and managing configuration data: ConfigMaps and Secrets.&lt;/p&gt;

&lt;h3&gt;
  
  
  ConfigMaps and Secrets
&lt;/h3&gt;

&lt;p&gt;ConfigMaps allow you to save configuration data as objects that you then reference in your Pod and Deployment configuration files, so that you can avoid hardcoding configuration data and reuse it across Pods and Deployments.&lt;/p&gt;

&lt;p&gt;Here's an example, using the Pod config from above. We'll first save the &lt;code&gt;HOSTNAME&lt;/code&gt; environment variable as a ConfigMap, and then reference it in the Pod config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create configmap hostname --from-literal=HOSTNAME=my_host_name

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To reference it from the Pod configuration file, we use the &lt;code&gt;valueFrom&lt;/code&gt; and &lt;code&gt;configMapKeyRef&lt;/code&gt; constructs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;sample_pod_configmap.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: HOSTNAME
          valueFrom:
            configMapKeyRef:
              name: hostname
              key: HOSTNAME
...

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;HOSTNAME&lt;/code&gt; environment variable's value has now been completely externalized from the Pod configuration. We can then update these variables across all Deployments and Pods referencing them, and restart the Pods for the changes to take effect.&lt;/p&gt;

&lt;p&gt;If your applications use configuration files, ConfigMaps additionally allow you to store these files as ConfigMap objects (using the &lt;code&gt;--from-file&lt;/code&gt; flag), which you can then mount into containers as configuration files.&lt;/p&gt;

&lt;p&gt;Secrets provide the same essential functionality as ConfigMaps, but are intended for sensitive data like database credentials; Kubernetes stores Secret values base64-encoded and treats them as a separate object type, so access to them can be controlled independently of ordinary configuration.&lt;/p&gt;
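&lt;p&gt;Keep in mind that base64 is an encoding, not encryption; anyone who can read a Secret object can decode its values, as this quick sketch shows:&lt;/p&gt;

```python
import base64

# Kubernetes stores Secret values base64-encoded; the encoding is
# trivially reversible and provides no confidentiality on its own.
encoded = base64.b64encode(b"my_db_password").decode()
print(encoded)                    # bXlfZGJfcGFzc3dvcmQ=
print(base64.b64decode(encoded))  # b'my_db_password'
```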

&lt;p&gt;To learn more about ConfigMaps and Secrets consult the Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/configuration/"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Services
&lt;/h3&gt;

&lt;p&gt;Once you have your application up and running in Kubernetes, every Pod will be assigned an (internal) IP address, shared by its containers. If one of these Pods is removed or dies, newly started Pods will be assigned different IP addresses.&lt;/p&gt;

&lt;p&gt;For long-running services that expose functionality to internal and/or external clients, you may wish to grant a set of Pods performing the same function (or Deployment) a stable IP address that load balances requests across its containers. You can do this using a Kubernetes Service.&lt;/p&gt;

&lt;p&gt;Kubernetes Services have 4 types, specified by the &lt;code&gt;type&lt;/code&gt; field in the Service configuration file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ClusterIP&lt;/code&gt;: This is the default type, which grants the Service a stable internal IP accessible from anywhere inside of the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NodePort&lt;/code&gt;: This will expose your Service on each Node at a static port, between 30000-32767 by default. When a request hits a Node at its Node IP address and the &lt;code&gt;NodePort&lt;/code&gt; for your service, the request will be load balanced and routed to the application containers for your service.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LoadBalancer&lt;/code&gt;: This will create a load balancer using your cloud provider's load balancing product, and configure a &lt;code&gt;NodePort&lt;/code&gt; and &lt;code&gt;ClusterIP&lt;/code&gt; for your Service to which external requests will be routed.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ExternalName&lt;/code&gt;: This Service type allows you to map a Kubernetes Service to a DNS record. It can be used for accessing external services from your Pods using Kubernetes DNS.&lt;/li&gt;
&lt;/ul&gt;
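&lt;p&gt;For example, an &lt;code&gt;ExternalName&lt;/code&gt; Service mapping an in-cluster name to an external database host (the hostname here is a placeholder) might look like:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: managed-db
spec:
  type: ExternalName
  externalName: db.example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Pods can then connect to &lt;code&gt;managed-db&lt;/code&gt;, and Kubernetes DNS will answer with a CNAME record pointing at &lt;code&gt;db.example.com&lt;/code&gt;.&lt;/p&gt;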

&lt;p&gt;Note that creating a Service of type &lt;code&gt;LoadBalancer&lt;/code&gt; for each Deployment running in your cluster will create a new cloud load balancer for each Service, which can become costly. To manage routing external requests to multiple services using a single load balancer, you can use an Ingress Controller. Ingress Controllers are beyond the scope of this article, but to learn more about them you can consult the Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;documentation&lt;/a&gt;. A popular simple Ingress Controller is the &lt;a href="https://github.com/kubernetes/ingress-nginx"&gt;NGINX Ingress Controller&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here’s a simple Service configuration file for the Flask example used in the Pods and Deployments &lt;a href="https://www.digitalocean.com/community/tutorials/modernizing-applications-for-kubernetes#write-deployment-and-pod-configuration-files"&gt;section&lt;/a&gt; of this guide:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;flask_app_svc.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: flask-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: flask-app
  type: LoadBalancer

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here we expose the &lt;code&gt;flask-app&lt;/code&gt; Deployment using the &lt;code&gt;flask-svc&lt;/code&gt; Service. The &lt;code&gt;LoadBalancer&lt;/code&gt; type provisions a cloud load balancer that routes traffic from load balancer port &lt;code&gt;80&lt;/code&gt; to the exposed container port &lt;code&gt;8080&lt;/code&gt;.&lt;/p&gt;
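&lt;p&gt;Assuming &lt;code&gt;kubectl&lt;/code&gt; is configured to talk to your cluster, you could create this Service and check on its external IP with:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f flask_app_svc.yaml
kubectl get svc flask-svc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;EXTERNAL-IP&lt;/code&gt; column will show &lt;code&gt;&amp;lt;pending&amp;gt;&lt;/code&gt; until your cloud provider finishes provisioning the load balancer.&lt;/p&gt;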

&lt;p&gt;To learn more about Kubernetes Services, consult the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/"&gt;Services&lt;/a&gt; section of the Kubernetes docs.&lt;/p&gt;

&lt;h3&gt;Logging and Monitoring&lt;/h3&gt;

&lt;p&gt;Parsing through individual container and Pod logs using &lt;code&gt;kubectl logs&lt;/code&gt; and &lt;code&gt;docker logs&lt;/code&gt; can get tedious as the number of running applications grows. To help you debug application or cluster issues, you should implement centralized logging. At a high level, this consists of agents running on all the worker nodes that process Pod log files and streams, enrich them with metadata, and forward the logs off to a backend like &lt;a href="https://github.com/elastic/elasticsearch"&gt;Elasticsearch&lt;/a&gt;. From there, log data can be visualized, filtered, and organized using a visualization tool like &lt;a href="https://github.com/elastic/kibana"&gt;Kibana&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the container-level logging section, we discussed the recommended Kubernetes approach of having applications in containers log to the stdout/stderr streams. We also briefly discussed logging sidecar containers that can grant you more flexibility when logging from your application. You could also run logging agents directly in your Pods that capture local log data and forward them directly to your logging backend. Each approach has its pros and cons, and resource utilization tradeoffs (for example, running a logging agent container inside of each Pod can become resource-intensive and quickly overwhelm your logging backend). To learn more about different logging architectures and their tradeoffs, consult the Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In a standard setup, each Node runs a logging agent like &lt;a href="https://www.elastic.co/products/beats/filebeat"&gt;Filebeat&lt;/a&gt; or &lt;a href="https://github.com/fluent/fluentd"&gt;Fluentd&lt;/a&gt; that picks up container logs created by Kubernetes. Recall that Kubernetes creates JSON log files for containers on the Node (in most installations these can be found at &lt;code&gt;/var/lib/docker/containers/&lt;/code&gt;). These should be rotated using a tool like logrotate. The Node logging agent should be run as a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/"&gt;DaemonSet Controller&lt;/a&gt;, a type of Kubernetes Workload that ensures that every Node runs a copy of the DaemonSet Pod. In this case the Pod would contain the logging agent and its configuration, which processes logs from files and directories mounted into the logging DaemonSet Pod.&lt;/p&gt;
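&lt;p&gt;A skeleton of such a DaemonSet follows; the agent image tag is illustrative, and a real deployment would also mount the agent's configuration and typically grant it additional permissions via a ServiceAccount:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent
spec:
  selector:
    matchLabels:
      name: logging-agent
  template:
    metadata:
      labels:
        name: logging-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluentd:v1.7
        volumeMounts:
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;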

&lt;p&gt;Similar to the bottleneck in using &lt;code&gt;kubectl logs&lt;/code&gt; to debug container issues, eventually you may need to consider a more robust option than simply using &lt;code&gt;kubectl top&lt;/code&gt; and the Kubernetes Dashboard to monitor Pod resource usage on your cluster. Cluster and application-level monitoring can be set up using the &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; monitoring system and time-series database, along with the &lt;a href="https://github.com/grafana/grafana"&gt;Grafana&lt;/a&gt; metrics dashboard. Prometheus works using a "pull" model: it periodically scrapes HTTP endpoints (like &lt;code&gt;/metrics/cadvisor&lt;/code&gt; on the Nodes, or your application's &lt;code&gt;/metrics&lt;/code&gt; REST API endpoint) for metric data, which it then processes and stores. This data can then be analyzed and visualized using a Grafana dashboard. Prometheus and Grafana can be launched into a Kubernetes cluster like any other Deployment and Service.&lt;/p&gt;
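&lt;p&gt;As an illustrative fragment (the job name and target are assumptions based on the Flask example above), a static Prometheus scrape configuration pulling from that Service's &lt;code&gt;/metrics&lt;/code&gt; endpoint could look like:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scrape_configs:
- job_name: 'flask-app'
  metrics_path: /metrics
  scrape_interval: 15s
  static_configs:
  - targets: ['flask-svc:80']
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In practice you would more likely use &lt;code&gt;kubernetes_sd_configs&lt;/code&gt; so that Prometheus discovers Pods and Services automatically, rather than listing targets statically.&lt;/p&gt;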

&lt;p&gt;For added resiliency, you may wish to run your logging and monitoring infrastructure on a separate Kubernetes cluster, or to use external logging and metrics services.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Migrating and modernizing an application so that it can run efficiently in a Kubernetes cluster often involves non-trivial planning and architectural work across software and infrastructure. Once implemented, these changes allow service owners to continuously deploy new versions of their apps and easily scale them as necessary, with minimal manual intervention. Steps like externalizing configuration from your app, setting up proper logging and metrics publishing, and configuring health checks allow you to take full advantage of the Cloud Native paradigm that Kubernetes has been designed around. By building portable containers and managing them using Kubernetes objects like Deployments and Services, you can make full use of your available compute infrastructure and development resources.&lt;/p&gt;




&lt;p&gt;&lt;a href="http://creativecommons.org/licenses/by-nc-sa/4.0/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jgdiKbjy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" alt="CC 4.0 License"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This work is licensed under a &lt;a href="http://creativecommons.org/licenses/by-nc-sa/4.0/"&gt;Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>conceptual</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
